
Wednesday, February 3, 2016

Scientists in Germany Take Another Major Step Towards Nuclear Fusion

As reported by Gizmodo: Physicists in Germany have used an experimental nuclear fusion device to produce hydrogen plasma in a process similar to what happens inside the Sun. The test marks an important milestone on the road towards this super-futuristic source of cheap and clean nuclear energy. 

Earlier today in an event attended by German Chancellor Angela Merkel (herself a PhD physicist), researchers from the Max Planck Institute in Greifswald turned on the Wendelstein 7-X stellarator, an experimental nuclear fusion reactor. (Actually, the researchers let Merkel do the honors.) This €400 million ($435 million) stellarator is being used by physicists to test the technical viability of a future fusion reactor.
Unlike nuclear fission, in which the nucleus of an atom is split into smaller parts, nuclear fusion creates a single heavy nucleus from two lighter nuclei. The resulting change in mass produces a massive amount of energy that physicists believe can be harnessed into a viable source of clean energy.
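The energy scale involved can be made concrete with a back-of-envelope calculation. As a rough illustration (using standard tabulated atomic masses, not figures from the W7-X experiment itself), the deuterium-tritium reaction often targeted for power reactors releases energy in proportion to the tiny mass difference between reactants and products, via E = Δm·c²:

```python
# Illustrative only: energy released by deuterium-tritium fusion
# (D + T -> He-4 + n), computed from the mass defect via E = Δm·c².
# Atomic masses in unified atomic mass units (u); 1 u ≈ 931.494 MeV/c².

U_TO_MEV = 931.494      # MeV released per u of mass converted to energy

m_deuterium = 2.014102  # u
m_tritium = 3.016049    # u
m_helium4 = 4.002602    # u
m_neutron = 1.008665    # u

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV

print(f"Mass defect: {mass_defect:.6f} u")
print(f"Energy released: {energy_mev:.1f} MeV")  # ~17.6 MeV per reaction
```

About 17.6 MeV per reaction is millions of times the energy of a typical chemical reaction, which is why so little fuel is needed.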
It’ll likely be decades (if not longer) before true nuclear fusion energy is available, but advocates of the technology say it could replace fossil fuels and conventional nuclear fission reactors. Unlike conventional fission reactors, which produce large amounts of radioactive waste, the by-products from nuclear fusion are deemed safe. 
Image: Max-Planck-Institut für Plasmaphysik / Tino Schulz, Public Relations Department.
Back in December, the same team of researchers fired up the donut-shaped device for the first time, heating a tiny amount of helium. During today’s experiment, a 2-megawatt microwave pulse was used to heat hydrogen gas and convert it into an extremely low-density hydrogen plasma. “With a temperature of 80 million degrees and a lifetime of a quarter of a second, the device’s first hydrogen plasma has completely lived up to our expectations,” said physicist Hans-Stephan Bosch in a press statement.
W7-X isn’t expected to produce any energy, but it will be used to test many of the extreme conditions that future devices will be subjected to in order to generate power. Temperatures within the device could conceivably reach 180 million degrees F (100 million degrees C). 
As noted by John Jelonnek, a physicist at Germany’s Karlsruhe Institute of Technology, in a Guardian article: “It’s a very clean source of power, the cleanest you could possibly wish for. We’re not doing this for us but for our children and grandchildren.”

China Launches New MEO BeiDou Navigation Satellite

As reported by Inside GNSS: China successfully completed its first BeiDou launch of the year, lifting a new-generation satellite into orbit yesterday (February 1, 2016) and adding to the 17 operational spacecraft in the nation’s GNSS (GPS-like) constellation.

The fifth of the new series, the medium-Earth-orbit (MEO) satellite will join its four predecessors in testing inter-satellite crosslinks and a new navigation payload that will set the framework and technical standards for global coverage, according to the Xinhua state news agency.
Designated BDS M3-S, this is the first of two BeiDou satellites scheduled for launch in 2016, according to China Aerospace Science and Technology Corporation. The spacecraft contains a technology demonstrator (identified as a “chip” by program officials) that will, if proven successful, help in the design of smaller, better-integrated, more reliable satellites, BeiDou deputy commander-in-chief Li Guotong said. It also contains a particle detector to assess radiation conditions in the BeiDou constellation's environment.
According to Libin Xiang, commander-in-chief of the BeiDou Navigation Satellite System (BDS) project, the latest satellite is crucial to implementing a transition from the regional system declared operational in December 2011 to a full-fledged system expected to be completed by 2020.
"Our new intersatellite crosslink system, featuring strong disturbance resistance and high-level privacy, is the core technology to compete with other countries' navigation networks. The new satellite will fully verify our technology," said Baojun Lin, the satellite's chief designer.
Lin said the satellite will operate without the help of ground control and broadcast continually, key requirements for navigation services.
According to Xinhua, China plans to expand BeiDou services to most of the countries covered in the Silk Road Economic Belt (also known as the “Belt and Road”) initiative by 2018, a swath of countries stretching through Central Asia, West Asia, the Middle East, and Europe.
The BeiDou Phase III system of which BDS M3-S will be a part will migrate its civil B1 signal from 1561.098 MHz to a frequency centered at 1575.42 MHz – the same as the GPS L1 and Galileo E1 civil signals. The former quadrature phase shift keying (QPSK) modulation will be replaced by a time-multiplexed binary offset carrier (TMBOC) modulation similar to the new civil GPS L1C signal and Galileo's Open Service signal.

Meanwhile, a United Launch Alliance Atlas V 401 will place the final GPS follow-on block satellite (IIF-12) into orbit for the U.S. Air Force on Friday (February 5, 2016) from Cape Canaveral Air Force Station, Florida. A 19-minute launch window opens at 8:38 a.m. EST. On Sunday (February 7, 2016), Russia is scheduled to launch another GLONASS-M satellite from the Plesetsk Cosmodrome north of Moscow.
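The core difference between the old and new modulations can be sketched very simply. The following toy example is not the actual BeiDou B1C/TMBOC signal definition (which also interleaves data and pilot components); it only illustrates the basic binary-offset-carrier idea of multiplying each spreading-code chip by a square-wave subcarrier, which shifts signal power away from the band center compared with plain phase-shift-keyed spreading:

```python
import random

# Illustrative BOC(1,1)-style sketch (not the real BeiDou signal spec):
# each ±1 spreading-code chip is multiplied by a square-wave subcarrier
# with one full cycle (+1 then -1 half-chips) per chip.

random.seed(0)
chips = [random.choice([-1, 1]) for _ in range(8)]  # toy spreading code

SUBCARRIER = [1, 1, -1, -1]  # square wave sampled 4x per chip

# Upsample the code and apply the subcarrier chip by chip
waveform = [c * s for c in chips for s in SUBCARRIER]
assert set(waveform) <= {-1, 1}
print(waveform)
```

Sharing a center frequency and a BOC-family modulation with GPS L1C and Galileo E1 is what makes interoperable multi-constellation receivers practical.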

Monday, February 1, 2016

The Oil Crash is Kicking Off One of the Largest Wealth Transfers in Human History: $3 Trillion per Year

As reported by Yahoo Finance: Economists are still hotly debating whether the oil crash has been a net positive for advanced economies.

Optimists argue that cheap oil is a good thing for consumers and commodity-sensitive businesses, while pessimists point to the hit to energy-related investment and possible spillover into the financial system.
A new note from Francisco Blanch at Bank of America Merrill Lynch, however, puts the oil move into a much bigger perspective, arguing that a sustained price plunge "will push back $3 trillion a year from oil producers to global consumers, setting the stage for one of the largest transfers of wealth in human history."
Blanch and his team already see evidence that the fall in the price of crude is having a positive impact on demand, and say that it could accelerate even further if prices don't pick up. 
Says Blanch: "Alternatively, in a lower oil price scenario, e.g. if prices were to average just $40 over the next five years, which is close to the current forward curve, demand would grow by 1.5 million barrels per day, which is 0.3 above our base case. Finally, at $20 oil, demand would grow by an explosive 1.7 million barrels per day per year on average, 0.5 above the base case, on our estimates."
Meanwhile, in emerging markets, where much of the story of late has been about disappointing economic growth, Blanch still sees huge upside potential in terms of automobile penetration and consumption.
Take China for example, where the strategist sees the oil plunge helping to fuel a boom in SUV sales: "Moreover, the low oil price is encouraging Chinese consumers to buy increasingly larger cars. Sales of SUVs, the heaviest passenger vehicles category, are up 60 percent year-on-year in the last three months, while overall passenger vehicle sales are growing robustly at 22 percent."
And it's not just emerging markets where the impact of cheaper gasoline is being seen. 
After years of stagnation, vehicle miles traveled in the U.S. clearly ticked higher in 2015.
Combine these trends with the decline in, say, Saudi Arabia's foreign exchange reserves, or the stock price of any oil company, and you can see the dramatic wealth shifts now taking place in the world. 

MIT Hyperloop Team Rockets Past Competition

As reported by the Boston Herald: It may seem like so much pie in the sky right now, but a globally embraced dream of creating a levitating, 700-mile-per-hour public transit system is already coming true for enterprising engineers at the Massachusetts Institute of Technology, who learned Saturday that their design for a futuristic bullet train bested 100 others submitted from around the world in the SpaceX Hyperloop competition.

“Wow! We are beyond excited to announce we just won 1st place in the SpaceX Hyperloop competition!!!!!!!!” the MIT Hyperloop team posted on Facebook from the weekend competition at Texas A&M University, along with a video showing them erupting in screams, whistles and applause as their winning entry was announced.

MIT’s team of more than two dozen graduate and undergraduate students will receive $50,000 from Hyperloop Technologies Inc. to build their creation.

Powered by renewable energy, Hyperloop aims to rocket floating passenger pods through elevated tubes at nearly the speed of sound. California-based aerospace company SpaceX, which sponsored the design competition, is planning to start testing human-scale pods on a specially designed track as early as this summer.

Elon Musk, SpaceX’s billionaire brainiac, told the awards ceremony that his inspiration for Hyperloop came from being stuck in Los Angeles traffic and arriving an hour late for a speech.

“I’m starting to think this is really going to happen,” Musk said. “It’s clear that the public and the world wants something new, and it’s clear that you guys are going to bring it to them.”

Friday, January 29, 2016

Army Demonstrates Autonomous Vehicle Capabilities at Detroit Auto Show

As reported by Popular Mechanics: The military is continuing to experiment with autonomous vehicle technologies. Earlier this month, the U.S. Army successfully deployed a fully autonomous ground vehicle by having it flown in by a fully autonomous helicopter. Now the Army has revealed that autonomous driving technology can be used in a number of its trucks and military vehicles.

At the Detroit Auto Show, the U.S. Army Tank Automotive Research, Development, and Engineering Center (TARDEC) showed off one of its autonomous vehicles. In addition to cutting down on required personnel by having a convoy of autonomous vehicles follow one human driver, driverless vehicles could also navigate areas with a high number of IEDs or other hazards without risking human life. A new video from Stars and Stripes takes a look at the new driverless Army trucks. 
Just like Google's driverless cars, the autonomous military vehicles use a LIDAR system to create a three-dimensional world map and navigate around obstacles. The Army's large autonomous trucks are still being developed—they're a little harder to fine tune than the little pods that Google has out on the roads—but finding new ways to perform wartime operations while keeping soldiers out of harm's way is one of the Army's top priorities.
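The "three-dimensional world map" a LIDAR builds can be sketched in miniature. The example below is a hypothetical, heavily simplified 2-D occupancy grid (real systems work in 3-D, with pose estimation, probabilistic cell updates, and vastly denser point clouds), but it shows the core idea: range returns become points, and points mark map cells as occupied obstacles to steer around.

```python
import math

def points_to_grid(points, cell_size=1.0, width=10, height=10):
    """Mark grid cells containing at least one LIDAR return as occupied."""
    grid = [[0] * width for _ in range(height)]
    for x, y in points:
        col, row = int(x // cell_size), int(y // cell_size)
        if 0 <= row < height and 0 <= col < width:
            grid[row][col] = 1  # occupied cell
    return grid

# Hypothetical returns from a single sweep: (range, bearing) -> (x, y)
scan = [(5.0, math.radians(a)) for a in (0, 30, 60)]
points = [(r * math.cos(b), r * math.sin(b)) for r, b in scan]

grid = points_to_grid(points)
occupied = sum(map(sum, grid))
print(f"{occupied} occupied cells")  # one per distinct return here
```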

Droneboarding: The Sport We Should Have Seen Coming

As reported by The Verge: Okay, there's a lot going on here so let's break things down. Is it impractical to use quadcopters to tow a toddler on a snowboard? Yes, very. Did the battery on the drone in this video last long? Probably not. Did the kid go very fast? He's barely moving. Is this a great idea? Of course, of course, of course.
Using a consumer grade drone to tow people on snow (or on water?) makes about as much sense as strapping a giant fan to someone's back while they're paragliding — but people do that and it looks like great fun. And while drones may be pretty weak with batteries that only last tens of minutes, they are at least getting stronger, and we can definitely imagine a team of drones pulling a fully-grown human. Maybe they could do it on a sled? Like huskies? Hunting down rogue robots through the future-frozen wastelands of middle America after the coming Ice Age / AI revolt??
Okay, that's too far, but this is still pretty alright.

Thursday, January 28, 2016

The Artificial Intelligence Technology That Solved 'Go' Will Be Accessible Through Your Smartphone

As reported by TechRepublic: Google has developed a machine learning system capable of mastering Go - an ancient Chinese game whose complexity stumped computers for decades.
While IBM's Deep Blue computer mastered chess in the mid-1990s, and in more recent years a system built by Google's DeepMind lab has beaten humans at classic '70s arcade games, Go was a different matter.
Go has about 200 possible moves per turn, compared to 20 per turn in chess. Over the course of a game of Go there are so many possible moves that searching through each of them to identify the best play is too costly from a computational point of view.
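A rough order-of-magnitude estimate shows why brute force fails. Using the article's branching factors and assumed typical game lengths (roughly 150 moves for Go and 80 for chess - the lengths are this sketch's assumption, not figures from the article), the game trees compare like this:

```python
import math

# Back-of-envelope game-tree sizes: branching_factor ** game_length.
# Branching factors from the article (~200 for Go, ~20 for chess);
# typical game lengths (~150 and ~80 moves) are assumed here.
ATOMS_EXPONENT = 80  # observable universe holds roughly 10^80 atoms

go_exponent = 150 * math.log10(200)   # ≈ 345, i.e. ~10^345 positions
chess_exponent = 80 * math.log10(20)  # ≈ 104, i.e. ~10^104 positions

print(f"Go tree ~10^{go_exponent:.0f}, chess tree ~10^{chess_exponent:.0f}")
assert go_exponent > ATOMS_EXPONENT   # far beyond exhaustive search
```

Even the chess tree dwarfs the atom count; the Go tree is hundreds of orders of magnitude larger still, which is why AlphaGo needed learned heuristics rather than raw search.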
Now a system developed by Google DeepMind has beaten European Go champion and elite player Fan Hui. Rather than being programmed in how to play the game, the AlphaGo system learned how to do so using two deep neural networks and an advanced tree search.
Go is typically played on a 19-by-19-square board and sees players attempt to capture empty areas and surround an opponent's pieces. To teach the system how to play the game, moves from 30 million Go games played by human experts were fed into AlphaGo's neural networks. The system then used reinforcement learning to work out the type of moves that were most likely to succeed, based on these past matches. This approach allows AlphaGo to restrict the number of possible moves it needs to search through during a game - making the process more manageable.
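The pruning idea described above can be sketched in a few lines. This toy example is not AlphaGo's actual architecture - the move names and network scores below are hypothetical - but it shows the mechanism: a policy network assigns each candidate move a probability, and the tree search spends its budget only on the highest-probability moves instead of all ~200 legal ones.

```python
import math

def softmax(scores):
    """Convert raw network scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_moves(moves, scores, k=3):
    """Keep only the k moves the (hypothetical) policy rates highest."""
    probs = softmax(scores)
    ranked = sorted(zip(moves, probs), key=lambda mp: mp[1], reverse=True)
    return ranked[:k]

moves = ["D4", "Q16", "C3", "K10", "R4"]  # hypothetical candidate points
scores = [2.1, 1.7, -0.5, 0.9, -1.2]      # hypothetical network outputs

for move, prob in top_k_moves(moves, scores):
    print(f"{move}: {prob:.2f}")
```

Restricting the search to a handful of plausible moves per position is what collapses an intractable tree into one a computer can explore in real time.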
DeepMind CEO Demis Hassabis described Go as "probably the most complex game that humans play. There's more configurations of the board than there are atoms in the universe."
It was that complexity that meant the game had been so difficult for machines to master, said DeepMind's David Silver. "In the game of Go we need this amazingly complex intuitive machinery, which people previously thought was only available in the human brain, to even have the right idea of who's ahead and what the right move is."
Google has suggested that the approach used by AlphaGo to learn how to master Go could be extended to solving more weighty problems, such as climate change modelling, as well as to improving Google's interactions with users of its services.
For instance, DeepMind's Silver suggests the technology could help personalize healthcare by using a similar reinforcement learning technique to understand which treatments would "lead to the best outcomes for individual patients based on their particular track record and history".

 More significantly, Hassabis sees the achievement as progress towards an even grander goal, of building an AI with the same general capabilities and understanding as humans.

"Most games are fun and were designed because they're microcosms of some aspect of life. They might be slightly constrained or simplified in some way but that makes them the perfect stepping stone towards building general artificial intelligence."
Similar AI initiatives are underway at tech giants across the world, with Facebook recently revealing its deep learning system's ability to recognize people and things in images and to predict real-world outcomes, such as when a tower of blocks will topple.

Why Google is pursuing narrow, not general, AI

Simon Stringer, director of the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, said that AlphaGo and other deep learning systems are good at specific tasks - be that spotting objects or animals in photos or mastering a game. But these systems work very differently from the human brain and shouldn't be viewed as representing progress towards developing a general, human-like intelligence - which he believes requires an approach guided by biology.
"If you want to solve consciousness you're not going to solve it using the sorts of algorithms they're using," he said.
"We all want to get to the moon. They've managed to get somewhere up this stepladder, ahead of us, but we're only going to get there by building a rocket in the long term.
"They will certainly develop useful algorithms with various applications but there will be a whole range of applications that we're really interested in that they will not succeed at by going down that route."
In the case of DeepMind, Stringer says the reinforcement learning approach used to teach systems to play classic arcade games and Go has limitations compared to how animals and humans acquire knowledge about the world.
While these reinforcement learning algorithms can learn to map which actions lead to the best outcomes, they are "model-free", meaning the system "knows nothing about its world".
That approach is very different to how a rat's brain enables it to navigate a maze, he said.
"It was shown over half a century ago that what rats do is learn about the structure of their environment, they learn about the spatial structure and the causal relations in their world and then, when they want to get from A to B, they juggle that information to create a novel sequence of steps to get to that reward."
When you teach a system using model-free reinforcement learning, Stringer says it's "behaviorally very limiting".
"As the environment changes, for example one route is blocked off, the system doesn't know anything about its world so it can't say 'This path is blocked, I'm going to take the next shortest one'. It can't adapt but rats can."
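The "model-free" point can be made concrete with the textbook tabular Q-learning update (a standard example, not DeepMind's specific code): the agent updates its value table from a single experienced transition (state, action, reward, next state) and stores no model of how the world works, so when the environment changes, nothing in the table lets it re-plan - it must relearn from new experience.

```python
from collections import defaultdict

# Minimal tabular Q-learning update - illustrative, not DeepMind's code.
ALPHA, GAMMA = 0.5, 0.9            # learning rate, discount factor
ACTIONS = ["up", "down", "left", "right"]
Q = defaultdict(float)             # Q[(state, action)], default 0.0

def q_update(s, a, r, s_next):
    """Update Q(s, a) from one transition; no transition model is kept."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# One experienced transition: moving right from cell (0, 0) yields +1
q_update((0, 0), "right", 1.0, (0, 1))
print(Q[((0, 0), "right")])  # 0.5 * (1.0 + 0.9 * 0 - 0) = 0.5
```

Note that the update touches only the experienced state-action pair; a rat-style model of "which paths connect to which" never exists to be re-planned over.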
Similarly, Google's announcement a few years back that it had trained a neural network to spot cats in images doesn't represent a step towards developing a human-like vision system.
"When we look at a cat, we're not just aware there's a cat in the image, we see all of the millions of visual features that make up that cat and how they're related to each other. In other words our visual experience is much richer than one of these deep learning architectures, which simply tells you whether there's a particular kind of feature in an image."
In particular, such systems lack the human ability to bind features together - he said - to comprehensively understand how features in an image are related to one another. Deep learning neural networks also generally don't model biological systems that appear to play a key role in how humans assign meaning to the world. These models typically exclude, for example, feedback in the brain's visual cortex and the precise timings in the electrical pulses between neurons, he said, adding that the centre in Oxford had developed concrete theories about the importance of these features in the visual cortex.
"We brought all of those elements together. At the very least it gives us a deep insight into what is so special about human vision that hasn't been captured in artificial vision systems yet."
This biologically-inspired approach is very different to that taken by DeepMind but Stringer believes it is necessary to have a chance of one day cracking general artificial intelligence.
"If you want to solve consciousness you're not going to solve it using the sorts of algorithms they're using."
The downside is that Stringer believes the ultimate payoff for his research will be a long time coming, a factor he thinks has driven DeepMind's decision to focus on narrow AI that could be applicable in the near-future.
"I have to admit, I'm always a bit surprised, given the resources that DeepMind have, why they don't devote more resources to actually trying to recreate the dynamics of brain function, and I think it's because when you're trying to raise funding you need to produce jam today, you need these algorithms to work quickly otherwise that tap gets turned off."
With that in mind, Google recently announced that it will use Movidius's processors to power its advanced neural computation engine on mobile devices and in turn help the chip maker with its neural network technology.
Google plans to use Movidius's flagship MA2450 chip, touted as the only commercial solution on the market today that can perform complex neural network computations in "ultra-compact" form factors.
Stringer says: "My aim is to produce the first prototypical conscious systems, something very simple, somewhere between a mouse and a rat, within the next 20 to 30 years."
The DeepMind software that beat Go champion Hui, in a match that took place last October, was running on Google Cloud Platform and reportedly distributed across about 170 GPUs (graphics processing units) and 1,200 CPUs (central processing units).
Google has also been experimenting with its Cloud Vision technology, through a mobile-accessible API.
The next major challenge for Google's AlphaGo will come in March, when it will play the world's reigning Go champion Lee Sedol.
DeepMind's Silver is confident AlphaGo has what it takes to beat all comers, at least in the long run.
"A human can perhaps play 1,000 games a year, AlphaGo can play through millions of games every single day. It's at least conceivable that as a result AlphaGo could, given enough processing, given enough training, given enough search power, reach a level that's beyond any human."