Search This Blog

Wednesday, March 9, 2016

Google's DeepMind Beats Go Champion in Historic Moment for Artificial Intelligence

As reported by The Telegraph: A computer program has beaten the world champion of one of civilisation's oldest board games for the first time in history.

Lee Se-dol, a 33-year-old South Korean, resigned the first of five games in the fiendishly complex strategy contest against the AlphaGo program, built by the Google-owned British company DeepMind.

The game, which lasted about three and a half hours, was officially declared a win for AlphaGo in Seoul today. Commentators called it a "superb" game that would be studied for years to come.

The breakthrough is seen as a watershed moment for artificial intelligence, a milestone potentially more significant than IBM's Deep Blue defeating the world chess champion Garry Kasparov in 1997. Go takes a lifetime to master and, unlike chess, a computer cannot play it by simply assessing all possible moves; it must rely on something akin to intuition.


"#AlphaGo WINS!!!! We landed it on the moon. So proud of the team!! Respect to the amazing Lee Sedol too" - Demis Hassabis (@demishassabis), March 9, 2016

"Well done #AlphaGo!! Fantastic game from Lee Sedol. Four more games, but indubitably a new milestone has been reached in AI research today." - Edward Grefenstette (@egrefen), March 9, 2016


The game involves two players placing black and white stones on a 19-by-19 grid. It is said to have more possible variations than there are atoms in the universe.

The AlphaGo program honed its play by analyzing data from 100,000 professional human games and by playing itself some 30 million times.
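
To make that training recipe concrete, here is a minimal sketch of its supervised half: a "policy" model that learns to predict an expert's next move from board positions. The shapes, training loop, and random stand-in data are illustrative assumptions, not DeepMind's actual code or architecture.

```python
# Minimal sketch of the "learn moves from expert games" idea behind a
# Go policy model. Everything here (shapes, random stand-in data) is an
# illustrative assumption, not DeepMind's code.
import numpy as np

BOARD = 19 * 19  # 361 points on the grid
rng = np.random.default_rng(0)

# Stand-in for (position, expert move) pairs extracted from game records:
# each position is a board encoded as -1/0/+1, each label is the move played.
positions = rng.integers(-1, 2, size=(1000, BOARD)).astype(float)
expert_moves = rng.integers(0, BOARD, size=1000)

# A single softmax layer modelling P(move | position).
W = np.zeros((BOARD, BOARD))

def predict(x):
    logits = x @ W
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=-1, keepdims=True)

# Plain gradient descent on cross-entropy: nudge the model toward
# assigning higher probability to the move the professional chose.
for _ in range(50):
    p = predict(positions)
    p[np.arange(len(expert_moves)), expert_moves] -= 1.0
    W -= 0.01 * positions.T @ p / len(positions)

print("most likely move for first position:", predict(positions[:1]).argmax())
```

A real system replaces the softmax layer with a deep convolutional network and follows this supervised stage with self-play reinforcement learning, as the article describes.
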
Mr Lee, who has been a professional Go player since the age of 12 and has won 18 international titles, said at a pre-game press conference: “It would be a computer’s victory if it wins even one game.”

“I believe human intuition and human senses are too advanced for artificial intelligence to catch up. I doubt how far AlphaGo can mimic such things.”

After the game he admitted that he was "shocked".


"I admit I am in shock, I did not think I would lose. I couldn't foresee that AlphaGo would play in such a perfect manner. I in turn would like to express my respect to the team who developed this amazing program," he said.

Four more games will be played over the course of this week, although AlphaGo would only have to win two of those to be crowned the victor.

What is Go?

Go is a 3,000-year-old Chinese board game, making it probably the oldest game still played in its original form. Its name literally means "encircling game", although the game goes by different names in China, Korea and Japan: Weiqi in Chinese, Baduk in Korean, and Go in Japanese.

How do you play?

Each game has two players, who alternately place black or white stones on the 19 x 19 grid on the board. The objective is to surround territory - like two people dividing up a map and trying to draw borders. You score by the amount of territory (empty points) you surround, plus any stones you capture.
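
For the curious, territory counting can be sketched as a flood fill: credit each empty region to a colour if stones of only that colour border it. This is a deliberately simplified toy (it ignores captures, komi and life-and-death judgment), and the tiny board is made up for illustration.

```python
# Toy territory count for a tiny Go position: flood-fill each empty
# region; it scores for a colour only if that colour alone borders it.
# Simplified on purpose: no captures, komi, or life-and-death judgment.
from collections import deque

board = [
    "bb.w",
    "b.bw",
    "bbww",
    ".bw.",
]
SIZE = len(board)

def territory():
    seen = set()
    score = {"b": 0, "w": 0}
    for r in range(SIZE):
        for c in range(SIZE):
            if board[r][c] != "." or (r, c) in seen:
                continue
            region, borders, queue = [], set(), deque([(r, c)])
            seen.add((r, c))
            while queue:
                y, x = queue.popleft()
                region.append((y, x))
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < SIZE and 0 <= nx < SIZE:
                        cell = board[ny][nx]
                        if cell == "." and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                        elif cell != ".":
                            borders.add(cell)
            if len(borders) == 1:  # surrounded by one colour only
                score[borders.pop()] += len(region)
    return score

print(territory())  # {'b': 2, 'w': 1} for this position
```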

Where did it come from?

The game is thought to have been invented by an ancient Chinese emperor in order to teach his son political strategy. It was considered one of the four arts of the Chinese scholar, along with calligraphy, painting and playing a musical instrument.

How hard is it really?

Despite the relatively simple rules, the game is devilishly complex in how it plays out. It is primarily a game of strategy and imagination, and the number of possible games is vast: roughly 10^761, compared with about 10^120 in chess.

Defeating a professional human player at Go has been seen as one of the "holy grails" of artificial intelligence research because of that complexity.
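
A quick back-of-the-envelope calculation shows why exhaustive search is hopeless. Using the commonly cited rough estimates for branching factor and game length (assumptions, not exact counts), Python's arbitrary-precision integers make the comparison easy:

```python
# Back-of-the-envelope on why brute force fails: game-tree size grows
# as (branching factor) ** (game length). The figures below are the
# commonly cited rough estimates, not exact counts.
from math import log10

atoms_in_universe = 10 ** 80  # common order-of-magnitude estimate
chess = 35 ** 80              # ~35 legal moves per turn, ~80 turns
go = 250 ** 150               # ~250 legal moves per turn, ~150 turns

print(f"chess game tree ~ 10^{log10(chess):.0f}")  # ~10^124
print(f"go game tree    ~ 10^{log10(go):.0f}")     # ~10^360
print("go dwarfs the atom count:", go > atoms_in_universe ** 4)  # True
```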

Tuesday, March 8, 2016

BMW's Vision Next 100 Is A Wild Shapeshifter From The 22nd Century

As reported by Jalopnik: I have to hand it to BMW. In celebrating its 100th birthday, the brand could have easily cranked out some retro roadster or sedan concept. As cool as that would have been, I like what it did instead: looking to the far future of cars and driving. Meet the BMW Vision Next 100 Concept.

Starting with a special event in Munich today, BMW is unveiling the first of four concepts, one for each of its brands (BMW, Mini, Rolls-Royce, and its motorcycle division), that showcase some wild dreams of tomorrow. Those cars will be shown at global events later this year, but the BMW concept is here now.
So what the hell is this thing? Like the cars of the future, for better or worse, it can switch between autonomous and human-driven modes (at least human driving remains an option). Those modes are called Ease and Boost, respectively. The interior controls transform to meet the needs of each mode.
Boost is interesting because it takes the whole “Ultimate Driving Machine” thing to the next level by using augmented reality to help the driver become the best driver they can be. The car shows the ideal racing line, steering points and speeds, BMW says. In Ease, the car is completely self-driving.
Then there’s the “skin” of the car. As Top Gear explains, inside, the triangular “scales” turn red to warn the driver of upcoming hazards, and as the front wheels turn, the bodywork wrapped around them stretches and contorts.
I don’t know how much better that would be than conventional wheels and tires, but it’s neat to look at and think about.
Obviously, it’s not meant for production—a lot of this technology has yet to be invented. But as tends to happen with BMW concepts, it could preview some future design. It’s easy to imagine this sleek four-door setup in a BMW i5 or something similar.
I’m intrigued by it. Can’t wait to see the other 100th birthday concepts, too.

Monday, March 7, 2016

UK to Test Self-Driving Trucks Later This Year

As reported by Engadget: Later this year, the UK will open up its motorways to self-driving trucks under new plans to speed up deliveries and cut traffic congestion. The Times reports that Chancellor George Osborne will confirm funding for the project during this month's budget announcement. The trial could see convoys of up to 10 autonomous trucks -- or lorries, as Brits call them -- driving a few meters apart, helping Britain position itself as one of the leading proponents of self-driving vehicles.

According to reports, a stretch of the M6 motorway near Carlisle has been touted as a possible testing ground. On this quieter part of the UK's major road network, a driver can lead a "platoon" of autonomous trucks without having to navigate various entry and exit points.
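
To give a feel for what "driving a few meters apart" involves, here is a toy simulation of gap-keeping in a platoon: each follower adjusts its speed to hold a fixed headway to the truck ahead. The controller and all constants are illustrative assumptions, not any vendor's actual system.

```python
# Toy platoon follower model: each truck holds a target gap to the one
# ahead using a proportional-plus-damping rule. Constants are made up.
TARGET_GAP_M = 8.0   # "a few meters apart"
KP, KD = 0.5, 1.0    # gains on gap error and relative speed
DT = 0.1             # control step in seconds

pos = [100.0, 90.0, 80.0, 70.0]   # lead truck plus three followers (m)
vel = [25.0, 25.0, 25.0, 25.0]    # speeds (m/s), roughly 90 km/h

for step in range(600):           # simulate 60 seconds
    if step == 100:
        vel[0] = 22.0             # the human lead driver slows down
    for i in range(1, len(pos)):  # followers react automatically
        gap = pos[i - 1] - pos[i]
        accel = KP * (gap - TARGET_GAP_M) + KD * (vel[i - 1] - vel[i])
        vel[i] += accel * DT
    for i in range(len(pos)):
        pos[i] += vel[i] * DT

gaps = [round(pos[i - 1] - pos[i], 1) for i in range(1, len(pos))]
print("gaps after 60 s:", gaps)   # settles near the 8 m target
```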

Although it's not known which vehicles will be tested on British roads, Daimler's autonomous truck is likely to be a frontrunner. The company has already driven an augmented Mercedes-Benz Actros down Germany's Autobahn 8 and also received the green light to test them on US roads.

The UK government is already putting the finishing touches on stretches of smart roads. Jaguar Land Rover, Huawei and Vodafone have joined various UK universities to test a number of connected and self-driving car technologies, including LTE, Wi-Fi, LTE-V and DSRC. Another project in West Yorkshire uses infrared cameras to monitor traffic levels and introduce variable speed limits to help keep vehicles moving.

The Department for Transport believes the new test "has the potential to bring major improvements to journeys and the UK" and save fuel in the process. We'll learn more when George Osborne brings his red briefcase to the House of Commons on March 16th.

Friday, March 4, 2016

SpaceX Launches Satellite, but Doesn't Stick the Landing on the Drone Ship

Today at 6:35 p.m. EST, SpaceX hoped, at last, to make its fifth attempt to launch, and then land, its Falcon 9 rocket on an at-sea platform. The launch had been delayed for a multitude of reasons over the last nine days, including bad weather, heavy winds, and even a boat roaming into a safety zone.

However, the landing was not to be.

Launching from Cape Canaveral Air Force Station, Florida, the rocket was set to deliver a commercial satellite into orbit. Shortly after liftoff, the first stage of the rocket was to automatically attempt a landing on a so-called "drone ship" at sea, which SpaceX has named "Of Course I Still Love You."
The company’s four previous attempts to land the Falcon 9 at sea have ended without success: some in spectacular explosions, some in oh-so-close misses, and one mission in which the rocket blew up while still ascending. In January, the first stage made it back to the drone ship, but exploded when it tipped over on the deck after one of its landing legs broke on impact.
In December, SpaceX did successfully return the Falcon 9 first stage to Earth for the first time. But the company's ultimate plan is to be able to land it both on land and at sea, giving it maximum flexibility in the future.
About 10 minutes after launch, the first stage was to attempt to set down upright on the deck of "Of Course I Still Love You," a 100-foot-by-300-foot unmanned floating platform currently off the coast of Florida. The rocket guides itself to the barge using GPS.
Those hoping for a successful landing, however, should temper their expectations. SpaceX said in a mission description (PDF) published ahead of time that because of the launch’s specific profile, "a successful landing is not expected."
When Elon Musk’s company eventually does complete an at-sea landing of the first stage, it will secure a key element of a future of affordable launches.
"SpaceX believes a fully and rapidly reusable rocket is the pivotal breakthrough needed to substantially reduce the cost of space access," the company says on itswebsite. "The majority of the launch cost comes from building the rocket, which flies only once. Compare that to a commercial airliner—each new plane costs about the same as Falcon 9, but can fly multiple times per day, and conduct tens of thousands of flights over its lifetime. Following the commercial model, a rapidly reusable space launch vehicle could reduce the cost of traveling to space by a hundredfold."
Today's mission, of course, also has a scientific purpose beyond returning the rocket home. The launch is meant to deliver the SES-9 commercial communications satellite for SES, a global satellite company, to a geostationary transfer orbit (GTO). SES clients, who receive satellite-based communications from the company, include Internet service providers, broadcasters, business and governmental organizations, and mobile and fixed network operators. The company has a fleet of more than 50 geostationary satellites.
"SES-9 is the largest satellite dedicated to serving the Asia-Pacific region for SES," SpaceX wrote in the mission description. "With its payload of 81 high-powered Ku-band transponder equivalents, SES-9 will be the 7th SES satellite providing unparalleled coverage to over 20 countries in the region."

The new satellite will be co-located with SES-7.

The satellite launch itself was successful, though the satellite will need some additional time to reach its final orbit.

Monday, February 29, 2016

Google’s PlaNet AI Can Figure Out Where a Picture Was Taken Just by Looking at it

As reported by Android Authority: Google’s deep-learning experiments have produced a wide array of results both practical and fascinating. The newest AI endeavor can look at a picture and guess where it was taken by comparing it to millions of other pictures at once. It’s called PlaNet, and it’s heralding an era in which photos won’t need geotags for their photographer’s location to be pinpointed.

The premise is fairly simple and easy to grasp. If you put a picture of the Statue of Liberty in front of someone, they would most likely guess correctly that the photo was taken in New York City. We identify locations based on landmarks all the time, and this program does essentially the same thing. However, it’s able to identify locations in the absence of obvious landmarks by comparing and contrasting the photo with a massive database of pictures taken all around the world.

The technology is by no means perfect, or even consistent. Team lead Tobias Weyand says that PlaNet can pinpoint a photo’s location with street-level accuracy 3.6 percent of the time. At city level, accuracy rises to 10.1 percent, and it gets the country right 28.4 percent of the time. It can guess the continent almost half the time, with 48 percent accuracy.
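
Those street/city/country/continent figures suggest an evaluation along these lines: measure the great-circle error between the guessed and true coordinates, then bucket it by distance. The sketch below is a guess at how such scoring could work; the thresholds are assumptions, not Google's published values.

```python
# How "street / city / country / continent" accuracy can be scored:
# compute the great-circle error between guessed and true coordinates
# and bucket it by distance thresholds. Thresholds here are assumptions.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

LEVELS = [("street", 1), ("city", 25), ("country", 750), ("continent", 2500)]

def accuracy_level(guess, truth):
    err = haversine_km(*guess, *truth)
    for name, limit_km in LEVELS:
        if err <= limit_km:
            return name
    return "miss"

# A guess in midtown Manhattan vs the Statue of Liberty: ~9 km off,
# so it counts as city-level accuracy.
print(accuracy_level((40.7580, -73.9855), (40.6892, -74.0445)))
```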

That may sound pathetic, but the point isn’t that it’s perfect. The point is that it’s better at doing this than human beings. To see how well you fare at guessing your location, give Geoguessr a whirl. This little online game will drop you in a random location and let you stroll around and try to figure out where you are. It’s pretty tough, and PlaNet isn’t given the luxury of a stroll when trying to suss out these photos’ locations.
To showcase this tech’s ability, researchers pitted the program against a group of well-traveled human beings. PlaNet was able to out-guess human players in 56 percent of guessing rounds. Although that seems like a narrow victory, researchers pointed out that PlaNet had a “localization error rate” less than half that of its human competitors.
Google hasn’t revealed how it plans to use this tech or how its development will be pushed forward, but it does have some interesting implications.

Self-Driving Cars Could Mean the End of Parking Spaces, and That’s Great for Cities

As reported by Science Alert: We’re always being told how self-driving cars will reward us with almost unimaginable benefits when they finally hit the streets. Aside from the sheer convenience of being chauffeured everywhere by artificial intelligence (AI), there’s the safety factor of not having error-prone humans behind the wheel, not to mention how environmentally friendly driverless electric vehicles could be.

But there’s also another advantage to not driving our own vehicles around, and it’s one that could have a vast impact on the look, feel, and function of the cities that we live in: parking. Put simply, if we’re not driving our own vehicles to and from destinations any more, we won’t need to park idle vehicles on public streets or in car parks – something that could radically change the vibe of congested urban spaces.

"The biggest impact is going to be on parking. We aren’t going to need it, definitely not in the places we have it now," Alain L. Kornhauser, a researcher in autonomous vehicles at Princeton University, told Patrick Sisson at Curbed. "Having parking wedded or close to where people spend time, that’s going to be a thing of the past. If I go to a football game, my car doesn’t need to stay with me. If I’m at the office, it doesn’t need to be there. The current shopping center with the sea of parking around it, that’s dead."
While the extreme case of totally empty car parks and city streets with no stationary vehicles on them would probably require people to fully let go of personal car ownership – something many people won’t feel comfortable doing, in the near future at least – even moderate uptake of self-driving vehicles would constitute an improvement to clogged urban real estate jam-packed with stationary metal and rubber.
"An average vehicle in the US is parked for a staggering 95 percent of the time," Carlo Ratti, director of the Senseable City laboratory at the Massachusetts Institute of Technology (MIT), told Curbed. "Car sharing is already reducing the need for parking spaces: it has been estimated that every shared car removes between 10 and 30 privately owned cars from the street."
"Self-driving vehicles will reinforce this trend and promise to have a dramatic impact on urban life, because they will blur the distinction between private and public modes of transportation," he added. "‘Your’ car could give you a lift to work in the morning and then, rather than sitting idle in a parking lot, give a lift to someone else in your family – or, for that matter, to anyone else in your neighborhood, social-media community, or city."
If these kinds of predictions turn out to be accurate, cleared road areas no longer used for parked cars could be put to all sorts of uses, substantially adding to the footprint of city spaces, and letting people reclaim public territory once lost to machines.
"In this environment, you don’t need to park your car, it’ll park by itself (possibly while recharging), so you can think about recapturing the space from the front of one building to the front of another building," said Gerard Tierney of US-based architecture and design firm, Perkins+Will. "It does become a pedestrian-dominated environment, where these vehicles would need to take a more subsidiary role. We would see a huge increase in the amount of space given up to the public realm and a huge increase in the width of sidewalks, bike lanes, and space for any other kind of alternate transportation."
In all likelihood, we won’t see these kinds of changes happening in our cities for many years to come, with self-driving vehicles only just beginning to be considered by road and legal authorities. But it’s exciting to think of the tangential benefits this much-hyped technology could have in all sorts of fringe areas. We can’t wait to see what else the future of driving turns up.

Thursday, February 25, 2016

Facebook can Map More of Earth in a Week than We Have in History

As reported by New Scientist: We just learned that Facebook’s artificial-intelligence software can probably map more in a week than humanity has mapped over our entire history.

In a blog post, the social network announced that its AI system took two weeks to build a map covering 4 per cent of our planet's surface, which amounts to 14 per cent of Earth's land: 21.6 million square kilometres of photographs taken from space, digested and traced into a digital representation of the roads, buildings and settlements they show. And Facebook says it can do it better and faster, potentially mapping the entire Earth in less than a week.
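
Those percentages check out against standard reference figures for Earth's surface and land area (the constants below are the usual textbook values, not numbers from Facebook's post):

```python
# Sanity check of the article's figures using standard reference areas:
# ~510M km^2 of total Earth surface and ~149M km^2 of land.
EARTH_SURFACE_KM2 = 510e6
LAND_SURFACE_KM2 = 149e6
MAPPED_KM2 = 21.6e6

print(f"share of planet: {MAPPED_KM2 / EARTH_SURFACE_KM2:.1%}")  # ~4.2%
print(f"share of land:   {MAPPED_KM2 / LAND_SURFACE_KM2:.1%}")   # ~14.5%
```
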
This is the starkest example I’ve seen so far of the most important phenomenon in technology – computers doing human work really fast. It’s going to change the way we work forever, and will have massive implications for how we acquire knowledge, cooperate on large projects and even understand the world.
The stated goal of Facebook’s data-science team is to build maps to help the social network plan how to deliver internet to people who are currently offline. It’s a dubious starting point, but whatever you think about Facebook’s internet colonialism, the company’s drones won’t be able to beam Wi-Fi to the disconnected until they know where they are.
Fast learner
The model was able to map 20 different countries after being trained on just 8000 human-labelled satellite photos from a single nation. This is mind-boggling – and Facebook’s data-science team wasn’t even trying to go fast.
The company says it has now improved the process to the point where it could do the same mapping in a few hours. Assuming it had the photographs, it could map Earth in about six days. That’s something that humanity still hasn’t managed to do.
“We processed 14.6 billion images with our convolutional neural nets, typically running on thousands of servers simultaneously,” said Facebook in its blog post.
Using its AI, Facebook aped how humans make maps in the 21st century. One way maps are now made is through a project called OpenStreetMap, which uses volunteer labour to trace satellite photographs by hand, picking out roads and houses. The resulting maps have been used all over the world, often for disaster response, with the project able to build maps of an entirely unmapped region in a few days. Facebook's AI can do the same in seconds.
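
Conceptually, the AI version of that tracing looks something like the sketch below: run a per-pixel classifier over satellite tiles and keep a mask of the pixels it labels as structures. The model here is a random stub standing in for a trained convolutional net; every name and number is an illustrative assumption.

```python
# Sketch of AI map tracing: run a per-pixel classifier over satellite
# tiles and keep a mask of "built" pixels. The model is a stub (random
# scores); a real system would use a trained convolutional network.
import numpy as np

TILE = 64  # pixels per tile edge, an arbitrary choice
rng = np.random.default_rng(42)

def predict_built_probability(tile):
    """Stand-in for a convolutional net's per-pixel 'building' score."""
    return rng.random(tile.shape[:2])

def trace_tile(tile, threshold=0.9):
    """Return a boolean mask of pixels the model labels as structures."""
    return predict_built_probability(tile) >= threshold

tiles = [rng.random((TILE, TILE, 3)) for _ in range(4)]  # fake imagery
for i, tile in enumerate(tiles):
    mask = trace_tile(tile)
    print(f"tile {i}: {mask.mean():.1%} of pixels flagged as built-up")
```

Because each tile is processed independently, this kind of pipeline parallelizes trivially across machines, which is how thousands of servers can chew through billions of images.
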
Mapping on the scale that its AI system has demonstrated would take decades for a human team of any size – and it is more data than people or their organisations are built to handle.
The power of AI
Facebook’s map-making AI is just one of probably thousands of narrow AIs – ones that are trained to focus on a single task – churning through human tasks around the planet right now, faster and on larger scales than we ever could. The CERN particle physics laboratory near Geneva, Switzerland, is using deep learning to find patterns in the mass of its collision data; pharmaceutical companies are using it to find new drug ideas in data sets that no human could plumb.
Nvidia’s Alison Lowndes, who helps organisations build deep-learning systems, says she now works with everyone: governments, doctors, researchers, parents, retailers and even, mysteriously, meatpackers.
What’s exciting is that all neural networks can scale like Facebook’s mapping AI. Have a narrow AI that can spot the signs of cancer in a scan? Good: if you have the data, you can now search for cancer in every human on Earth in a few hours. An AI that knows how to spot a crash in the markets? Great: it can watch all 20 of the world’s major stock exchanges at the same time, as well as the share prices of individual companies.
The real power of narrow AI isn’t in what it can do, because its performance is almost never as good as that of a human would be. The maps that Facebook’s AI produces are nowhere near as good as those that come out of a company such as custom map developer Mapbox.
But the smart systems being built in labs at Google, Facebook and Microsoft are powerful because they run on computers. What the future of human work looks like will be determined by whether it is better to do an average-quality job 50 million times a second or a human-quality job once every few minutes.
Make no mistake, AI is here – and it’s real and powerful. But humans are still in total control. We’re just all about to get some extremely clever help.