As reported by MIT Technology Review:

After catching the world and the auto industry by surprise with its progress on self-driving cars, Google has begun the latest, most difficult phase of its project: making the vehicles smart enough to handle the chaos of city streets.

But while the company describes its work with its typical tight-lipped optimism, academic experts in robotics are cautious about the prospects of fully autonomous vehicles. They estimate it will be decades before such cars can perform as well as human drivers in all situations, if they ever do at all.
Google’s cars make extensive use of detailed maps that describe not only roads and restrictions such as speed limits, but also the 3-D location of stop lights and curbstones to within inches. The company is now working to make its vehicles capable of seeing and understanding the kinds of unexpected obstacles that don’t appear on those maps and are particularly common in urban areas, said Chris Urmson, the director of the project, last week.

“Obviously, the world doesn't stay the same,” said Urmson, speaking at a conference bringing together academics and auto-industry engineers working on autonomous driving. “You need to be able to deal with things like temporary construction, and so we've been putting a lot of effort into understanding the semantic meaning of the world.”
For example, an autonomous car should be capable of recognizing that a school bus is different from other vehicles of a similar size and may behave differently, said Urmson.
Urmson showed video of a prototype Google car navigating through a real-life construction zone marked by flashing yellow arrow signs, and even stopping when a “construction worker”—actually a Google employee—waved a hand-held stop sign.
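As a rough illustration of the approach Urmson describes (a prebuilt, annotated 3-D map combined with live perception of objects the map does not contain), the short Python sketch below shows how a car's software might flag unmapped hazards such as a flagger holding a stop sign. The class names, hazard list, and matching tolerance are hypothetical and are not details of Google's actual system.

# Hypothetical sketch only: combines a prior, surveyed map with live
# detections and flags hazards that the map does not account for.
from dataclasses import dataclass

@dataclass
class MapFeature:
    kind: str            # e.g. "stop_light", "curb", "speed_limit_sign"
    position: tuple      # surveyed 3-D position (x, y, z) in meters

@dataclass
class Detection:
    kind: str            # semantic class from live perception, e.g. "school_bus"
    position: tuple      # estimated 3-D position (x, y, z) in meters

# Illustrative classes the planner should treat as hazards even though
# they never appear in the prior map (temporary construction, flaggers, ...).
UNMAPPED_HAZARDS = {"construction_sign", "flagger_with_stop_sign", "school_bus"}

def unmapped_obstacles(detections, prior_map, tolerance=0.5):
    """Return live detections with no matching feature in the prior map."""
    def in_map(det):
        return any(
            feat.kind == det.kind
            and all(abs(a - b) <= tolerance for a, b in zip(det.position, feat.position))
            for feat in prior_map
        )
    return [d for d in detections if not in_map(d)]

def should_stop(detections, prior_map):
    """Stop if any unmapped detection belongs to a known hazard class."""
    return any(d.kind in UNMAPPED_HAZARDS for d in unmapped_obstacles(detections, prior_map))

prior_map = [MapFeature("stop_light", (12.0, 3.5, 5.2))]
live = [
    Detection("stop_light", (12.1, 3.4, 5.2)),              # already in the map
    Detection("flagger_with_stop_sign", (20.0, 1.0, 0.0)),  # temporary construction
]
print(should_stop(live, prior_map))  # True: react to the unmapped flagger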
Having cars understand those types of hazards is crucial to Google because of a recent change in the direction of its project. The company’s original prototypes were based on conventional vehicles, and a human passenger could use the steering wheel and brake pedal to intervene in the event of a glitch. But in May, Google said that humans couldn't be counted on to stay focused enough on the road (see “Lazy Humans Shaped Google’s Autonomous Car”). It unveiled a new prototype without a steering wheel or pedals and said research would now focus on making vehicles that are 100 percent autonomous—leaving no room for error.
Academic experts at the conference say Google is taking on some of the hardest problems in artificial intelligence and robotics, essentially trying to replicate the ability of humans to effortlessly make sense of their environment. That’s because driving safely relies on much more than just knowing to avoid big objects, such as people or other cars, or being able to recognize symbols such as a stop sign.
Humans make use of myriad “social cues” while on the road, such as establishing eye contact or making inferences about how a driver will behave based on the car’s make and model, Alberto Broggi, a researcher at Italy’s Università di Parma, told MIT Technology Review.
Even if a computer system can recognize something, understanding the context that gives it meaning is much more difficult, said Broggi, who has led several major autonomous-driving projects funded by the European Research Council. For example, a fully autonomous car would need to understand that someone waving his arms by the side of the road is actually a policeman trying to stop traffic.
When surveyed by the conference organizers, the 500 experts in attendance were not optimistic that such problems would be solved soon. Asked when they would trust a fully robotic car to take their children to school, more than half said 2030 at the very earliest. A fifth said not until 2040, and roughly one in 10 said “never.”
Several of them told MIT Technology Review they wouldn't be surprised if self-driving cars were, for many decades, limited to specific, well-controlled settings, such as construction sites and campus-like environments with low speed limits and minimal traffic.
Most big auto companies are exploring self-driving cars. One of them, Nissan, caused a stir last year when it predicted it would be selling them by 2020. Last week, though, Nissan used the conference to dial back that forecast, saying instead that by the end of the decade its cars will be able to handle selected tasks, such as parking and freeway driving.

Despite being bullish about its technology, Google doesn’t make predictions about when fully autonomous vehicles might arrive.
John Leonard, an MIT expert in autonomous driving who attended the conference, says that he and other academics find themselves constantly battling the assumption that all of the technology challenges associated with robotic cars have been solved, with only regulatory and legal issues remaining. “It’s hard to convey to the public how hard this is,” he says.
Leonard stands by a comment that earned him some online criticism in an MIT Technology Review story last year, when he predicted that he wouldn’t see a self-driving Manhattan taxi in his lifetime (see “Driverless Cars Are Further Away Than You Think”).