
Wednesday, March 11, 2015

Smartphones Will Soon Learn to Recognize Faces and More

As reported by MIT Technology Review: Smartphone camera apps could soon do more than just capture images. Software integrated with a new line of smartphone chips will be capable of recognizing, say, a cat or city skyline in a photo, and tagging pictures of your friends with their names.

The chip maker Qualcomm announced last week that it will bundle the software with its next major chip for mobile devices. The technology, which Qualcomm calls Zeroth, could make sophisticated machine learning more common on mobile devices. As well as processing images, the Zeroth software is designed to allow phones to recognize speech or other sounds, and to learn to spot patterns of activity from a device’s sensors.

The technology uses an approach to machine learning known as deep learning that has led to recent advances in speech and object recognition, as well as software able to play Atari games with superhuman skill (see “Google’s Intelligence Designer”). Deep learning software is loosely modeled on some features of brains. It can be trained to recognize certain objects in images by processing many example photos through a network of artificial “neurons” arranged into hierarchical layers.
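The "hierarchical layers" idea can be made concrete with a toy sketch. This is not Qualcomm's implementation; it is a minimal two-layer feedforward pass with made-up, fixed weights, where a real deep-learning system would learn its weights from many example photos.

```python
# Toy sketch of a network of artificial "neurons" in hierarchical
# layers. Weights and inputs here are arbitrary illustrative values,
# not anything from Zeroth; real systems learn weights from data.

def relu(x):
    # Common nonlinearity: pass positive values, zero out the rest.
    return [max(0.0, v) for v in x]

def layer(inputs, weights, biases):
    # Each neuron computes a weighted sum of its inputs plus a bias.
    return [sum(w * i for w, i in zip(ws, inputs)) + b
            for ws, b in zip(weights, biases)]

def forward(pixels):
    # Layer 1: two "neurons" responding to low-level features.
    h = relu(layer(pixels,
                   [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]],
                   [0.0, 0.1]))
    # Layer 2: one output neuron combining those features into a score.
    return layer(h, [[1.0, -1.0]], [0.0])[0]

score = forward([0.2, 0.7, 0.1])
```

Stacking more such layers is what makes the network "deep": each layer combines the previous layer's features into more abstract ones, which is how training on many photos can yield a cat or skyline detector.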

Tim Leland, a vice president of product management with Qualcomm, says the company plans to work with partners to build apps to make use of the new capabilities. He wouldn’t say exactly what those apps might do, but Qualcomm’s demonstrations of the technology have so far focused mostly on enhancing the features of camera apps. Last week at the Mobile World Congress event in Barcelona, where Zeroth was announced, Qualcomm showed it powering a camera app that could recognize faces it had seen before, and detect different types of photo scenes.

The Zeroth software is being developed to launch with Qualcomm’s Snapdragon 820 processor, which will enter production later this year. The chip and the Zeroth software are also aimed at manufacturers of drones and robots.

Normally an app has to send data out over the Internet to a powerful server in order to perform such tricks (see “10 Breakthrough Technologies 2013: Deep Learning”). By doing such computation on the phone itself, the software should be better at interpreting data from location and motion sensors on a device, Leland says. He predicts that one of the first applications of Zeroth will be extending the battery life of devices by tracking the way a person uses a phone and learning when it could power down to save energy without affecting the user experience.

Qualcomm is also experimenting with chips that have physical networks of “neurons” made from silicon that communicate with spiking electrical signals (see “Qualcomm to Build Neuro-Inspired Chips”). That might give devices more powerful learning abilities, but it would mean they’d work much differently than today’s devices do.

Tuesday, March 10, 2015

Solar-Powered Plane Takes off for Round-The-World Flight


As reported by GigaOM: Two pilots aboard a solar-powered aircraft took off at 7:12 a.m. local time from Abu Dhabi for the first leg of what they hope will be the first complete solar-powered circumnavigation flight.

If all goes well, Solar Impulse 2 should take about 12 hours to reach Oman (a flight that would take about an hour on a commercial airliner), where it will land before continuing on to India, Myanmar, China and the U.S. The total 22,000-mile trip is expected to take five months. You can follow the flight’s progress at SolarImpulse.com.

The aircraft has a 236-foot wingspan but weighs just 5,070 pounds, according to Gizmag, and is powered by more than 17,000 solar panels. One of the main materials used to build the plane was lightweight carbon fiber composites. One problem was fashioning long wings that were both extremely lightweight and strong enough to withstand multi-day flights without failing.

This plane’s predecessor, piloted by the same team of AndrĂ© Borschberg and Bertrand Piccard, flew from California to New York in 2013. That trip took three months.

The Failed Attempt to Destroy GPS

Syncom satellite developed by Hughes Aircraft Company in 1961.
As reported by the Atlantic: On May 10, 1992, the activists Keith Kjoller and Peter Lumsdaine snuck into a Rockwell International facility in Seal Beach, California. They used wood-splitting axes to break into two clean rooms containing nine satellites being built for the U.S. government. Lumsdaine took his axe to one of the satellites, hitting it over 60 times.

They were arrested and faced up to 10 years in prison for destroying federal government property, causing an estimated $2 million in damage. Ultimately, Kjoller and Lumsdaine took guilty pleas and were sentenced to 18 months and two years in prison respectively for an act of civil disobedience they named "The Harriet Tubman-Sarah Connor Brigade."

Acting in a tradition of civil disobedience established by the Plowshares movement while citing the leader of the Underground Railroad and the heroine of the Terminator series, the Brigade's target was the Navigation Satellite Timing And Ranging (NAVSTAR) Program and the Global Positioning System (GPS). Back then, GPS was still a fairly obscure and incomplete military technology, used in some civilian applications (the first civilian GPS device, the Magellan NAV 1000, came on the market in 1988) but far from a mainstream resource. Today, GPS feels almost more intimate than industrial or weaponized.

I tend to look at GPS mostly when I'm looking at myself. Or more precisely, for myself, rendered as a small blue dot on a map on my phone. Generally while doing this, I don't pause to consider how that blue dot on a screen is a function of a network of multi-million-dollar satellites in space sending signals to and receiving signals from my phone (yes, in addition to signals from local Wi-Fi devices and cell towers, but still: giant machines in space talk to a tiny phone and that is totally normal and expected). It’s easy to take our machines of loving grace for granted when we experience them mostly as blue dots on tiny screens.

Twenty-three years ago, the Harriet Tubman-Sarah Connor Brigade was thinking about personal relationships to GPS, but more in the context of civilians killed by precision warfare and a population threatened by a growing first-strike nuclear capability. All of this is GPS' provenance. It’s a provenance easily forgotten given its far-reaching influence and impact—not just on navigation but on networks and on networked time. While the Brigade couldn't foresee GPS' temporal impact, their actions are a small but resonant moment in its history, and a reminder of how we neglect technology’s ambivalent histories at our own risk.
* * *
Peter Lumsdaine didn't express any regrets when I contacted him to learn more about the Brigade. He doesn't really share my sense of personal connection to GPS. Even if the technology has more and more civilian uses, Lumsdaine said, GPS remains “military in its origins, military in its goals, military in its development and [is still] controlled by the military.”


NAVSTAR, the Department of Defense program initiated in 1973 responsible for constructing GPS, was originally called the Defense Navigation Satellite System (DNSS) and emerged from work by the Naval Research Laboratory and the Air Force. In addition to using the system for precise missile targeting and military navigation, GPS satellites were equipped with sensors for detecting nuclear detonations around the world starting around 1980. The NAVSTAR architects always foresaw and planned for civilian applications. Initially, civilians had access to Selective Availability, a deliberately distorted and less precise GPS signal. Industries like shipping and aviation were given access to unjammed GPS in the mid-1990s. In 2000, Selective Availability was disabled and from that point on, anyone with a GPS receiver could get location data as precise as the data used for military and missile navigation.

GPS' major media debut took place on the battlefield during the 1991 Gulf War, where GPS-guided cruise missiles took out Iraqi infrastructure and soldiers carried commercial GPS receivers (the system was still incomplete in 1991, and as a result all GPS operations during the Gulf War had to be coordinated within specific time windows to be sure there were enough satellites overhead). When explaining the Gulf War's influence on the Brigade, Lumsdaine noted that "most of the civilian casualties of Operation Desert Storm came after the war because the infrastructure was targeted; the water, the electric lines, the generating stations. GPS was critical for taking out the electric grid of Iraq… with the electricity came repercussions with water filtration plants and so forth." Crippling infrastructure is a long-term attack strategy, and GPS let the military enact it with ruthless precision.

Of course, GPS wasn't the only satellite network that shaped the conflict. The Gulf War is remembered for being America's first real-time war, a military conflict subjected to 24-hour live news coverage thanks to cable networks like CNN using satellite uplinks. The real-time activities satellites facilitated—from real-time war to real-time news—are part of GPS's less recognized but still powerful legacy. It's a legacy that isn't experienced as small blue dots on smartphones so much as the constant need to check those phones. Despite its reference to a science fiction franchise whose entire plot is predicated on time travel, the Harriet Tubman-Sarah Connor Brigade's critique of NAVSTAR didn't—and, frankly, couldn’t—anticipate GPS’ future role as essentially a giant time machine, playing a quiet but crucial role in our perception and experience of networked time.
* * *
Understanding how GPS shapes time requires a detour into the concept of navigation itself. Historically, navigation has always been tied to synchronizing time across distance. For a person to know where she was, she needed to reconcile when she was against a when somewhere else—if it's midnight and Constellation X is 45 degrees off from its position in City Y, she could determine the distance traveled from City Y. For much of the 20th century, City Y was usually Greenwich, England, home to Greenwich Mean Time. In 1972, Greenwich Mean Time was replaced with the formal adoption by the International Telecommunications Union of Coordinated Universal Time (UTC), which determined time using a collection of distributed atomic clocks. Atomic clocks were already being used in experimental satellite projects prior to the creation of the NAVSTAR program, and all GPS satellites rely on atomic clocks to triangulate location. GPS still functions in a similar way to navigation systems of the past, but time has been abstracted away from the position of stars and down to oscillating atoms instead.
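The when-versus-where relationship above can be put in numbers. Earth rotates 360 degrees in 24 hours, or 15 degrees per hour, so the gap between local solar time and the reference clock (historically Greenwich) directly yields longitude. A minimal sketch:

```python
# The classic navigation calculation: Earth turns 15 degrees per
# hour, so the difference between reference (Greenwich) time and
# local solar time gives longitude.

DEGREES_PER_HOUR = 360 / 24  # 15 degrees

def longitude_from_times(greenwich_hours, local_solar_hours):
    """Positive result = degrees west of the reference meridian."""
    return (greenwich_hours - local_solar_hours) * DEGREES_PER_HOUR

# Local noon while Greenwich clocks read 17:00 puts you 75 degrees
# west, roughly the longitude of New York.
lon = longitude_from_times(17.0, 12.0)
```

This is why accurate clocks were the navigator's bottleneck for centuries, and why replacing stars and mechanical chronometers with oscillating atoms changed the precision of the where without changing the underlying logic.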

Synchronized time across distance is a dilemma for communication networks as well as navigation systems. All of the seemingly instantaneous services of the internet require timestamps, and figuring out the when of the network depends on a service we mostly know for giving a where to our networked lives. While almost all networked devices have a real-time clock that internally keeps track of time, when that device connects to a network it usually syncs with a time server using the Network Time Protocol (NTP). All time servers rely on a reference clock, a device or source for the most accurate current time. The type of reference clock used can vary (atomic clocks, radio waves), but GPS receivers are one of the commonly used reference-clock sources because of the system's ubiquity and reliability. No real-time without real-space, and vice versa.
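One small, concrete piece of the NTP plumbing mentioned above is epoch conversion: NTP timestamps count seconds from January 1, 1900, while Unix time counts from January 1, 1970, so every sync involves a fixed offset of 2,208,988,800 seconds. A sketch (the function names are ours, not from any NTP library):

```python
# NTP timestamps count seconds since 1900-01-01; Unix time counts
# seconds since 1970-01-01. The gap is 70 years including 17 leap
# days: (70 * 365 + 17) * 86400 = 2,208,988,800 seconds.

NTP_UNIX_OFFSET = 2_208_988_800

def ntp_to_unix(ntp_seconds):
    return ntp_seconds - NTP_UNIX_OFFSET

def unix_to_ntp(unix_seconds):
    return unix_seconds + NTP_UNIX_OFFSET

# The Unix epoch itself, expressed as an NTP timestamp:
unix_epoch_in_ntp = unix_to_ntp(0)
```

A device syncing over NTP does this conversion (plus round-trip-delay compensation) every time it asks a time server, and that time server's reference clock is, very often, a GPS receiver.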


The 24-satellite GPS constellation is at an altitude of approximately 12,550 miles. (NASA)
Living in the age of endless real-time often feels more like accelerated time, and living in accelerated time really means living in an age of increasingly precise archives. The difference between the everyday interactions and transactions of the past and the ones we experience now is that, previously, they didn't all come with a timestamp. (Or, if they did, that timestamp wouldn't be accurate down to the level of the microsecond, stored in fragments across multiple data centers, and synchronized across networks.) Precision time accommodates precision logistics, precision financial transactions, and, perhaps it goes without saying, precision surveillance.

When I suggested this connection to Lumsdaine, he was polite. "That was not really in our minds at the time, but I, uh, I see your point." The Brigade’s name had more to do with a deep admiration of Tubman and a desire to tap into a contemporary zeitgeist by citing a popular film franchise. Lumsdaine was initially skeptical of the Terminator films, but watched them at Kjoller's insistence and was moved by their message: “What the film actually says is that our society is plunging towards two things. And one is the takeover of AI and the other is global nuclear war and that people have a responsibility to fight and stop it.”

I'm not sure I agree with Lumsdaine's interpretation, but those would probably be the most resonant themes to an anti-nuclear activist watching Terminator 2 just after the fall of the Soviet Union. The Harriet Tubman-Sarah Connor Brigade didn't seek to free us from the shackles of accelerated time. Still, there is something poetic about how often civil disobedience takes the form of a demand to slow things down, be it traffic on a highway, labor in a factory, or access to a server. It's hard to imagine someone taking similarly visceral action against Google data centers today, or even the NSA's infamous Utah Data Center—not only because of the security around those buildings but because an attack on a single node just isn't an effective tactic.

While searching for information about the Brigade online, I came across an archived Usenet thread that reminded me of debates over another technology currently reshaping time, distance, war, and commerce: drones. Contributors to the thread criticized the Brigade for overemphasizing GPS' military origins and being unable to conceive of the technology as neutral, if not ultimately “for good.” As the FAA introduces proposals for civilian drone policies and industry associations aggressively distance commercial drones from the drones used for targeted killing, the discourse around the future of unmanned systems is similarly contemptuous of any critique that acknowledges the existence of unethical applications.

Today, Lumsdaine views the thread connecting GPS and drones as part of a longer-term movement by military powers toward automated systems. He compared today’s conditions to the opening sequence of Terminator 2, where Sarah Connor laments that the survivors of Skynet’s nuclear apocalypse “lived only to face a new nightmare: the war against the machines.” While we luckily avoided the worst-case scenarios of the Cold War, Lumsdaine explained, the technologies that emerged from it shape today's increasingly decentralized and automated conflicts. It makes a weird sense that the end of history would bring forth conflicts driven by Total Information Awareness, synchronized in what constantly strives to be realer real-time.


The LAGEOS satellite was the precursor to today's GPS. (NASA/Flickr)
An accelerated age often appears to be a more anxious age—every now feels more now than ever, every crisis more urgent than the last. The Harriet Tubman-Sarah Connor Brigade offers a reminder that to some extent, our technological anxieties are the same as they ever were.

States continue to build breathtaking killing machines, scrubbing the blood on their hands in the rhetorical lather of efficiency, of promising civilian applications. Resistance to these regimes is marked with ambivalence at the technologies, tactical instruments often mistaken for ideology manifest. Technologies and the power dynamics that shape their use become normalized. The accelerated age buries technological origin stories beneath endless piles of timestamped data.

When people lose sight of these origin stories, they do a disservice to our technologies and to ourselves. Forgetting that we live among dormant killing machines makes it easy to believe that they are merely machines of loving grace and not tools beholden to the power structures that control them, tools that paradoxically become inescapable as they grow more accessible. Recognizing and living with the ghosts in our machines is a precondition of using them honestly and, hopefully, responsibly.

When I asked Lumsdaine what he thought civil disobedience today might look like in lieu of taking axes to server racks he replied, "I think in a general way people need to look for those psychological, spiritual, cultural, logistical, technological weak points and leverage points and push hard there. It is so easy for all of us as human beings to take a deep breath and step aside and not face how very serious the situation is, because it's very unpleasant to look at the effort and potential consequences of challenging the powers that be. But the only thing higher than the cost of resistance is the cost of not resisting."


What Lumsdaine describes as resistance might be as easily called living with ethics, but ultimately the call to action for either term is, essentially, to take time. In the rush of a persistent accelerated now, interruptions and challenges to life in real-time are sometimes necessary in order to ask what kind of future we're building.

SpaceX Warns FCC Fake Competitors Could Disrupt its Space Internet Plan

As reported by Motherboard: The biggest impediment to SpaceX's plan to create a worldwide, satellite broadband network might not be the sheer technological difficulty of putting 4,000 satellites into space. Instead, outdated international and domestic regulations on satellite communications could stand in the way, according to a new Federal Communications Commission filing by the company.

In January, SpaceX founder Elon Musk said the company plans to launch an array of internet-providing satellites that will orbit roughly 750 miles above the Earth, giving the company something to do with those reusable rockets it's been developing. But, in order to launch and operate any satellite, manufacturers have to reserve communications spectrum with either the FCC or another country's communications body, which then must deal with the International Telecommunication Union.

The spectrum-reserving process is also extremely important, from both a business and practical standpoint. Without dedicated spectrum, ground control systems can't contact satellites, which means you have a useless piece of junk hurtling through space at very fast speeds. But spectrum is also very crowded, meaning it's not handed out like candy.

As you'd expect, it's a complex, expensive process. In fact, it's so complex and so expensive that many commercial space operators have been skipping the FCC altogether and getting their satellites registered overseas, say, in the Isle of Man, which has a far more straightforward licensing system. SpaceX’s system of many non-geostationary-orbit (NGSO) satellites complicates the process further.

"The FCC is very good at what it does, but spectrum licensing is an extremely arcane part of the whole system," Christopher Stott, chairman of ManSat, a company that does licensing in the Isle of Man, told me. "All countries need to comply with the ITU rules, but countries' internal regulations are different, and it's become this kind of competitive environment. There are 100 countries active in orbital flying, and only 65 or so satellite companies."

There are no space launch facilities on the Isle of Man, but you can still license spectrum from them. Same with Tonga, and many other countries. The ease of licensing through a foreign country has been, on some levels, a boon for satellite companies. But SpaceX says it’s so easy to get spectrum from some countries that it could ultimately create false competitors who are out only to impede SpaceX and other serious space companies.

"US-based satellite operators may prefer to operate as US licensees but are often forced to seek ITU filing and coordination through foreign administrations given the current FCC regulatory environment, which often places US networks at a disadvantage in the competitive international marketplace," SpaceX attorneys wrote in a filing last week with the FCC.

The ease of licensing in other countries “create[s] additional incentives for foreign administrations to pursue NGSO broadband satellite filing strategies that effectively block access to available spectrum and orbital resources,” the company continued. “This, in turn, substantially undermines competition and innovation by significantly delaying or preventing bona fide NGSO broadband satellite system proponents from coordinating and ultimately deploying competitive systems.”


For companies wanting to fly to space under an American "flag," or using American spectrum, there has been a workaround—many satellite manufacturers have pushed to get their satellites classified as "experimental" by the FCC, a designation that makes the licensing process much more straightforward and many orders of magnitude cheaper. Stott says the experimental distinction is, from his perspective, simply semantics.

An experimental license from the FCC also won't work for SpaceX, because experimental satellites technically aren’t supposed to operate for longer than a year or two. SpaceX’s satellites presumably would need to be in space for much longer in order to be useful.

"SpaceX will need to put up a couple satellites to do experiments, of course," Gus Hurwitz, a space law professor at the University of Nebraska, told me. "But it won't make sense for them to use experimental for the entire array. They'll need to reconfigure the constellation for the long haul."

So it's understandable that SpaceX wants the FCC to reform its satellite licensing regulations. The FCC realizes that its rules are currently too complex, and has been trying to lower both the financial and legal barriers to entry for commercial satellite operators. The FCC and SpaceX did not respond to my multiple requests for comment.

While it has asked the FCC to reassess regulations, SpaceX doesn't want the bar for new entrants set too low. The company's attorneys said the company worries that new regulations might make it too easy for companies to “abuse” the system in the United States, much as they do internationally.

"Spectrum warehousing can be extremely detrimental and unprepared, highly speculative, or disingenuous applicants must be prevented from pursuing 'paper satellites' (or 'paper constellations'), which can unjustly obstruct and delay qualified applicants from deploying their systems," SpaceX wrote.

See, SpaceX isn't the only company that wants to launch broadband satellites. And the company worries that a competitor will be able to launch, say, one satellite, "test" it, and then sit around while it works to procure funding or figure out technical problems, all the while sitting on very valuable spectrum.

"If a licensee is authorized for 10,000 satellites, the launch of a single satellite after three and a half years is not an indicator that the licensee can successfully deploy the other 9,999 satellites, or even a significant fraction thereof," the company wrote.

And this is where we get some insight into SpaceX's plans. The company proposes that, within six years of being granted a license, any satellite broadband-providing company should be required to launch 75 percent of the total satellites it planned for. So, if SpaceX gets approval for 4,000 satellites, a number it has bandied about before, it believes it'll be able to launch 3,000 of them within six years of getting approval from the FCC.
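The scale of that commitment is easy to check with back-of-envelope arithmetic. The satellites-per-launch figure below is purely an assumption for illustration; as noted, virtually nothing is known about the satellites themselves.

```python
# Back-of-envelope check of the proposed 75 percent rule.
# per_launch is an assumed payload count, not a SpaceX figure.

authorized = 4000
required = int(authorized * 0.75)   # 3,000 satellites
weeks = 6 * 52                      # six-year window, in weeks

sats_per_week = required / weeks    # roughly 9.6 satellites per week

per_launch = 8                      # pure assumption
launches_per_week = required / per_launch / weeks  # roughly 1.2
```

Even with several satellites per rocket, the cadence lands above one launch per week, which is why the article calls the task Herculean and ties it to reusable rockets.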

That's a Herculean task. The company might be able to launch more than one satellite at a time—we know virtually nothing about what its satellites will look like, but its various rockets should be able to carry several—but Musk is still looking at a lot of launches, perhaps more than one a week.

Musk has made it no secret that the reusable rocket is key to SpaceX's future plans, and this filing makes that painfully obvious. If he's going to build an array of internet satellites, launches that cost many millions of dollars each are not going to work. And, if he's going to build an array of internet satellites, it appears he wants to do it very, very quickly. The question, now, is whether the FCC sees space the same way he does.

NASA Orders Missions to Resupply Space Station in 2017

As reported by Spaceflight Now: NASA has ordered four additional launches to deliver cargo to the International Space Station in 2017 — three from SpaceX and one from Orbital ATK — to cover the research lab’s logistics needs until a new set of resupply contracts take effect.
The extra missions for SpaceX and Orbital ATK will serve as a bridge between the contractors’ current contracts and new commercial cargo deals that will cover resupply missions launching from 2018 through at least 2020.

SpaceX and Orbital ATK won Commercial Resupply Services contracts from NASA in December 2008, covering 12 cargo deliveries by SpaceX’s Dragon spacecraft and eight missions with Orbital ATK’s Cygnus supply ship.

A NASA spokesperson said the space agency has extended the CRS contract one year, giving SpaceX and Orbital ATK until the end of 2017 to complete the extra missions to bring provisions, food, clothes, experiments and spare parts to the 450-ton complex more than 250 miles above Earth.


“NASA has ordered three additional flights with SpaceX in the extension period,” said Stephanie Schierholz, a NASA spokesperson. “NASA has ordered one additional flight with Orbital in the extension period. This is based on the projected needs of the ISS program for cargo upmass, return and disposal and the unique capabilities of each contractor.”

Schierholz declined to release the value of the contract modifications, saying the data is sensitive.

SpaceX and Orbital ATK’s original contracts, which included mechanisms to add more missions, had “not-to-exceed” values of $3.1 billion each. That value does not change with the extra missions, Schierholz said.

The new launch orders give SpaceX 15 resupply missions under the contract.

Orbital ATK lost a Cygnus supply ship during an explosive launch mishap moments after liftoff from Virginia in October, forcing the company to redesign its Antares rocket for a new engine. The next Cygnus cargo carrier is due for launch from Cape Canaveral in October on a United Launch Alliance Atlas 5 rocket, then Orbital ATK plans to resume launches on the Antares booster in March 2016.

The Atlas 5 rocket can lift more cargo into orbit than the Antares launcher — allowing officials to use more of the Cygnus spaceship’s expansive internal volume — and Orbital ATK says it can now meet its contractual obligations to NASA with seven flights.

NASA required each contractor to deliver at least 20 metric tons, or about 44,000 pounds, of cargo to the space station.

The new mission launching in 2017 gives Orbital ATK eight flights, including the failed launch last year.
An Orbital ATK Cygnus spacecraft is grappled by the space station’s robotic arm in July 2014. Credit: NASA

NASA is conducting an open competition for a second round of Commercial Resupply Services (CRS 2) contracts. Officials plan to select winners in June.

Agency officials are expected to choose at least two contractors to ensure the space station has redundant supply chains in case one company runs into problems.

The competition is packed, with Orbital ATK, Boeing Co., Sierra Nevada Corp. and Lockheed Martin Corp. confirming they submitted bids. SpaceX is also believed to be in the running, but company officials have not confirmed their participation.

Boeing’s CST-100 crew capsule in development to transport astronauts to and from the space station could be outfitted to carry cargo on round-trip missions. Sierra Nevada is developing the Dream Chaser space plane, but it lost a NASA contract to ferry crews to Boeing and SpaceX.

Lockheed Martin told reporters this week it plans to unveil details of its bid to resupply the space station March 12.

Monday, March 9, 2015

Goodyear Concept Tire Could Generate Electricity

As reported by the Dallas Morning News: Goodyear Tire & Rubber Co. introduced a concept tire at the Geneva auto show that it says could generate electricity for use by electric cars.
Most mainstream electric cars are limited to a range of 100 miles or less before they must be recharged.

The concept “BHO3” tire “offers the possibility” of helping recharge the batteries of electric cars by transforming heat from a rolling tire into electrical energy, Goodyear officials said.

“These concept tires re-imagine the role that tires may play in the future,” said Joe Zekoski, senior vice president and chief technical officer at Goodyear. “We envision a future in which our products become more integrated with the vehicle and the consumer, more environmentally friendly and more versatile.”

Tires flex as they roll, creating heat. Material in the BHO3 concept tire captures that heat and transforms it into electrical energy.

As demand for electric cars grows, the technology could “significantly contribute” to efforts to extend the driving range of the vehicles, Goodyear officials said.

Goodyear also displayed a tire at the Geneva show with three tubes inside it that can be inflated by an internal pump to change the driving characteristics of the tire.

The “triple tube” tire can be altered with air pressure while driving for “eco-safety,” sporty characteristics or wet-traction.

Although concepts, both tires represent “essential aspects” of Goodyear’s innovation strategy, Zekoski said.

Friday, March 6, 2015

Will You Need a New License to Operate a Self-Driving Car?

As reported by IEEE Spectrum: How do you train a driver not to drive? That’s a question officials in California are wrestling with. The U.S. state furthest along the road to self-driving vehicles is drawing up regulations for the operation of autonomous vehicles by the general public—and it may require motorists to undergo additional instruction or evaluation before they can be chauffeured by robots.

Self-driving cars promise a future where you can watch television, sip cocktails, or snooze all the way home. But what happens when something goes wrong? Today’s drivers have not been taught how to cope with runaway acceleration, unexpected braking, or a car that wants to steer into a wall.

“Driver training or driver readiness is a component that we are actively discussing,” says Bernard Soriano, deputy director for California’s Department of Motor Vehicles. “Some of the elements that the manufacturers have in their test-driver training programs could be something that we could consider.”

These include classroom lessons on the abilities and limitations of autonomous technologies, computer simulations of failures, and real-world driving sessions. However, carmakers’ training programs can vary considerably. (See our investigation of robocar test-driver certification here.) Google requires that its test drivers complete weeks of in-depth lessons and rigorous exams, while Audi’s entire program lasts just a couple of hours.

One problem is that regulators do not know whether self-driving technologies will arrive in production vehicles as optional features in luxury cars or as the master control of fully autonomous robo-taxis. Ryan Calo, who teaches a robotics law and policy class at the University of Washington, believes the distinction is crucial. “For an autonomous vehicle without a steering wheel, I’m not sure you need any more training than you’d get for a dishwasher,” he says. “But for a vehicle primarily meant to be driven by a human driver and that has an autonomous mode, I could imagine some additional degree of certification.”

Today’s experimental autonomous cars occasionally need to hand control back to their human operators, either because of a bug in the system or for something as innocuous as the car leaving a well-mapped area. These “disengagements” may require the driver to take action quickly. California takes disengagements so seriously that it requires manufacturers testing self-driving cars to log each one. “Today, drivers are not trained or tested for that change in control,” says Patrick Lin, director of the ethics and emerging sciences group at California Polytechnic State University. “Humans aren’t hardwired to sit and monitor a system for long periods of time and then quickly react properly when an emergency happens.”
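To make the logging requirement concrete, here is a hypothetical sketch of what a disengagement record could capture. The field names are our invention for illustration, not California DMV's actual reporting format.

```python
# Hypothetical disengagement log record; field names are invented
# for illustration and are not the California DMV's actual format.

from dataclasses import dataclass

@dataclass
class Disengagement:
    timestamp: str           # when control passed back to the human
    cause: str               # e.g. "software fault", "left mapped area"
    reaction_seconds: float  # how long the driver took to respond

log = [
    Disengagement("2015-03-06T14:02:11", "left mapped area", 1.8),
    Disengagement("2015-03-06T15:40:03", "software fault", 3.2),
]

# A regulator might focus on worst-case driver reaction time:
slowest = max(d.reaction_seconds for d in log)
```

Reaction time is exactly the quantity Lin's concern points at: how quickly a human who has been passively monitoring can retake control when the system hands it back.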

Drivers might also need help setting up an autonomous vehicle, training in how to deactivate systems in situations that no self-driving car could anticipate, such as an approaching dust storm, or dealing with errors made by the system as it is driving.

However, not everyone believes that self-driving technology presents drivers with any special challenges. In a document called “The Pathway to Driverless Cars” [pdf], released 11 February, the British government said a normal driver’s license would be sufficient to operate cars with an autonomous mode in the United Kingdom, even for test drivers of experimental vehicles. It also anticipates that fully automated vehicles would require no driver’s license at all. But it acknowledges that those views might change once self-driving cars take to the roads: “Emergent properties of the way automated systems interact…may potentially [require] changes to driver training, testing, and licensing.”

The Swedish authorities have a similarly flexible attitude. A report from the Swedish Transport Agency [pdf] last year said, “As things stand at present, it is too early to determine what authorization requirements would be appropriate” for fully autonomous cars.

Soriano doesn't have the luxury of such a wait-and-see attitude. California has already missed a 1 January deadline to establish regulations for the public use of self-driving cars. When it comes to the issue of driver training and certification, he admits, “We haven’t made a decision on it yet.”