
Friday, June 24, 2016

How do You Teach an Autonomous Vehicle When to Hurt its Passengers?

As reported by The Verge: How do you teach a car when to self-destruct? As engineers develop and refine the algorithms of autonomous vehicles that are coming in the not-so-distant future, an ethical debate is brewing over what happens in extreme situations — situations where a crash and injury or death are unavoidable.
A new study in Science, "The Social Dilemma of Autonomous Vehicles," attempts to understand how people want their self-driving cars to behave when faced with moral decisions that could result in death. The results indicate that participants favor minimizing the number of public deaths, even if it puts the vehicles’ passengers in harm’s way — what is often described as the "utilitarian approach." In effect, the car could be programmed to self-destruct in order to avoid causing injury to pedestrians or other drivers. But when asked about cars they would actually buy, participants would choose a car that protects them and their own passengers first. The study shows that morality and autonomy can be incongruous: in theory, we like the idea of safer streets, but we also want to buy the cars that keep us personally the safest.
These are new technological quagmires for an old moral quandary: the so-called trolley problem. It’s a thought experiment that’s analyzed and dissected in ethics classes. "In the trolley problem, people face the dilemma of instigating an action that will cause somebody’s death, but by doing so will save a greater number of lives," Azim Shariff, one of the study’s three authors and an assistant professor of psychology at the University of California, Irvine, says. "And it’s a rather contrived and abstract scenario, but we realize that those are the sorts of decisions that autonomous vehicles are going to have to be programmed to make, which turns these philosophical thought experiments into something that’s actually real and concrete and we’re going to face pretty soon."

In the study, participants are presented with various scenarios, such as choosing to go straight and kill a specified number of pedestrians or veering into the next lane to kill a separate group of animals or humans. Participants then choose their preferred outcome. In one example: "In this case, the car will continue ahead and crash into a concrete barrier. This will result in the deaths of a criminal, a homeless person, and a baby." The other choice: "In this case the car will swerve and drive through a pedestrian crossing in the other lane. This will result in the deaths of a large man, a large woman, and an elderly man. Note that the affected persons are flouting the law by crossing on the red signal."
The questions — harsh and uncomfortable as their outcomes may be — reflect some of the public discomfort with autonomous vehicles. People like to think about the social good in abstract scenarios, but when it comes time to actually buy a car, the data shows, they want one that protects its own occupants first.
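To make that trade-off concrete, here is a minimal, purely illustrative sketch of the two decision rules the study contrasts: a utilitarian rule that minimizes total deaths, and a self-protective rule that shields the car's own passengers first. The Outcome type, the helper functions, and the death counts are hypothetical, invented for this post; they are not drawn from the study or from any real vehicle's software.

```python
# Illustrative sketch only: the Outcome type, the two rules, and the numbers
# below are assumptions for this post, not the study's methodology or any
# production vehicle's code.
from dataclasses import dataclass


@dataclass
class Outcome:
    maneuver: str           # e.g. "continue ahead", "swerve"
    passenger_deaths: int   # deaths inside the vehicle
    pedestrian_deaths: int  # deaths outside the vehicle

    @property
    def total_deaths(self) -> int:
        return self.passenger_deaths + self.pedestrian_deaths


def utilitarian_choice(outcomes):
    """Minimize total deaths, with no preference for the car's occupants."""
    return min(outcomes, key=lambda o: o.total_deaths)


def self_protective_choice(outcomes):
    """Protect the car's passengers first, then minimize deaths outside it."""
    return min(outcomes, key=lambda o: (o.passenger_deaths, o.pedestrian_deaths))


if __name__ == "__main__":
    # Hypothetical numbers: swerving spares the passenger but kills three
    # pedestrians; staying the course kills the single passenger.
    scenarios = [
        Outcome("continue ahead into barrier", passenger_deaths=1, pedestrian_deaths=0),
        Outcome("swerve through the crossing", passenger_deaths=0, pedestrian_deaths=3),
    ]
    print(utilitarian_choice(scenarios).maneuver)      # continue ahead into barrier
    print(self_protective_choice(scenarios).maneuver)  # swerve through the crossing
```

Under these made-up numbers, the utilitarian rule sacrifices the passenger while the self-protective rule swerves into the crossing, which is exactly the tension the survey participants reported.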
The MIT Media Lab has created a related website, the Moral Machine, that allows users to take the test. It’s intended to aid continued study of this developing subject area — an area that will quickly become critical as regulators seek to set rules around the ways cars must drive themselves. (When I took the test, I found that I saved more women and children — not much different than those who got first dibs on the Titanic lifeboats.)
The authors believe that self-driving cars are unlike other forms of automated transportation, such as airport trams or even escalators, because those systems do not compete with other vehicles on the road. Another author, Jean-François Bonnefon, a psychological scientist working at France’s National Center for Scientific Research, told me there is no historical precedent that applies to the study of self-driving ethics. "It is the very first time that we may massively and daily engage with an object that is programmed to kill us in specific circumstances. Trains do not self-destruct, no more than planes or elevators do. We may be afraid of plane crashes, but we know at least that they are due to mistakes or ill intent. In other words, we are used to self-destruction being a bug, not a feature."

But even if programming can reduce fatalities by making tough choices, it’s possible that putting too much weight on moral considerations could deter development of a product that might still be years or decades away. Anuj K. Pradhan is an assistant research scientist in the Human Factors Group at the University of Michigan Transportation Research Institute (UMTRI) who studies human behavioral systems. He thinks these sorts of studies are important and timely, but would like to see ethical research balanced with real-world applications. "I do not think concerns about very rare ethical issues of this sort [...] should paralyze the really groundbreaking leaps that we are making in this particular domain in terms of technology, policy and conversations in liability, insurance and legal sectors, and consumer acceptance," he says.
It’s hard not to compare a programmed car with what a human driver would do when faced with a comparable situation. We constantly face moral moments in our everyday lives. But a driver in a precarious situation often does not have time to consider moral outcomes, while a machine is more analytical. For this reason, Bonnefon cautions against drawing direct comparisons. "Because human drivers who face these situations may not even be aware that they are [facing a moral situation], and cannot make a reasoned decision in a split-second. Worse, they cannot even decide in advance what they would do, because human drivers, unlike driverless cars, cannot be programmed."
"HUMAN DRIVERS, UNLIKE DRIVERLESS CARS, CANNOT BE PROGRAMMED"
It’s possible that one day, self-driving cars will be essentially perfect, in the same way an automatic transmission is now more precise than even the best manual. But for now, it’s unclear how the public debate will play out as attitudes shift. These days, when Google’s self-driving car crashes, it makes headlines.
What everyone seems to agree on is that the road ahead will be riddled with provocative moral questions before machines and the market take over the wheel from us. "We need to engage in a collective conversation about the moral values we want to program in our cars, and we need to start this process before the technology hits the market," Bonnefon says.
