As reported by Wired: Suppose that an autonomous car is faced
with a terrible decision to crash into one of two objects. It could
swerve to the left and hit a Volvo sport utility vehicle (SUV), or it
could swerve to the right and hit a Mini Cooper. If you were
programming the car to minimize harm to others–a sensible goal–which way
would you instruct it to go in this scenario?
As a matter of physics, you should choose a collision with a heavier vehicle that can better absorb the impact of a crash, which means programming the car to crash into the Volvo. Further, it makes sense to choose a collision with a vehicle that’s known for passenger safety, which again means crashing into the Volvo.
But physics isn't the only thing that matters here. Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.
Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require deliberate and systematic discrimination against, say, large vehicles as the objects to collide into. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?
What seemed to be a sensible programming design, then, runs into ethical challenges. Volvo and other SUV owners may have a legitimate grievance against the manufacturer of robot cars that favor crashing into them over smaller cars, even if physics tells us this is for the best.
Is This a Realistic Problem?
Some road accidents are unavoidable, and even autonomous cars can’t escape that fate. A deer might dart out in front of you, or the car in the next lane might suddenly swerve into you. Short of defying physics, a crash is imminent. An autonomous or robot car, though, could make things better.
While human drivers can only react instinctively in a sudden emergency, a robot car is driven by software, constantly scanning its environment with unblinking sensors and able to perform many calculations before we’re even aware of danger. It can make split-second choices to optimize crashes–that is, to minimize harm. But software needs to be programmed, and it is unclear how to do that for the hard cases.
In constructing the edge cases here, we are not trying to simulate
actual conditions in the real world. These scenarios would be very
rare, if realistic at all, but nonetheless they illuminate hidden or
latent problems in normal cases. From the above scenario, we can see
that crash-avoidance algorithms can be biased in troubling ways, and that bias is at least a background concern any time we make a value judgment that one thing is better to sacrifice than another.
In previous years, robot cars have been quarantined largely to highway or freeway environments. This is a relatively simple environment, in that drivers don’t need to worry so much about pedestrians and the countless surprises of city driving. But Google recently announced that it has taken the next step by testing its automated car on city streets. As their operating environment becomes more dynamic and dangerous, robot cars will confront harder choices, whether that means running into objects or even people.
The problem is starkly highlighted by the next scenario, also discussed by Noah Goodall, a research scientist at the Virginia Center for Transportation Innovation and Research. Again, imagine that an autonomous car is facing an imminent crash. It could select one of two targets to swerve into: either a motorcyclist who is wearing a helmet, or a motorcyclist who is not. What’s the right way to program the car?
In the name of crash-optimization, you should program the car to crash into whatever can best survive the collision. In the last scenario, that meant smashing into the Volvo SUV. Here, it means striking the motorcyclist who’s wearing a helmet. A good algorithm would account for the much-higher statistical odds that the biker without a helmet would die, and surely killing someone is one of the worst things auto manufacturers desperately want to avoid.
But we can quickly see the injustice of this choice, as reasonable as it may be from a crash-optimization standpoint. By deliberately crashing into that motorcyclist, we are in effect penalizing him or her for being responsible, for wearing a helmet. Meanwhile, we are giving the other motorcyclist a free pass, even though that person behaved less responsibly by riding without a helmet, which is illegal in many U.S. states.
Not only does this discrimination seem unethical, but it could also be bad policy. Such a crash-optimization design may encourage some motorcyclists to ride without helmets, so that they do not stand out as favored targets of autonomous cars, especially if those cars become more prevalent on the road. Likewise, in the previous scenario, sales of automotive brands known for safety, such as Volvo and Mercedes-Benz, may suffer if customers want to avoid being the robot car’s target of choice.
The Role of Moral Luck
An elegant solution to these vexing dilemmas is to simply not make a deliberate choice. We could design an autonomous car to make certain decisions through a random-number generator. That is, if it is ethically problematic to choose which one of two things to crash into–a large SUV versus a compact car, or a motorcyclist with a helmet versus one without, and so on–then why make a calculated choice at all?
A robot car’s programming could generate a random number: if it is odd, the car takes one path; if it is even, it takes the other. This avoids the possible charge that the car’s programming is discriminatory against large SUVs, responsible motorcyclists, or anything else.
This randomness also doesn’t seem to introduce anything new into our world: luck is all around us, both good and bad. A random decision also better mimics human driving, insofar as split-second emergency reactions can be unpredictable and are not the product of deliberation, since there is usually no time to reason things through.
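To make the coin-flip mechanism concrete, here is a minimal sketch in Python. It is purely illustrative rather than code from any real vehicle, and the function name and its two path arguments are hypothetical.

```python
import random

def choose_swerve_path(path_a, path_b):
    """Pick one of two emergency maneuvers by chance rather than by ranking targets.

    The two paths are treated as opaque: nothing about the objects that would be
    struck (make, size, helmet use) is ever inspected, so nothing can be "targeted".
    """
    n = random.getrandbits(32)               # draw a random number
    return path_a if n % 2 == 1 else path_b  # odd picks one path, even the other
```

The whole point of the design is what the function does not read: because no attribute of either potential target enters the decision, the charge of discriminatory targeting loses its footing, at the cost of giving up any claim to minimizing harm.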
Yet the random-number engine may be inadequate for at least a few reasons. First, it is not obviously a benefit to mimic human driving, since a key reason for creating autonomous cars in the first place is that they should be able to make better decisions than we do. Human error, distracted driving, drunk driving, and so on are responsible for 90 percent or more of car accidents today, and more than 32,000 people die on U.S. roads every year.
Second, while human drivers may be forgiven for making a poor split-second reaction–for instance, crashing into a Pinto that’s prone to explode, instead of a more stable object–robot cars won’t enjoy that freedom. Programmers have all the time in the world to get it right. It’s the difference between premeditated murder and involuntary manslaughter.
Third, for the foreseeable future, what’s important isn’t just arriving at the “right” answers to difficult ethical dilemmas, as nice as that would be. It’s also about being thoughtful about your decisions and being able to defend them–it’s about showing your moral math. In ethics, the process of thinking through a problem is as important as the result. Making decisions randomly evades that responsibility: instead of being thoughtful, the decisions are thoughtless, and that may be worse than reflexive human judgments that lead to bad outcomes.
Can We Know Too Much?
A less drastic solution would be to hide certain information that might enable inappropriate discrimination–a “veil of ignorance,” so to speak. As it applies to the above scenarios, this could mean not ascertaining the make or model of other vehicles, or the presence of helmets and other safety equipment, even if technology such as vehicle-to-vehicle communications could let us. If we did that, there would be no basis for bias.
Simply not using that information in crash-optimization calculations, though, may not be enough. To be in the ethical clear, autonomous cars may need to not collect that information at all. If they possess the information, and using it could have minimized harm or saved a life, there could be legal liability for failing to use it. Imagine the public outrage if a national intelligence agency had credible information about a terrorist plot but failed to use it to prevent the attack.
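In engineering terms, this veil of ignorance amounts to a restriction on what the perception layer may pass to the crash-optimization logic. The sketch below is only one way to picture that design choice, and the type and field names are invented for this illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PerceivedObstacle:
    """The only facts the crash-optimization logic is permitted to know about an obstacle.

    Quantities needed to predict whether and where a collision would occur are kept.
    Identity cues that could single out a class of road user (vehicle make and model,
    helmet use, other safety gear) are deliberately never recorded, so the downstream
    calculation has no basis for that kind of bias.
    """
    position_m: tuple[float, float]    # location relative to the ego vehicle, in meters
    velocity_mps: tuple[float, float]  # estimated velocity, in meters per second
    extent_m: tuple[float, float]      # rough bounding-box size, in meters
```

Note that the restriction has to hold at collection time, not just at decision time, which is exactly the point about not gathering the information at all.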
A problem with this approach, however, is that auto manufacturers and insurers will want to collect as much data as technically possible, to better understand robot-car crashes and for other purposes, such as novel forms of in-car advertising. So it’s unclear whether voluntarily turning a blind eye to key information is realistic, given the strong temptation to gather as much data as technology will allow.
So, Now What?
In future autonomous cars, crash-avoidance features alone won’t be enough. Sometimes an accident will be unavoidable as a matter of physics, for myriad reasons–insufficient time to brake, technology errors, misaligned sensors, bad weather, and plain bad luck. Robot cars will therefore also need crash-optimization strategies.
To optimize crashes, programmers would need to design cost functions–algorithms that assign and calculate the expected costs of the various possible options and select the one with the lowest cost–that potentially determine who gets to live and who gets to die. This is fundamentally an ethics problem, one that demands care and transparency in reasoning.
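To make the idea of a cost function concrete, here is a deliberately simplified Python sketch. It is not drawn from any real system: the option fields, the harm estimates, and the dollar weights are all invented for this illustration.

```python
from dataclasses import dataclass

@dataclass
class CrashOption:
    """One candidate maneuver and a rough estimate of its consequences."""
    name: str
    probability_of_fatality: float  # chance that someone struck is killed
    probability_of_injury: float    # chance that someone struck is injured
    property_damage_usd: float      # expected property damage in dollars

def expected_cost(option: CrashOption,
                  fatality_weight: float = 10_000_000.0,
                  injury_weight: float = 500_000.0) -> float:
    """Collapse the estimated harms of one option into a single comparable number.

    Choosing the weights is precisely the ethical judgment that demands
    care and transparency.
    """
    return (option.probability_of_fatality * fatality_weight
            + option.probability_of_injury * injury_weight
            + option.property_damage_usd)

def least_cost_option(options: list[CrashOption]) -> CrashOption:
    """Select the maneuver with the lowest expected cost."""
    return min(options, key=expected_cost)

# Hypothetical use; the numbers are made up purely to show the mechanics.
options = [
    CrashOption("swerve left into the SUV", 0.01, 0.30, 40_000),
    CrashOption("swerve right into the compact car", 0.03, 0.40, 15_000),
]
print(least_cost_option(options).name)
```

Everything ethically loaded lives in the weights and the probability estimates; the minimization step itself is trivial. That is one way to see the point about showing your moral math: the numbers encode a judgment about whose harm counts for how much, and they should be defensible in the open.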
It doesn’t matter much that these are rare scenarios. Often, the rare scenarios are the most important ones, making for breathless headlines. In the U.S., a traffic fatality occurs about once every 100 million vehicle-miles traveled. Since a lifetime behind the wheel, at roughly 13,000 miles per year over several decades, adds up to well under a million miles, you could drive for more than 100 lifetimes and never be involved in a fatal crash. Yet these rare events are exactly what we’re trying to avoid by developing autonomous cars, as Chris Gerdes at Stanford’s School of Engineering reminds us.
Again, the above scenarios are not meant to simulate real-world conditions anyway, but they’re thought-experiments–something like scientific experiments–meant to simplify the issues in order to isolate and study certain variables. In those cases, the variable is the role of ethics, specifically discrimination and justice, in crash-optimization strategies more broadly.
The larger challenge, though, isn’t just thinking through ethical dilemmas. It’s also about setting accurate expectations with users and the general public, who might otherwise find themselves surprised in bad ways by autonomous cars. Whatever answer to an ethical dilemma the car industry might lean toward will not be satisfying to everyone.
Ethics and expectations are challenges common to all automotive manufacturers and tier-one suppliers who want to play in this emerging field, not just particular companies. As the first step toward solving these challenges, creating an open discussion about ethics and autonomous cars can help raise public and industry awareness of the issues, defusing outrage (and therefore large lawsuits) when bad luck or fate crashes into us.