We rely on computers to fly our planes, find our cancers, design
our buildings, audit our businesses. That's all well and good. But
what happens when the computer fails?
As reported by The Atlantic:
On the evening of February 12, 2009, a
Continental Connection commuter flight made its way through blustery weather
between Newark, New Jersey, and Buffalo, New York. As is typical of commercial
flights today, the pilots didn’t have all that much to do during the hour-long
trip. The captain, Marvin Renslow, manned the controls briefly during takeoff,
guiding the Bombardier Q400 turboprop into the air, then switched on the
autopilot and let the software do the flying. He and his co-pilot, Rebecca Shaw,
chatted—about their families, their careers, the personalities of air-traffic
controllers—as the plane cruised uneventfully along its northwesterly route at
16,000 feet. The Q400 was well into its approach to the Buffalo airport, its
landing gear down, its wing flaps out, when the pilot’s control yoke began to
shudder noisily, a signal that the plane was losing lift and risked going into
an aerodynamic stall. The autopilot disconnected, and the captain took over the
controls. He reacted quickly, but he did precisely the wrong thing: he jerked
back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of
pushing the yoke forward to gain velocity. Rather than preventing a stall,
Renslow’s action caused one. The plane spun out of control, then plummeted.
“We’re down,” the captain said, just before the Q400 slammed into a house in a
Buffalo suburb.
The crash, which killed all 49 people on board as well as one person on the
ground, should never have happened. A National Transportation Safety Board
investigation concluded that the cause of the accident was pilot error. The
captain’s response to the stall warning, the investigators reported, “should
have been automatic, but his improper flight control inputs were inconsistent
with his training” and instead revealed “startle and confusion.” An executive
from the company that operated the flight, the regional carrier Colgan Air,
admitted that the pilots seemed to lack “situational awareness” as the emergency
unfolded.
The Buffalo crash was not an isolated incident. An eerily similar disaster,
with far more casualties, occurred a few months later. On the night of May 31,
an Air France Airbus A330 took off from Rio de Janeiro, bound for Paris. The
wide-body jet ran into a storm over the Atlantic about three hours after takeoff.
Its air-speed sensors, coated with ice, began giving faulty readings, causing
the autopilot to disengage. Bewildered, the pilot flying the plane,
Pierre-Cédric Bonin, yanked back on the stick. The plane rose and a stall
warning sounded, but he continued to pull back heedlessly. As the plane climbed
sharply, it lost velocity. The airspeed sensors began working again, providing
the crew with accurate numbers. Yet Bonin continued to slow the plane. The jet
stalled and began to fall. If he had simply let go of the stick, the A330
would likely have righted itself. But he didn’t. The plane dropped 35,000 feet
in three minutes before hitting the ocean. All 228 passengers and crew members
died.
Pilots today work inside what they call “glass cockpits.” The old analog dials
and gauges are mostly gone.
They’ve been replaced by banks of digital displays.
Automation has become so sophisticated that on a typical passenger flight, a
human pilot holds the controls for a grand total of just three minutes. What
pilots spend a lot of time doing is monitoring screens and keying in data.
They’ve become, it’s not much of an exaggeration to say, computer operators.
And that, many aviation and automation experts have concluded, is a problem.
Overuse of automation erodes pilots’ expertise and dulls their reflexes, leading
to what Jan Noyes, an ergonomics expert at Britain’s University of Bristol,
terms “a de-skilling of the crew.” No one doubts that autopilot has contributed
to improvements in flight safety over the years. It reduces pilot fatigue and
provides advance warnings of problems, and it can keep a plane airborne should
the crew become disabled. But the steady overall decline in plane crashes masks
the recent arrival of “a spectacularly new type of accident,” says Raja
Parasuraman, a psychology professor at George Mason University and a leading
authority on automation. When an autopilot system fails, too many pilots, thrust
abruptly into what has become a rare role, make mistakes. Rory Kay, a veteran
United captain who has served as the top safety official of the Air Line Pilots
Association, put the problem bluntly in a 2011 interview with the Associated
Press: “We’re forgetting how to fly.” The Federal Aviation Administration has
become so concerned that in January 2013 it issued a “safety alert” to airlines,
urging them to get their pilots to do more manual flying. An overreliance on
automation, the agency warned, could put planes and passengers at risk.
Doctors use computers to make diagnoses and to perform surgery. Wall Street
bankers use them to assemble and trade financial instruments. Architects use
them to design buildings. Attorneys use them in document discovery. And it’s not
only professional work that’s being computerized. Thanks to smartphones and
other small, affordable computers, we depend on software to carry out many of
our everyday routines. We launch apps to aid us in shopping, cooking,
socializing, even raising our kids. We follow turn-by-turn GPS instructions. We
seek advice from recommendation engines on what to watch, read, and listen to.
We call on Google, or Siri, to answer our questions and solve our problems. More
and more, at work and at leisure, we’re living our lives inside glass
cockpits.
Psychologists have found that when we work with computers, we often fall
victim to two cognitive ailments—complacency and bias—that can undercut our
performance and lead to mistakes. Automation complacency occurs when a computer
lulls us into a false sense of security. Confident that the machine will work
flawlessly and handle any problem that crops up, we allow our attention to
drift. We become disengaged from our work, and our awareness of what’s going on
around us fades. Automation bias occurs when we place too much faith in the
accuracy of the information coming through our monitors. Our trust in the
software becomes so strong that we ignore or discount other information
sources, including our own eyes and ears. When a computer provides incorrect or
insufficient data, we remain oblivious to the error.
What’s most astonishing, and unsettling, about computer automation is that
it’s still in its early stages. Experts used to assume that there were limits to
the ability of programmers to automate complicated tasks, particularly those
involving sensory perception, pattern recognition, and conceptual knowledge.
They pointed to the example of driving a car, which requires not only the
instantaneous interpretation of a welter of visual signals but also the ability
to adapt seamlessly to unanticipated situations. “Executing a left turn across
oncoming traffic,” two prominent economists wrote in 2004, “involves so many
factors that it is hard to imagine the set of rules that can replicate a
driver’s behavior.” Just six years later, in October 2010, Google announced that
it had built a fleet of seven “self-driving cars,” which had already logged more
than 140,000 miles on roads in California and Nevada.
Driverless cars provide a preview of how robots will be able to navigate and
perform work in the physical world, taking over activities requiring
environmental awareness, coordinated motion, and fluid decision making. Equally
rapid progress is being made in automating cerebral tasks. Just a few years ago,
the idea of a computer competing on a game show like Jeopardy would have
seemed laughable, but in a celebrated match in 2011, the IBM supercomputer
Watson trounced Jeopardy’s all-time champion, Ken Jennings. Watson
doesn’t think the way people think; it has no understanding of what it’s doing
or saying. Its advantage lies in the extraordinary speed of modern computer
processors.
In Race Against the Machine, a 2011 e-book on the economic
implications of computerization, the MIT researchers Erik Brynjolfsson and
Andrew McAfee argue that Google’s driverless car and IBM’s Watson are examples
of a new wave of automation that, drawing on the “exponential growth” in
computer power, will change the nature of work in virtually every job and
profession. Today, they write, “computers improve so quickly that their
capabilities pass from the realm of science fiction into the everyday world not
over the course of a human lifetime, or even within the span of a professional’s
career, but instead in just a few years.”
In a classic 1983 article in the journal Automatica, Lisanne
Bainbridge, an engineering psychologist at University College London, described
a conundrum of computer automation. Because many system designers assume that
human operators are “unreliable and inefficient,” at least when compared with a
computer, they strive to give the operators as small a role as possible. People
end up functioning as mere monitors, passive watchers of screens. That’s a job
that humans, with our notoriously wandering minds, are especially bad at.
Research on vigilance, dating back to studies of radar operators during World
War II, shows that people have trouble maintaining their attention on a stable
display of information for more than half an hour. “This means,” Bainbridge
observed, “that it is humanly impossible to carry out the basic function of
monitoring for unlikely abnormalities.” And because a person’s skills
“deteriorate when they are not used,” even an experienced operator will
eventually begin to act like an inexperienced one if restricted to just
watching. The lack of awareness and the degradation of know-how raise the odds
that when something goes wrong, the operator will react ineptly. The assumption
that the human will be the weakest link in the system becomes
self-fulfilling.
Psychologists have discovered some simple ways to temper automation’s ill
effects. You can program software to shift control back to human operators at
frequent but irregular intervals; knowing that they may need to take command at
any moment keeps people engaged, promoting situational awareness and learning.
You can put limits on the scope of automation, making sure that people working
with computers perform challenging tasks rather than merely observing. Giving
people more to do helps sustain the generation effect, the memory boost that
comes from actively producing information rather than passively receiving it.
You can incorporate
educational routines into software, requiring users to repeat difficult manual
and mental tasks that encourage memory formation and skill building.
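For readers who build software, the first of those suggestions can be made
concrete with a minimal sketch, offered purely as an illustration: the function
and parameter names below are invented, not drawn from any real autopilot or
cockpit system. The loop alternates automated and manual phases, drawing the
length of each automated phase at random so the operator can never predict
when control will come back.

    import random
    import time

    # Illustrative sketch only: hand control back to the operator at
    # frequent but irregular intervals, so monitoring never becomes passive.
    def alternate_control(total_seconds=60.0, min_auto=5.0, max_auto=15.0,
                          manual_seconds=5.0):
        elapsed = 0.0
        while elapsed < total_seconds:
            # Automated phase: its length is randomized, so the operator
            # cannot settle into complacent screen-watching.
            auto_phase = random.uniform(min_auto, max_auto)
            print(f"autopilot engaged for {auto_phase:.1f}s")
            time.sleep(auto_phase)
            elapsed += auto_phase

            # Manual phase: the operator takes the controls and stays engaged.
            print(f"control handed to operator for {manual_seconds:.1f}s")
            time.sleep(manual_seconds)
            elapsed += manual_seconds

    if __name__ == "__main__":
        alternate_control(total_seconds=30.0)

The point of the irregular schedule is exactly the one the psychologists make:
a handback that arrives on a fixed timetable soon becomes part of the routine,
while an unpredictable one keeps attention and skills in play.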
Some software writers take such suggestions to heart. In schools, the best
instructional programs help students master a subject by encouraging
attentiveness, demanding hard work, and reinforcing learned skills through
repetition. Their design reflects the latest discoveries about how our brains
store memories and weave them into conceptual knowledge and practical know-how.
But most software applications don’t foster learning and engagement. In fact,
they have the opposite effect. That’s because taking the steps necessary to
promote the development and maintenance of expertise almost always entails a
sacrifice of speed and productivity. Learning requires inefficiency. Businesses,
which seek to maximize productivity and profit, would rarely accept such a
trade-off. Individuals, too, almost always seek efficiency and convenience.
We pick the program that lightens our load, not the one that makes us work
harder and longer. Abstract concerns about the fate of human talent can’t
compete with the allure of saving time and money.
The small island of Igloolik, off the coast of
the Melville Peninsula in the Nunavut territory of northern Canada, is a
bewildering place in the winter. The average temperature hovers at about
20 degrees below zero, thick sheets of sea ice cover the surrounding waters, and
the sun is rarely seen. Despite the brutal conditions, Inuit hunters have for
some 4,000 years ventured out from their homes on the island and traveled across
miles of ice and tundra to search for game. The hunters’ ability to navigate
vast stretches of the barren Arctic terrain, where landmarks are few, snow
formations are in constant flux, and trails disappear overnight, has amazed
explorers and scientists for centuries. The Inuit’s extraordinary way-finding
skills are born not of technological prowess—they long eschewed maps and
compasses—but of a profound understanding of winds, snowdrift patterns, animal
behavior, stars, and tides.
Inuit culture is changing now. The Igloolik hunters have begun to rely on
computer-generated maps to get around. Adoption of GPS technology has been
particularly strong among younger Inuit, and it’s not hard to understand why.
The ease and convenience of automated navigation make the traditional Inuit
techniques seem archaic and cumbersome.
But as GPS devices have proliferated on Igloolik, reports of serious
accidents during hunts have spread. A hunter who hasn’t developed way-finding
skills can easily become lost, particularly if his GPS receiver fails. The
routes so meticulously plotted on satellite maps can also give hunters tunnel
vision, leading them onto thin ice or into other hazards a skilled navigator
would avoid. The anthropologist Claudio Aporta, of Carleton University in
Ottawa, has been studying Inuit hunters for more than 15 years. He notes that
while satellite navigation offers practical advantages, its adoption has already
brought a deterioration in way-finding abilities and, more generally, a weakened
feel for the land. An Inuit on a GPS-equipped snowmobile is not so different
from a suburban commuter in a GPS-equipped SUV: as he devotes his attention to
the instructions coming from the computer, he loses sight of his surroundings.
He travels “blindfolded,” as Aporta puts it. A unique talent that has
distinguished a people for centuries may evaporate in a generation.
Whether it’s a pilot on a flight deck, a doctor in an examination room, or an
Inuit hunter on an ice floe, knowing demands doing. One of the most remarkable
things about us is also one of the easiest to overlook: each time we collide
with the real, we deepen our understanding of the world and become more fully a
part of it. While we’re wrestling with a difficult task, we may be motivated by
an anticipation of the ends of our labor, but it’s the work itself—the
means—that makes us who we are. Computer automation severs the ends from the
means. It makes getting what we want easier, but it distances us from the work
of knowing.
As we transform ourselves into creatures of the screen, we face an
existential question: Does our essence still lie in what we know, or are we now
content to be defined by what we want? If we don’t grapple with that question
ourselves, our gadgets will be happy to answer it for us.