Are we getting smarter or stupider? In “
The Shallows: What the Internet Is Doing to Our Brains,” from 2010, Nicholas Carr blames the Web for growing cognitive problems, while Clive Thompson, in his recent book, “
Smarter Than You Think: How Technology Is Changing Our Minds for the Better,”
argues that our technologies are boosting our abilities.
To settle the
matter, consider the following hypothetical experiment: A well-educated time traveler from 1914 enters
a room divided in half by a curtain. A scientist tells him that his
task is to ascertain the intelligence of whoever is on the other side of
the curtain by asking whatever questions he pleases.
The traveler’s queries are answered by a voice with an accent that
he does not recognize (twenty-first-century American English). The woman
on the other side of the curtain has an extraordinary memory. She can,
without much delay, recite any passage from the Bible or Shakespeare.
Her arithmetic skills are astonishing—difficult problems are solved in
seconds. She is also able to speak many foreign languages, though her
pronunciation is odd. Most impressive, perhaps, is her ability to
describe almost any part of the Earth in great detail, as though she is
viewing it from the sky. She is also proficient at connecting seemingly
random concepts, and when the traveler asks her a question like “How
can God be both good and omnipotent?” she can provide complex
theoretical answers.
Based on this modified
Turing test,
our time traveler would conclude that, in the past century, the human
race had achieved a new level of superintelligence. Using lingo unavailable
in 1914 (the term was coined later by John von Neumann), he might conclude
that the human race had reached a “singularity”—a point where it had
gained an intelligence beyond the understanding of the 1914 mind.
The woman behind the curtain is, of course, just one of us. That is
to say, she is a regular human who has augmented her brain using two
tools: her mobile phone and a connection to the Internet and, thus, to
Web sites like Wikipedia, Google Maps, and Quora. To us, she is
unremarkable, but to the man she is astonishing. With our machines, we
are augmented humans and prosthetic gods, though we’re remarkably blasé
about that fact, as we are about anything we’ve grown used to. Take away our tools, the
argument goes, and we’re likely stupider than our friend from the early
twentieth century, who has a
longer attention span, may read and write Latin, and does arithmetic
faster.
The time-traveler scenario demonstrates that how you answer the
question of whether we are getting smarter depends on how you classify
“we.” This is why Thompson and Carr reach different results: Thompson is
judging the cyborg, while Carr is judging the man underneath.
The project of human augmentation has been under way for the past
fifty years. It began in the Pentagon, in the early nineteen-sixties,
when the psychologist J. C. R. Licklider, who was in charge of the
funding of advanced research, began to contemplate what he called
man-computer symbiosis. (Licklider also proposed that the Defense
Department fund a project that became, essentially, the Internet.)
Licklider believed that the great importance of computers would lie in
how they improved human capabilities, and so he funded the research of,
among others, Douglas Engelbart, the author of “Augmenting Human
Intellect,” who proposed “a new and systematic approach to improving the
intellectual effectiveness of the individual human being.” Engelbart
founded the Augmentation Research Center, which, in the
nineteen-sixties, developed the idea of a graphical user interface based
on a screen, a keyboard, and a mouse (demonstrated in “
The Mother of all Demos”). Many of the researchers at A.R.C. went on to work in the famous Xerox
PARC laboratories.
PARC’s interface ideas were borrowed by Apple, and the rest is history.
Since then, the real project of computing has not been the creation of independently intelligent entities (HAL, for example) but, instead, the augmentation of our brains where they are weak.
The most successful, and the most lucrative, products are those that
help us with tasks that we would otherwise be unable to complete. Our
limited working memory means we’re bad at arithmetic, and so no one does
long division anymore. Our memories are unreliable, so we have
supplemented them with electronic storage. The human brain, compared
with a computer, is bad at networking with other brains, so we have
invented tools, like Wikipedia and Google search, that aid that kind of
interfacing.
Our time-traveling friend shows that, though the human-augmentation
project has been a success, it has come at some cost. The idea of
biological atrophy is alarming, and there is always a
nagging sense that our auxiliary brains don’t quite count as “us.” But
make no mistake: we are now different creatures than we once were,
evolving technologically rather than biologically, in directions we must
hope are for the best.
Tim Wu is a professor at Columbia Law School and the author of “The Master Switch.” This is the first in a series of posts he will be writing about technology and intelligence.