As reported by the Economist: It was not quite a whitewash, but it was close. When DeepMind, a London-based artificial intelligence company, challenged Lee Sedol to a five-game Go match, Mr Lee—one of the best human players of that ancient and notoriously taxing board game—was confident that he would win. He predicted a scoreline of 5-0, or maybe 4-1.
He was right about the score, but wrong about the winner. After the final match, played in Seoul to a crowd on the edges of their seats and streamed to tens of millions more online, the computer had won four games to the human’s one.
For AI researchers and Go aficionados, it is as big a moment as 1997, when Garry Kasparov lost a chess match to Deep Blue, a supercomputer built by IBM. It is much harder to program a computer to play Go than chess—the sheer number of options at every move makes the sort of “brute-force” approach adopted by IBM unfeasible. But DeepMind has managed it. After the match its program, called AlphaGo, was awarded the top professional rank by the Korean Baduk Association (“baduk” being the Korean word for Go), and it has entered the world rankings in fourth place (see chart).
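A rough back-of-the-envelope calculation shows why brute force breaks down. The figures below are illustrative assumptions, not from DeepMind: a typical chess position offers around 35 legal moves, a typical Go position around 250. Raising those numbers to the power of the search depth gives the size of the tree an exhaustive search must explore:

```python
# Back-of-envelope sketch: why brute-force search copes with chess
# but not with Go. Branching factors are rough, commonly cited
# estimates, not exact values.
CHESS_MOVES = 35   # approximate legal moves in a typical chess position
GO_MOVES = 250     # approximate legal moves in a typical Go position

def positions_to_search(branching, depth):
    """Leaf positions in an exhaustive search `depth` moves ahead."""
    return branching ** depth

chess6 = positions_to_search(CHESS_MOVES, 6)   # ~1.8 billion positions
go6 = positions_to_search(GO_MOVES, 6)         # ~244 trillion positions

print(f"chess, 6 moves ahead: {chess6:,}")
print(f"go,    6 moves ahead: {go6:,}")
print(f"Go requires over {go6 // chess6:,}x the work")
```

Looking a mere six moves ahead, Go's tree is already more than a hundred thousand times larger than chess's, and the gap widens exponentially with every further move—which is why AlphaGo had to learn to evaluate positions rather than enumerate them.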
The win is another demonstration of the power of deep learning, an AI technique being used by companies such as Google, Amazon and Baidu for everything from face recognition to serving advertisements on websites. As the name implies, deep learning allows computers to learn: that is, to extract patterns from masses of data with a minimum of hand-holding from their human masters.
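The idea of learning a rule from examples, rather than having it hand-coded, can be sketched in a few lines. The toy below is not DeepMind's system—it is a single artificial neuron, the building block that deep networks stack by the million—and the data and names are invented for illustration. Shown only labelled examples, it discovers the hidden rule by adjusting its weights whenever it guesses wrong:

```python
import random

# Illustrative sketch: a single artificial neuron (a perceptron) taught
# a pattern purely from labelled examples. Data and rule are made up.
random.seed(0)

# Hidden rule to discover: a point (x, y) is "positive" when x + y > 1.
pts = [(random.random() * 2, random.random() * 2) for _ in range(200)]
examples = [(x, y, 1.0 if x + y > 1 else 0.0) for x, y in pts]

w1, w2, b = 0.0, 0.0, 0.0   # weights start out knowing nothing

for _ in range(500):                       # repeated passes over the data
    for x, y, label in examples:
        pred = 1.0 if w1 * x + w2 * y + b > 0 else 0.0
        err = label - pred                 # learn only from mistakes
        w1 += 0.01 * err * x
        w2 += 0.01 * err * y
        b += 0.01 * err

correct = sum(1 for x, y, label in examples
              if (1.0 if w1 * x + w2 * y + b > 0 else 0.0) == label)
print(f"learned rule classifies {correct}/{len(examples)} examples correctly")
```

No one told the program "add the coordinates and compare with one"; the rule emerged from the data. Deep learning applies the same principle at vastly greater scale, with many layers of such neurons, to far messier patterns—the positions of stones on a Go board among them.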
Technology companies are throwing money at deep learning (DeepMind was bought by Google for $400m in 2014). AI researchers are impressed because, unlike many older AI techniques, which must be hand-tuned to address a given problem, deep learning is much more broadly useful. In 2015 DeepMind published a paper describing how a single program, similar to AlphaGo, had learned and mastered 49 different classic video games with no input beyond the pixels on a screen. Games make a good testing ground for AI, but DeepMind hopes to apply the technology in medicine and scientific research.
With good reason
The match proved an emotional one. AlphaGo won the first three games on the trot. Along the way commentators were convinced it had made serious mistakes, but as the machine racked up its wins, they were forced to concede that perhaps there had been no mistakes after all. The machine, which had learned from a mixture of watching humans play and playing against itself, was actually using valid strategies that its human masters had simply overlooked.
The fourth game, though, was thrilling. Mr Lee changed his tactics, playing around the edges of the board, leaving the machine to its own devices in the centre. A brilliant play by Mr Lee at move 78 seemed to throw the machine: it had not predicted the strategy, and its next dozen moves were, in the view of commentators, simply bad ones. The machine seemed to fall into a pattern of missteps common to inferior Go programs which use some of the same technology that AlphaGo does. This suggests that, for a while at least, it may be possible for the best humans to exploit the few remaining weaknesses of Go computers.
The fifth game underlined how hard that already is. Afterwards, Demis Hassabis, one of DeepMind’s founders, said it had been the most stressful and exciting of all. Once again, the human commentators reckoned that the machine had made a serious mistake early on. That was the only one, though, and it managed to claw its way back into contention.
Computers are already clearly superior to humans at chess, Scrabble and even “Jeopardy!”, a punny American quiz show that was won by Watson, another IBM supercomputer, in 2011. Go had been, until now, a redoubt of human mental superiority. Yet some see this shift as an opportunity: AlphaGo already seems to have found new ways to play the game, and the best way to get better at a game is to play against people—or machines—that are better than you. Asked if AlphaGo’s play had given him new insights into the game, Mr Lee said it had. “The typical, traditional, classical beliefs of how to play—I’ve come to question them a bit,” he reflected.