The Korea Herald

[Michael Hiltzik] A computer is now the master of Go, but let’s see it win at poker

By Lee Yoon-joo

Published: March 20, 2016 - 16:02

The worlds of Go and artificial intelligence were both unsettled by the victory this week of an advanced AI computer over one of the world’s leading masters of the intricate East Asian board game, 4 games to 1. It’s an achievement that experts in both fields hadn’t expected to see for as much as another 10 years.

The triumph of AlphaGo, the product of a Google lab named DeepMind, over the fourth-ranked Go champion Lee Se-dol of South Korea, is widely viewed as a landmark in artificial intelligence much greater than the victory of IBM’s Deep Blue over chess Grandmaster Garry Kasparov in 1997. That result, Kasparov wrote in 2010, “was met with astonishment and grief by those who took it as a symbol of mankind’s submission before the almighty computer.” Go is a far more complex challenge than chess, so it’s unsurprising that AlphaGo’s victory is seen as bringing the era of ultimate submission that much closer.

But is it? Some AI experts are cautious. “People’s minds race forward and say, if it can beat a world champion it can do anything,” Oren Etzioni, head of the nonprofit Allen Institute for Artificial Intelligence in Seattle, told Nature this week. The computational technique employed by AlphaGo isn’t broadly applicable, he said. “We are a long, long way from general artificial intelligence.”

And there’s another aspect to consider. Chess and Go are both “deterministic perfect information” games, Alan Levinovitz of Wired observed in 2014 — games in which “no information is hidden from either player, and there are no built-in elements of chance, such as dice.” How will a computer do in a game where key information is hidden, and where the best players win by using the uniquely human skill of lying? How will it fare in games like poker where, as Science magazine points out, victory depends not on pursuing the optimal strategy but on deviating from it?

First, a few details of the game at the center of this week’s match. Invented more than two millennia ago in China, Go is played on a grid of 19-by-19 lines on which two players place black or white stones, attempting to build chains that surround territory without being enveloped by their opponent. Because of the size of the board and the lack of specific rules for each piece, the number of potential moves in Go is much larger than in chess.
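To put that difference in rough numbers, here is a minimal back-of-the-envelope sketch in Python. It assumes the commonly cited averages of roughly 35 legal moves per turn in chess versus about 250 in Go, and typical game lengths of about 80 and 150 moves; these are illustrative assumptions, not figures from the article or from DeepMind.

```python
# Rough game-tree size: (average moves per turn) ** (moves per game).
# All inputs are assumed ballpark figures, for illustration only.
import math

def log10_tree_size(branching_factor: float, game_length: int) -> float:
    """Return log10 of branching_factor ** game_length (the exponent of the tree size)."""
    return game_length * math.log10(branching_factor)

print(f"chess: roughly 10^{log10_tree_size(35, 80):.0f} possible move sequences")
print(f"Go:    roughly 10^{log10_tree_size(250, 150):.0f} possible move sequences")
```

Even with generous rounding, the Go exponent dwarfs the chess one, which is why the kind of brute-force search Deep Blue relied on does not scale to Go.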

The number of possible arrangements of stones is on the order of 10 to the 100th power, far more than the options that Deep Blue had to consider in defeating Kasparov. The route to victory at any point can be obscure; experts talk as though their play emerges from intuition, or even the subconscious, as much as from experience. Indeed, during Lee’s lone victorious game, DeepMind chief Demis Hassabis tweeted that AlphaGo hadn’t played a wrong move but had deluded itself into believing it was winning. Go occupies a special position in East Asian life; in his classic work “The Master of Go,” the Nobel laureate Yasunari Kawabata spun a trenchant tale of youth vs. age, the past vs. the present, and the power of culture out of the game.

The DeepMind designers’ solution to the intricacy of Go was to use an architecture known as neural networks. These mimic the structure of the human brain by creating connections that strengthen with experience — in other words, they learn. AlphaGo could test millions of options and assess their outcomes rapidly, but its design also allowed it to develop shortcuts that discard all but the most promising choices. Observers found the result strikingly human-like; the machine’s 37th move in game two was so unexpected that some commentators thought it was a mistake. Lee left the room in shock and later confessed, “Today I am speechless.”
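As a purely illustrative sketch of that pruning idea, and not DeepMind’s actual system (which combined deep policy and value networks with Monte Carlo tree search), the Python fragment below shows the general shape: a made-up scoring function stands in for a trained network, and the search keeps only the handful of moves it rates most highly.

```python
# Illustrative only: a stand-in "policy" scores moves, and the search keeps
# just the top few candidates out of 361 intersections, discarding the rest.
import random
from typing import List, Tuple

Move = Tuple[int, int]      # a (row, column) intersection on the 19x19 board
Board = List[List[str]]     # "." empty, "B" black stone, "W" white stone

def policy_score(board: Board, move: Move) -> float:
    """Stand-in for a trained network that rates how promising a move looks."""
    return random.Random(hash(move)).random()   # deterministic fake score

def promising_moves(board: Board, top_k: int = 5) -> List[Move]:
    """Prune the move list: keep only the top_k moves the policy rates highest."""
    legal = [(r, c) for r in range(19) for c in range(19) if board[r][c] == "."]
    legal.sort(key=lambda m: policy_score(board, m), reverse=True)
    return legal[:top_k]

empty_board: Board = [["."] * 19 for _ in range(19)]
print(promising_moves(empty_board))   # 5 candidate moves instead of 361
```

In the real system the scores came from networks trained on human expert games and self-play; the effect, though, is the one described above: the search never has to examine most of the board seriously.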

Does AlphaGo’s performance at Go really point to the coming dominion of learning-capable machines over us mere mortals? The idea of superintelligent machines taking a hostile attitude toward the human race is colorful, certainly, and frightening in a Hollywood fright-night feature sort of way. (“The Terminator” movies are the best example of the genre.) Not a few well-respected technologists, including Bill Gates and Elon Musk, have sounded the alarm.

But just as many AI experts dismiss such notions. “The extraordinary claim that machines can become so intelligent as to gain demonic powers requires extraordinary evidence, particularly since artificial intelligence researchers have struggled to create machines that show much evidence of intelligence at all,” wrote Edward Moore Geist of Stanford last year, reviewing the very book that got Musk tweeting so nervously, Nick Bostrom’s “Superintelligence: Paths, Dangers, Strategies.”

More likely, humans will learn how to exploit the capabilities of superintelligent machines to augment our own efforts. That’s the guess of Garry Kasparov, who in 1998 participated in a chess match in which he and his opponent both had a computer partner at their side. “Having a computer partner,” he wrote, “meant never having to worry about making a tactical blunder. … With that taken care of for us, we could concentrate on strategic planning instead of spending so much time on calculations. Human creativity was even more paramount under these conditions.”

In that 2010 piece, Kasparov also drew distinctions between chess and poker. “While chess is a 100 percent information game — both players are aware of all the data all the time — and therefore directly susceptible to computing power, poker has hidden cards and variable stakes, creating critical roles for chance, bluffing, and risk management.

“These might seem to be aspects of poker based entirely on human psychology and therefore invulnerable to computer incursion. A machine can trivially calculate the odds of every hand, but what to make of an opponent with poor odds making a large bet?”
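The first half of Kasparov’s point, that the raw odds are trivial for a machine, is easy to show. The short Python sketch below works one assumed Texas hold’em scenario that is not from the article: a player holding four hearts after the flop wants the probability of completing the flush by the river, with 47 unseen cards and nine of them hearts.

```python
# Exact odds for one assumed scenario: four hearts after the flop,
# 47 unseen cards, 9 remaining hearts ("outs"), two cards still to come.
from math import comb

unseen, outs = 47, 9
# Probability that neither the turn nor the river is a heart:
p_miss = comb(unseen - outs, 2) / comb(unseen, 2)
print(f"chance of completing the flush: {1 - p_miss:.1%}")   # roughly 35%
```

What no such formula settles is the second half of the point: whether the opponent making a large bet against those odds is bluffing.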

And yet, as he observed, computers are getting better at poker already. A computer better than a human at bluffing and lying? That might be really frightening.

———

By Michael Hiltzik


Michael Hiltzik is a columnist for the Los Angeles Times. Readers may send him email at mhiltzik@latimes.com. --Ed.

(Tribune Content Agency)