Google builds first champion Go AI

January 28, 2016 // 2:28 p.m.

Tags: #ai #alphago #artificial-intelligence #chess #deep-learning #deepmind #facebook #fan-hui #go #nature #neural-networks

Google has hit a major milestone in the development of artificial intelligence, building the first system capable of beating a professional Go player without being given a handicap.

While computers have long dominated the chess scene, Go is a very different game: where chess has around 10-to-the-power-of-120 possible games, Go has 10-to-the-power-of-761. There's no way, in other words, to take a brute-force approach to finding optimal moves as in simpler games, meaning that computers - which excel at brute force but lack intuition - have traditionally performed poorly against Go grandmasters. Until now.
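The scale of that gap is easy to demonstrate. The sketch below uses commonly cited rough figures - roughly 35 legal moves per chess position over roughly 80 plies, versus roughly 250 moves over roughly 150 plies for Go - which are illustrative assumptions, not numbers from the article:

```python
# Rough game-tree sizes from assumed average branching factors and
# game lengths (illustrative figures, not exact values):
#   chess: ~35 legal moves per position, games ~80 plies long
#   Go:    ~250 legal moves per position, games ~150 plies long
chess_games = 35 ** 80
go_games = 250 ** 150

# Python's arbitrary-precision integers let us count digits directly.
print(len(str(chess_games)))  # digits in the chess estimate
print(len(str(go_games)))     # digits in the Go estimate
```

Even with these rough numbers, the Go tree is hundreds of orders of magnitude larger than the chess tree, which is why exhaustive search is a non-starter.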

Using the company's DeepMind platform and combining two neural networks with a new search algorithm, Google has been able to produce an artificial intelligence dubbed AlphaGo capable of soundly trouncing a human Go expert five games to zero. 'The key to AlphaGo is reducing the enormous search space to something more manageable. To do this, it combines a state-of-the-art tree search with two deep neural networks, each of which contains many layers with millions of neuron-like connections,' explained the team behind the project. 'One neural network, the “policy network”, predicts the next move, and is used to narrow the search to consider only the moves most likely to lead to a win. The other neural network, the “value network”, is then used to reduce the depth of the search tree - estimating the winner in each position in place of searching all the way to the end of the game.

'AlphaGo’s search algorithm is much more human-like than previous approaches. For example, when Deep Blue played chess, it searched by brute force over thousands of times more positions than AlphaGo. Instead, AlphaGo looks ahead by playing out the remainder of the game in its imagination, many times over - a technique known as Monte-Carlo tree search. But unlike previous Monte-Carlo programs, AlphaGo uses deep neural networks to guide its search. During each simulated game, the policy network suggests intelligent moves to play, while the value network astutely evaluates the position that is reached. Finally, AlphaGo chooses the move that is most successful in simulation.'
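The structure the team describes - priors from a policy network to narrow the search, and a value estimate in place of playing to the end - can be sketched in miniature. This is not AlphaGo's implementation: the game below is a toy take-1-to-3-stones game, and the `policy` and `value` functions are hand-written stand-ins for the deep networks, playing the same architectural roles:

```python
import math

# Toy stand-in for Go: take 1-3 stones from a pile; taking the last
# stone wins. The real AlphaGo uses deep networks; here "policy" and
# "value" are hypothetical heuristics filling the same roles.

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def policy(stones):
    # Stand-in policy network: a uniform prior over legal moves.
    moves = legal_moves(stones)
    return {m: 1.0 / len(moves) for m in moves}

def value(stones):
    # Stand-in value network: in this game, a pile divisible by 4 is
    # lost for the player to move (a known result for this toy game).
    return -1.0 if stones % 4 == 0 else 1.0

class Node:
    def __init__(self, prior):
        self.prior, self.visits, self.total = prior, 0, 0.0
        self.children = {}  # move -> Node

def select(node):
    # Selection: exploit average value, explore in proportion to prior.
    def score(move, child):
        # Child totals are from the opponent's perspective, so negate.
        q = -(child.total / child.visits) if child.visits else 0.0
        u = 1.5 * child.prior * math.sqrt(node.visits + 1) / (1 + child.visits)
        return q + u
    return max(node.children.items(), key=lambda mc: score(*mc))

def simulate(node, stones):
    # One simulated game: descend the tree, expand a leaf, and back up
    # the value estimate instead of playing to the end of the game.
    if stones == 0:
        result = -1.0  # player to move has already lost
    elif not node.children:
        for move, p in policy(stones).items():
            node.children[move] = Node(p)
        result = value(stones)
    else:
        move, child = select(node)
        result = -simulate(child, stones - move)  # perspective flips each ply
    node.visits += 1
    node.total += result
    return result

def best_move(stones, n_simulations=200):
    root = Node(1.0)
    for _ in range(n_simulations):
        simulate(root, stones)
    # Choose the most-visited move, as AlphaGo-style searches do.
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]
```

With a pile of five stones, for example, the search concentrates its visits on taking one stone, leaving the opponent the losing pile of four - the value function steers the simulations without any rollout to the end of the game.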

The result is undeniably impressive: AlphaGo, running on a distributed computing platform, was able to convincingly beat all existing Go AI applications in internal testing, then best three-time reigning European Go champion Fan Hui five games to zero - a feat experts had declared would not be possible for another decade at least.

For Google, the project - published this week in the journal Nature - is proof of its deep-learning and artificial intelligence chops; for Google's rivals, including Facebook, which had been working on the same problem, the breakthrough is a clear slap in the face.