AI Learns to Beat Master of Ancient Chinese Board Game

By Luke Plunkett

Go is a board game from China that’s over 2,000 years old. It’s complex as hell: there are more possible positions in the game than there are atoms in the universe. So news that a Google AI has beaten a human master at the game is fascinating/terrifying.

AI has been beating humans at chess for years, but chess is a relatively simple game compared to Go: by Google's estimate, the Chinese game's search space is “more than a googol times larger than chess”. So while AI has gradually been able to trump us at everything from checkers to noughts and crosses, Go remained one pastime where we paltry humans could kick a machine’s ass.

Or, it was, until reigning (and three-time) European Go champion Fan Hui was beaten 5-0 by Google’s AlphaGo AI, the first time an AI has ever beaten a professional player at the full-sized game of Go.

To beat him, the AI had to do more than just consult a pre-programmed catalogue of moves. Because Go is so complex, it had to actually learn. As Google explains:

We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 percent of the time (the previous record before AlphaGo was 44 percent). But our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning.
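AlphaGo's real system pairs deep neural networks with Monte Carlo tree search, which is far beyond a blog snippet. But the self-play idea Google describes, play games against yourself and nudge your strategy toward the moves that ended up winning, can be sketched in miniature. The toy below is my own illustration, not AlphaGo's code: it uses a tabular policy and the simple game of Nim (take 1-3 stones from a heap; whoever takes the last stone wins) instead of a neural network and Go.

```python
import random
from collections import defaultdict

MAX_TAKE = 3   # a player may take 1-3 stones per turn
HEAP = 10      # starting heap size (toy stand-in for a Go board)

def choose(policy, heap, rng):
    """Pick a move at random, weighted by the policy's current preferences."""
    moves = list(range(1, min(MAX_TAKE, heap) + 1))
    weights = [policy[(heap, m)] for m in moves]
    return rng.choices(moves, weights=weights)[0]

def play_game(policy, rng):
    """Self-play one game of Nim between two copies of the same policy.
    Returns the move history [(player, heap, move), ...] and the winner."""
    heap, player, history = HEAP, 0, []
    while True:
        move = choose(policy, heap, rng)
        history.append((player, heap, move))
        heap -= move
        if heap == 0:
            return history, player  # took the last stone: this player wins
        player = 1 - player

def train(games=5000, lr=0.2, seed=0):
    """Trial-and-error learning: after each self-play game, strengthen the
    winner's moves and weaken the loser's (a crude cousin of the
    reinforcement learning Google describes)."""
    rng = random.Random(seed)
    policy = defaultdict(lambda: 1.0)  # unnormalised preference per (heap, move)
    for _ in range(games):
        history, winner = play_game(policy, rng)
        for player, heap, move in history:
            delta = lr if player == winner else -lr
            # keep a small floor so every move stays explorable
            policy[(heap, move)] = max(0.05, policy[(heap, move)] + delta)
    return policy
```

After a few thousand self-play games the policy discovers, purely by trial and error, that taking all three stones from a heap of three wins on the spot, so it comes to strongly prefer that move over the alternatives. No human strategy was programmed in, which is the point Google is making about AlphaGo going beyond mimicry.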