Google’s neural network has won a decisive victory in the game of Go.
The computer program AlphaGo has beaten Lee Se-dol, one of the greatest Go players in the world. The match was scheduled for five games, and the neural network defeated the champion in the first three in a row, clinching the series early.
The third game between the Korean professional Lee Se-dol and AlphaGo ended in victory for the artificial intelligence. With it, AlphaGo secured the series against one of the best players of this ancient board game.
Go is an ancient Chinese game that originated more than 2,500 years ago. It is played by two players, one with black stones and one with white, who take turns placing them on the intersections of a grid. Each player aims to surround more territory than the opponent. Although the basic rules are simple, the resulting strategy is extremely deep and full of nuances.
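One of those nuances is capture: a connected group of stones is removed when it has no empty adjacent points ("liberties") left. A toy sketch of that single mechanic, with illustrative names and a hypothetical dictionary board representation rather than any real Go engine's data structures:

```python
# Toy sketch of one Go mechanic: counting a stone group's liberties
# (empty adjacent points). Board layout and helper names are
# illustrative assumptions, not taken from a real Go program.

def group_and_liberties(board, start):
    """Flood-fill the group containing `start`; return (group, liberties)."""
    color = board[start]
    group, liberties, frontier = set(), set(), [start]
    while frontier:
        point = frontier.pop()
        if point in group:
            continue
        group.add(point)
        r, c = point
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if nb not in board:
                continue                # off the edge of the board
            if board[nb] is None:
                liberties.add(nb)       # empty neighbour = one liberty
            elif board[nb] == color:
                frontier.append(nb)     # same colour: part of the group
    return group, liberties

# 3x3 corner: a black stone at (0,0) hemmed in by white at (0,1) and (1,0).
board = {(r, c): None for r in range(3) for c in range(3)}
board[(0, 0)] = "black"
board[(0, 1)] = "white"
board[(1, 0)] = "white"
group, libs = group_and_liberties(board, (0, 0))
print(len(group), len(libs))  # 1 stone, 0 liberties: the stone is captured
```

With zero liberties the black group would be taken off the board, which is exactly the kind of local reading a Go player (or program) must do constantly.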
The match between Lee Se-dol and AlphaGo was held in Seoul from 9 to 15 March. A $1 million prize was at stake for the winner of the series.
The creators of AlphaGo set out to teach the system to play well without hand-coding expert strategies: given the rules, the program learned from records of human games and from playing against itself. The process demanded enormous amounts of time and computing power, but it ultimately paid off, and the neural network has beaten the champion.
Before the series began, the Korean master was confident of an outright victory. He said he would play AlphaGo again in a couple of years, once it had become a bit "smarter".
However, Lee was wrong: AlphaGo had already surpassed him.
Alexander Kraynov, an expert in artificial intelligence and computer vision, noted that Lee Se-dol drew hasty conclusions after watching AlphaGo's match against the European champion Fan Hui. Since then, AlphaGo had had plenty of time to prepare more carefully for its next serious match, and it clearly did not waste it.
The neural network’s victory means a great deal. Most importantly, to beat a human at such a complex game AlphaGo did not need to search through every possible move; it learned to play differently. Unlike chess, Go has no exhaustive catalogue of openings and strategies of the kind found in chess textbooks; players instead solve specific local and global problems as they arise. AlphaGo does much the same, using learned prediction to narrow the search to the most promising moves and Monte Carlo methods to evaluate them.
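The core Monte Carlo idea can be shown in a few lines: instead of enumerating the full game tree, estimate each move's strength by playing many random games to the end and comparing win rates. The sketch below does this for a toy Nim game (one pile, take 1 to 3 stones, taking the last stone wins); it illustrates only the flat-playout principle, not AlphaGo's actual search, which combines tree search with learned policy and value networks.

```python
import random

# Flat Monte Carlo move evaluation on toy Nim: estimate each legal
# move's win rate from random playouts, then pick the best. All names
# here are illustrative; this is a sketch of the principle only.

def random_playout(pile, my_turn):
    """Both sides play randomly to the end; return True if 'I' win."""
    while pile > 0:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return my_turn          # current mover took the last stone
        my_turn = not my_turn
    return not my_turn

def monte_carlo_move(pile, playouts=2000):
    """Estimate win rates for every legal move; return (best move, rates)."""
    rates = {}
    for move in range(1, min(3, pile) + 1):
        remaining = pile - move
        if remaining == 0:
            rates[move] = 1.0       # taking the last stone wins outright
            continue
        wins = sum(random_playout(remaining, my_turn=False)
                   for _ in range(playouts))
        rates[move] = wins / playouts
    return max(rates, key=rates.get), rates

random.seed(0)
best, rates = monte_carlo_move(5)
print(best, rates)  # taking 1 stone (leaving a pile of 4) scores best
```

From a pile of 5, leaving 4 stones is the strongest move because the opponent can never take the last stone immediately; the playout statistics discover this without any hand-coded Nim theory, which is the appeal of the approach for games too large to search exhaustively.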