Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, by David Silver et al.
Abstract:
The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.
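For readers who want a concrete picture of what “tabula rasa reinforcement learning from games of self-play” means, here is a minimal sketch of the outer training loop. It is an illustration, not the authors’ implementation: the `game` and `network` interfaces (`legal_moves`, `apply`, `update`, and so on) are hypothetical, and the search is reduced to a uniform placeholder where the real algorithm uses a network-guided Monte Carlo tree search.

```python
import random

def mcts_policy(network, game, state):
    # Placeholder for a Monte Carlo tree search guided by the network's
    # policy and value heads; here it simply returns a uniform
    # distribution over legal moves so the sketch is self-contained.
    moves = game.legal_moves(state)
    return [1.0 / len(moves)] * len(moves)

def play_one_game(network, game):
    """Self-play one game, recording (state, search_policy) pairs."""
    history = []
    state = game.initial_state()
    while not game.is_terminal(state):
        pi = mcts_policy(network, game, state)
        history.append((state, pi))
        move = random.choices(game.legal_moves(state), weights=pi)[0]
        state = game.apply(state, move)
    z = game.outcome(state)  # e.g. +1 / 0 / -1 from the first player's view
    return [(s, p, z) for (s, p) in history]

def train(network, game, iterations, games_per_iteration):
    # Tabula rasa: the network starts randomly initialised, and every
    # training signal comes from the agent's own games.
    for _ in range(iterations):
        examples = []
        for _ in range(games_per_iteration):
            examples.extend(play_one_game(network, game))
        # Fit the policy head to the search probabilities and the value
        # head to the final outcome z (the loss itself is abstracted away).
        network.update(examples)
    return network
```

Even this crude sketch preserves the point the abstract makes: no human game records and no handcrafted evaluation function enter anywhere; the only inputs are the rules of the game and the agent’s own play.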
The achievements of the AlphaZero team and their algorithm merit joyous celebration.
Joyous celebration, that is, which recognizes that AlphaZero masters unambiguous, low-dimensional data governed by deterministic rules that define the outcomes for any state, more quickly and completely than any human can.
Chess, shogi, and Go appear complex to humans because of the enormous number of possible continuations. But every outcome is the result of applying deterministic rules to unambiguous, low-dimensional data, which is precisely what AlphaZero excels at.
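To make “unambiguous, low-dimensional data” concrete, consider how compact a chess position really is. The toy encoding below is illustrative only (it is not AlphaZero’s input representation): the whole state fits in an 8x8 grid of small integers, and a move is a pure function of that state. The games’ apparent complexity comes from the branching of such transitions over many moves, not from any ambiguity in the state itself.

```python
EMPTY, WHITE_PAWN = 0, 1  # one small integer per piece type; the rest omitted

def initial_board():
    # The complete, fully observed state: an 8x8 grid of small integers.
    board = [[EMPTY] * 8 for _ in range(8)]
    for file in range(8):
        board[1][file] = WHITE_PAWN
    return board

def apply_move(board, src, dst):
    # A move is a pure, deterministic function of the state: the same
    # position and move always yield the same successor position.
    nxt = [row[:] for row in board]
    nxt[dst[0]][dst[1]] = nxt[src[0]][src[1]]
    nxt[src[0]][src[1]] = EMPTY
    return nxt

board = apply_move(initial_board(), (1, 4), (3, 4))  # pawn e2-e4
```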
What hasn’t been shown is equivalent performance on ambiguous, high-dimensional data governed by rules that are only partially known, if known at all, and then only for a limited set of sub-cases. For those cases, well, you still need a human being.
That’s not to take anything away from the AlphaZero team, but to recognize AlphaZero’s strengths and to avoid applying it where it is weak.