Volume 11, Number 5

Learning Chess and NIM with Transformers

  Authors

Michael DeLeo and Erhan Guven, Johns Hopkins University, USA

  Abstract

Representing a board game’s state space, actions, and transition model in a text-based notation enables a wide variety of NLP applications suited to the strengths of language models. Such few-shot language models can offer insight into a range of interesting problems, such as learning the rules of a game, detecting player behavior patterns, player attribution, and ultimately learning the game in an unsupervised manner. In this study, we first applied the BERT model to the simple combinatorial game Nim to analyze its performance under varying amounts of noise. We evaluated the model against three agents: the Nim Guru, a random player, and a Q-learner. Second, we applied the BERT model to the game of chess using a large set of high-Elo Stockfish games with exhaustive encyclopedia openings. Finally, we showed that the model practically learns the rules of both Nim and chess, and that it can play competently against its opponents, winning under some interesting conditions.
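The core idea above, serializing game states and moves as plain text so a masked language model can predict a move, can be sketched in a few lines. The snippet below is an illustration only, not the authors' pipeline: it queries a stock bert-base-uncased checkpoint through the Hugging Face fill-mask pipeline, and the Nim notation in the example string is a hypothetical encoding invented for this sketch.

    # Illustrative sketch only (assumptions: game transcripts serialized as
    # text; stock pretrained BERT rather than the paper's fine-tuned model).
    from transformers import pipeline

    # Masked-language-model head over vanilla BERT.
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # A Nim position and move history rendered as text; this notation is
    # hypothetical, chosen only to show the encoding idea.
    sequence = "piles 3 4 5 . take 2 from pile 1 . take [MASK] from pile 2"

    # Ask the model for its top candidates for the masked "next move" token.
    for candidate in fill_mask(sequence, top_k=3):
        print(candidate["token_str"], round(candidate["score"], 4))

Fine-tuning on a corpus of such transcripts, rather than using the stock checkpoint, is what would let the model's masked-token predictions reflect the game's actual rules.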

  Keywords

Natural Language Processing, Chess, BERT, Sequence Learning.