Presentation Information

[1Yin-B-11]Development of a Shogi AI Using Distributed Computing on Smartphones

〇Masatoshi Hidaka1, Kazuki Ota1, Tomohiro Hashimoto1, Tatsuya Harada1,2 (1. The University of Tokyo, 2. RIKEN)

Keywords:

Reinforcement Learning, Distributed Computing, Game

High-performance Shogi AI typically relies on expensive GPU clusters. This paper proposes a distributed computing system that utilizes smartphones to democratize access to such AI. We constructed a system using 30 iPhones for both reinforcement learning and competitive play. For learning, we applied the KLENT model-free algorithm, distributing self-play simulations across devices while updating parameters on a central server. For matches, we implemented a consultation approach in which independent Monte Carlo Tree Search agents vote on moves. Diversity among agents was ensured by assigning each a different fine-tuned model and by injecting noise into their searches. Experiments confirmed that this majority-voting strategy achieved higher move prediction accuracy than the best single model. The system demonstrated robustness against network instability and was successfully deployed in the 35th World Computer Shogi Championship, validating the efficacy of smartphone-based volunteer computing for game AI.
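The consultation step described above can be sketched as a plurality vote over moves proposed by independent search agents. This is a minimal illustrative sketch, not the authors' implementation: the `consult` function, the move notation, and the stand-in for per-device MCTS results are all assumptions.

```python
from collections import Counter

def consult(proposed_moves):
    """Plurality vote: return the move proposed by the most agents.

    In the actual system, each entry would come from an independent
    MCTS search running on a separate smartphone (a hypothetical
    simplification here); ties fall to the earliest-seen move.
    """
    tally = Counter(proposed_moves)
    move, votes = tally.most_common(1)[0]
    return move, votes

# Example: five agents propose moves in (illustrative) USI notation.
moves = ["7g7f", "2g2f", "7g7f", "7g7f", "2g2f"]
print(consult(moves))  # → ('7g7f', 3)
```

Because each agent searches with a different fine-tuned model and injected noise, their proposals differ, and the vote aggregates them into a single move even if some devices drop off the network mid-game.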