International Society of Go Studies

Go Players Should Not Trust AI Win Rate

Quentin Rendu / December 7, 2023

How to cite this article:
Rendu, Q. (2023). Go Players Should Not Trust AI Win Rate. Journal of Go Studies, 17(2), 61-88. doi: 10.62578/476299

Abstract
      The advent of artificial intelligence (AI) has transformed the landscape of various strategic games, including Go. In 2016, the AI-powered engine AlphaGo defeated one of the world's strongest players. Since then, Go engines have routinely been used by amateur and professional Go players to analyse their games. In the early stages of AI analysis, Go players relied solely on the AI win rate, the only available indicator. However, the AI win rate does not accurately reflect the win rate of human Go players and can be misleading.
      KataGo, first released in 2019, was the first engine to provide score predictions in addition to win rates. While it is now possible to evaluate board positions with a score, it remains unclear how this score translates into human win rates. In this work, a large database of online and professional games is analysed to extract the win rate of a human player based on their strength and the stage of the game. As expected, the human win rate is significantly lower than the AI win rate, even for 9-dan professional players. A general formula is provided to compute the win rate based on player strength and move number. This feature offers new insights into the relative importance of mistakes and can assist players in making improved decisions during games.
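The general idea described above, converting an engine's score prediction into a human win probability that depends on player strength and game stage, can be sketched with a simple logistic model. The functional form and all constants below are illustrative assumptions for exposition only, not the formula derived in the paper:

```python
import math

def human_win_rate(score_lead, rank, move_number, total_moves=250):
    """Illustrative mapping from a KataGo-style score lead (in points)
    to a human win probability.

    `rank` is a numeric strength (e.g. 1 for 1 dan, 9 for 9 dan).
    The model encodes the qualitative finding that the same lead is
    more decisive for stronger players and later in the game; the
    scaling constants are hypothetical.
    """
    progress = min(move_number / total_moves, 1.0)  # 0 = opening, 1 = endgame
    # Hypothetical sensitivity: grows with strength and game progress.
    k = 0.05 * rank * (0.5 + progress)
    return 1.0 / (1.0 + math.exp(-k * score_lead))
```

Under this toy model, a 5-point lead held by a 9-dan near the endgame converts to a much higher win probability than the same lead held by a 1-dan in the opening, while the AI's own win rate for such a lead would be close to certainty in both cases.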

Keywords: Go, Baduk, Weiqi, AI, KataGo, Win rate, Statistics

