
Explaining Go: Challenges in Achieving Explainability in AI Go Programs

Zack Garrett / December 7, 2023

How to cite this article:
Garrett, Z. (2023). Explaining Go: Challenges in Achieving Explainability in AI Go Programs. Journal of Go Studies, 17(2), 29-60. doi: 10.62578/464862

Abstract
There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models, the harder they are to explain. Go-playing AIs like AlphaGo and KataGo provide fantastic examples of this phenomenon. In this paper, I discuss a non-exhaustive list of the leading theories of explanation and what each of these theories would say about the explainability of AI-played Go moves. Finally, I consider the possibility of ever explaining AI-played Go moves in a way that meets the four principles of XAI. I conclude, somewhat pessimistically, that Go is not as imminently explainable as other domains. As such, the probability of having an XAI for Go that meets the four principles is low.