
Publication

From Images to Connections: Can DQN with GNNs learn the Strategic Game of Hex?

Yannik Keller; Jannis Blüml; Gopika Sudhakaran; Kristian Kersting
In: Computing Research Repository (CoRR), Vol. abs/2311.13414, Pages 1-16, arXiv, 2023.

Abstract

The gameplay of strategic board games such as chess, Go and Hex is often characterized by combinatorial, relational structures—capturing distinct interactions and non-local patterns—and not just images. Nonetheless, most common self-play reinforcement learning (RL) approaches simply approximate policy and value functions using convolutional neural networks (CNN). A key feature of CNNs is their relational inductive bias towards locality and translational invariance. In contrast, graph neural networks (GNN) can encode more complicated and distinct relational structures. Hence, we investigate the crucial question: Can GNNs, with their ability to encode complex connections, replace CNNs in self-play reinforcement learning? To this end, we conduct a comparison with Hex—an abstract yet strategically rich board game—serving as our experimental platform. Our findings reveal that GNNs excel at dealing with long-range dependency situations in game states and are less prone to overfitting, but also show a reduced proficiency in discerning local patterns. This suggests a potential paradigm shift, signaling the use of game-specific structures to reshape self-play reinforcement learning.
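To illustrate the core idea of the abstract—encoding the board as a graph rather than an image—below is a minimal sketch, not the paper's actual architecture. The class name HexGNNQNet, the averaging-based message passing, and the hexagonal adjacency construction are illustrative assumptions; the paper's DQN and GNN details may differ.

```python
# Minimal sketch (assumption, not the authors' implementation): a Hex board
# represented as a graph, with per-cell Q-values from a simple message-passing GNN.
import torch
import torch.nn as nn


def hex_adjacency(size: int) -> torch.Tensor:
    """Row-normalized adjacency (with self-loops) for a size x size Hex board."""
    n = size * size
    adj = torch.eye(n)
    # Hexagonal neighborhood of cell (r, c).
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]
    for r in range(size):
        for c in range(size):
            for dr, dc in offsets:
                rr, cc = r + dr, c + dc
                if 0 <= rr < size and 0 <= cc < size:
                    adj[r * size + c, rr * size + cc] = 1.0
    return adj / adj.sum(dim=1, keepdim=True)


class HexGNNQNet(nn.Module):
    """Hypothetical GNN-based Q-network: neighborhood averaging + linear layers."""

    def __init__(self, size: int, hidden: int = 64, layers: int = 4):
        super().__init__()
        self.register_buffer("adj", hex_adjacency(size))
        dims = [3] + [hidden] * layers          # 3 node features: empty / own / opponent stone
        self.layers = nn.ModuleList(
            nn.Linear(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])
        )
        self.head = nn.Linear(hidden, 1)        # one Q-value per board cell (action)

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: (batch, n_cells, 3) one-hot cell states
        h = node_feats
        for lin in self.layers:
            h = torch.relu(lin(self.adj @ h))   # aggregate neighbors, then transform
        return self.head(h).squeeze(-1)         # (batch, n_cells) Q-values


# Usage: Q-values for an empty 5x5 board.
board = torch.zeros(1, 25, 3)
board[..., 0] = 1.0                             # mark every cell as empty
q_values = HexGNNQNet(size=5)(board)
print(q_values.shape)                           # torch.Size([1, 25])
```

The point of the sketch is the representation choice: the hexagonal adjacency makes the network's inductive bias follow the game's connectivity, whereas a CNN over a board image is biased toward local, translation-invariant pixel patterns.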
