
Publication

EXPIL: Explanatory Predicate Invention for Learning in Games

Jingyuan Sha; Hikaru Shindo; Quentin Delfosse; Kristian Kersting; Devendra Singh Dhami
In: Computing Research Repository eprint Journal (CoRR), Vol. abs/2406.06107, Pages 1-11, arXiv, 2024.

Abstract

Reinforcement learning (RL) has proven to be a powerful tool for training agents that excel in various games. However, the black-box nature of neural network models often hinders our ability to understand the reasoning behind the agent's actions. Recent research has attempted to address this issue by using the guidance of pretrained neural agents to encode logic-based policies, allowing for interpretable decisions. A drawback of such approaches is the requirement of large amounts of predefined background knowledge in the form of predicates, limiting their applicability and scalability. In this work, we propose a novel approach, Explanatory Predicate Invention for Learning in Games (EXPIL), that identifies and extracts predicates from a pretrained neural agent, later used in logic-based agents, reducing the dependency on predefined background knowledge. Our experimental evaluation on various games demonstrates the effectiveness of EXPIL in achieving explainable behavior in logic agents while requiring less background knowledge.
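To make the idea of a logic-based policy over predicates concrete, here is a minimal illustrative sketch, not the authors' implementation: hand-written boolean predicates over a toy game state feed a small rule set that selects an action. All names (`State`, `closeby`, `enemy_left`, the actions) are invented for illustration; in EXPIL such predicates would be discovered from a pretrained neural agent rather than predefined.

```python
# Hypothetical sketch of a logic-based game policy. In EXPIL-style
# systems, the predicates below would be invented/extracted from a
# pretrained neural agent instead of being hand-coded.

from dataclasses import dataclass


@dataclass
class State:
    """Toy game state: horizontal positions of the agent and one enemy."""
    agent_x: float
    enemy_x: float


# Predicates: boolean functions of the state (names are illustrative).
def closeby(s: State) -> bool:
    return abs(s.agent_x - s.enemy_x) < 1.0


def enemy_left(s: State) -> bool:
    return s.enemy_x < s.agent_x


# A tiny rule set: each action fires when every predicate in its body holds.
RULES = [
    ("jump",       [closeby]),
    ("move_left",  [lambda s: not closeby(s), enemy_left]),
    ("move_right", [lambda s: not closeby(s), lambda s: not enemy_left(s)]),
]


def act(state: State) -> str:
    """Return the first action whose rule body is fully satisfied."""
    for action, body in RULES:
        if all(pred(state) for pred in body):
            return action
    return "noop"


print(act(State(agent_x=0.0, enemy_x=0.5)))  # enemy is close -> "jump"
print(act(State(agent_x=0.0, enemy_x=3.0)))  # far, to the right -> "move_right"
```

Because every decision traces back to which predicates fired, the policy's behavior can be read off the rules, which is the interpretability benefit the abstract describes.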
