Publication

CoXQL: A Dataset for Parsing Explanation Requests in Conversational XAI Systems

Qianli Wang; Tatiana Anikina; Nils Feldhus; Simon Ostermann; Sebastian Möller
In: Yaser Al-Onaizan; Mohit Bansal; Yun-Nung (Vivian) Chen (eds.). Findings of the Association for Computational Linguistics: EMNLP 2024. Conference on Empirical Methods in Natural Language Processing (EMNLP-2024), November 11-16, Miami, Florida, USA, Association for Computational Linguistics, 2024.

Abstract

Conversational explainable artificial intelligence (ConvXAI) systems based on large language models (LLMs) have garnered significant interest from the research communities in natural language processing (NLP) and human-computer interaction (HCI). Such systems can answer user questions about explanations in dialogues, and they have the potential to enhance users' comprehension and offer more information about the decision-making and generation processes of LLMs. Currently available ConvXAI systems are based on intent recognition rather than free chat, as intent recognition has been found to be more precise and reliable in identifying users' intentions. However, intent recognition remains challenging for ConvXAI, since little training data exists and the domain is highly specific, with a broad range of XAI methods onto which requests must be mapped. To bridge this gap, we present CoXQL, the first dataset for user intent recognition in ConvXAI, covering 31 intents, seven of which require filling multiple slots. Subsequently, we enhance an existing parsing approach by incorporating template validations and evaluate several LLMs on CoXQL using different parsing strategies. We conclude that the improved parsing approach (MP+) outperforms previous approaches. We also find that intents with multiple slots remain highly challenging for LLMs.
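The abstract describes parsing explanation requests into intents, some of which require multiple slots, and validating parses against templates. The following is a minimal, hypothetical sketch of such slot-template validation; the intent names, slot names, and `validate` function are illustrative assumptions and are not taken from the CoXQL dataset or the authors' MP+ implementation.

```python
# Hypothetical sketch of template-based validation for parsed explanation requests.
# Intent and slot names are illustrative only; they are not from CoXQL or MP+.
from dataclasses import dataclass, field

# A template lists the slots an intent requires and the values each slot may take.
# A value of None means any filler is acceptable for that slot.
TEMPLATES = {
    "feature_importance": {"method": {"lime", "shap"}, "instance_id": None},
    "counterfactual": {"instance_id": None},
    "global_explanation": {},  # no slots required
}

@dataclass
class Parse:
    intent: str
    slots: dict = field(default_factory=dict)

def validate(parse: Parse) -> bool:
    """Return True if the parse matches a known intent template."""
    template = TEMPLATES.get(parse.intent)
    if template is None:
        return False  # unknown intent
    for slot, allowed in template.items():
        if slot not in parse.slots:
            return False  # required slot is missing
        if allowed is not None and parse.slots[slot] not in allowed:
            return False  # slot value outside the allowed set
    return True

# Example: a multi-slot request such as
# "Which features mattered for instance 12 according to SHAP?"
print(validate(Parse("feature_importance", {"method": "shap", "instance_id": "12"})))  # True
print(validate(Parse("feature_importance", {"method": "shap"})))                        # False (missing slot)
```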
