
Publication

Evaluating Human-Centered AI Explanations: Introduction of an XAI Evaluation Framework for Fact-Checking

Vera Schmitt; Balazs Csomor; Joachim Meyer; Luis-Felipe Villa-Arenas; Charlott Jakob; Tim Polzehl; Sebastian Möller
In: Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation (MAD-24), ISBN 979-8-4007-0552-6, Association for Computing Machinery, New York, NY, USA, 2024.

Abstract

The rapidly increasing amount of online information and the advent of Generative Artificial Intelligence (GenAI) make the manual verification of information impractical. Consequently, AI systems are deployed to detect disinformation and deepfakes. Prior studies have indicated that combining AI and human capabilities yields enhanced performance in detecting disinformation. Furthermore, the European Union (EU) AI Act mandates human supervision for AI applications in areas impacting essential human rights, like freedom of speech, necessitating that AI systems be transparent and provide adequate explanations to ensure comprehensibility. Extensive research has been conducted on incorporating explainability (XAI) attributes to augment AI transparency, yet such efforts often lack a human-centric assessment. The effectiveness of such explanations also varies with the user's prior knowledge and personal attributes. Therefore, we developed a framework for validating XAI features for the collaborative human-AI fact-checking task. The framework allows the testing of XAI features with objective and subjective evaluation dimensions and follows human-centric design principles when displaying information about the AI system to the users. The framework was tested in a crowdsourcing experiment with 433 participants, including 406 crowdworkers and 27 journalists, on the collaborative disinformation detection task. The tested XAI features increase the AI system's perceived usefulness, understandability, and trust. With this publication, the XAI evaluation framework is made open source.
