
Publication

A User Interface for Explaining Machine Learning Model Explanations

Md Abdul Kadir; Abdulrahman Mohamed Selim; Michael Barz; Daniel Sonntag
In: Companion Proceedings of the 28th International Conference on Intelligent User Interfaces (IUI '23 Companion), March 27-31, 2023, Sydney, NSW, Australia, Pages 59-63, ISBN 9798400701078, Association for Computing Machinery, New York, NY, United States, March 2023.

Abstract

Explainable Artificial Intelligence (XAI) is an emerging subdiscipline of Machine Learning (ML) and human-computer interaction, and discriminative models in particular need to be understood. An explanation of such ML models is vital when an AI system makes decisions with significant consequences, such as in healthcare or finance. By providing an input-specific explanation, users can gain confidence in an AI system’s decisions and be more willing to trust and rely on it. One problem is that interpreting example-based explanations for discriminative models, such as saliency maps, can be difficult because it is not always clear how the highlighted features contribute to the model’s overall prediction or decision. Moreover, saliency maps, which are state-of-the-art visual explanation methods, do not provide concrete information on the influence of particular features. We propose an interactive visualisation tool called EMILE-UI that allows users to evaluate the explanations provided for an image-based classification task, specifically those provided by saliency maps. The tool lets users assess whether a saliency map accurately reflects the true attention or focus of the corresponding model. It visualises the relationship between the ML model and its explanation of input images, making it easier to interpret saliency maps and to understand how the ML model actually arrives at its predictions. Our tool supports a wide range of deep learning image classification models and image data as inputs.
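For readers unfamiliar with the kind of explanation the abstract refers to, the sketch below shows one common way a saliency map is produced for an image classifier: the gradient of the predicted class score with respect to the input pixels. This is a minimal, illustrative example only; the paper does not specify which attribution methods or frameworks EMILE-UI uses, and the model choice (torchvision ResNet-18), the preprocessing pipeline, and the file name `example.jpg` are assumptions made for the sake of a self-contained sketch.

```python
# Minimal sketch of a vanilla gradient saliency map (assumed setup, not the paper's tool).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image


def vanilla_gradient_saliency(model, image_tensor):
    """Return a per-pixel saliency map: |d(top-class score) / d(input)|, max over colour channels."""
    model.eval()
    image_tensor = image_tensor.clone().requires_grad_(True)  # leaf tensor so .grad is populated
    scores = model(image_tensor.unsqueeze(0))                 # shape (1, num_classes)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()                           # gradient of the top-class score w.r.t. the input
    return image_tensor.grad.abs().max(dim=0).values          # (H, W) saliency map


if __name__ == "__main__":
    # ResNet-18 with ImageNet weights is an illustrative assumption, not the model used in the paper.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    img = preprocess(Image.open("example.jpg").convert("RGB"))  # hypothetical input image
    saliency = vanilla_gradient_saliency(model, img)
    print(saliency.shape)  # torch.Size([224, 224])
```

A tool such as the one described above would overlay a map like this on the input image so that users can judge whether the highlighted regions plausibly drive the model's prediction.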
