
Publication

Integrating permutation feature importance with conformal prediction for robust Explainable Artificial Intelligence in predictive process monitoring

Nijat Mehdiyev; Maxim Majlatow; Peter Fettke
In: Engineering Applications of Artificial Intelligence, Vol. 149, Pages 0-0, Science Direct, 6/2025.

Abstract

As artificial intelligence (AI) systems are increasingly deployed in high-stakes environments, the need for explanations that convey uncertainty information has become evident. Conventional explainable AI (XAI) methods often overlook uncertainty, focusing solely on point predictions. To address this gap, we propose combining permutation feature importance (PFI) with predictive uncertainty evaluation measures. This novel approach examines the significance of features by relating them to the model's confidence in its predictions. By using split conformal prediction (SCP) to quantify predictive uncertainty and integrating its outcomes into PFI, we aim to enhance the robustness and interpretability of machine learning (ML) algorithms. More importantly, we examine three scenarios for conformal prediction-based PFI explanations: permuting feature values in the test data, in the calibration data, or in both. These scenarios assess the impact of feature permutations from different perspectives, revealing feature sensitivity and the importance of features in various settings. We also perform a series of sensitivity analyses, particularly exploring calibration data size and computational efficiency, to demonstrate the robustness and scalability of our approach for industrial applications. Our comprehensive evaluation offers insights into the impact of features on predictions and on their associated confidence levels. We validate the proposed approach through a real-world predictive process monitoring use case in manufacturing.
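To make the idea concrete, the following is a minimal sketch (not code from the paper) of how conformal prediction-based PFI could be realized. It assumes a scikit-learn random forest regressor, symmetric split conformal intervals built from absolute calibration residuals, and empirical test-set coverage as the uncertainty evaluation measure; the function names (e.g. conformal_pfi) and scenario labels are illustrative choices, not the authors' implementation.

    """Sketch: permutation feature importance measured against a split
    conformal prediction (SCP) uncertainty metric. Assumptions as noted
    in the text above; this is not the paper's reference implementation."""
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    ALPHA = 0.1  # target miscoverage rate (90% prediction intervals)
    rng = np.random.default_rng(0)

    def conformal_half_width(model, X_calib, y_calib, alpha=ALPHA):
        """Split conformal: corrected quantile of absolute calibration residuals."""
        scores = np.abs(y_calib - model.predict(X_calib))
        n = len(scores)
        level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample correction
        return np.quantile(scores, level)

    def coverage(model, X_test, y_test, half_width):
        """Fraction of test targets inside [y_hat - q, y_hat + q]."""
        y_hat = model.predict(X_test)
        return np.mean(np.abs(y_test - y_hat) <= half_width)

    def conformal_pfi(model, X_calib, y_calib, X_test, y_test, feature, scenario="test"):
        """Importance = drop in empirical coverage when one feature is permuted
        in the test data, the calibration data, or both (the three scenarios)."""
        q_base = conformal_half_width(model, X_calib, y_calib)
        base = coverage(model, X_test, y_test, q_base)

        Xc, Xt = X_calib.copy(), X_test.copy()
        if scenario in ("calibration", "both"):
            Xc[:, feature] = rng.permutation(Xc[:, feature])
        if scenario in ("test", "both"):
            Xt[:, feature] = rng.permutation(Xt[:, feature])

        q_perm = conformal_half_width(model, Xc, y_calib)
        permuted = coverage(model, Xt, y_test, q_perm)
        return base - permuted  # larger drop -> feature matters more for reliable intervals

    if __name__ == "__main__":
        # Synthetic stand-in for a predictive process monitoring dataset.
        X, y = make_regression(n_samples=1500, n_features=8, noise=10.0, random_state=0)
        X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
        X_calib, X_test, y_calib, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
        for j in range(X.shape[1]):
            imps = {s: conformal_pfi(model, X_calib, y_calib, X_test, y_test, j, s)
                    for s in ("test", "calibration", "both")}
            print(f"feature {j}: " + ", ".join(f"{s}={v:+.3f}" for s, v in imps.items()))

In this sketch, permuting a feature only in the test data leaves the conformal quantile unchanged and probes how the intervals generalize to perturbed inputs, whereas permuting the calibration data changes the quantile itself; averaging the importance over several random permutations would reduce the variance of the estimates.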