Pattern-Guided Integrated Gradients

Robert Schwarzenberg, Steffen Castle

In: Proceedings of the ICML 2020 Workshop on Human Interpretability in Machine Learning (WHI), International Conference on Machine Learning (ICML-2020), Vienna, Austria (online).


Integrated Gradients (IG) and PatternAttribution (PA) are two established explainability methods for neural networks. Both methods are theoretically well-founded. However, they were designed to overcome different challenges. In this work, we combine the two methods into a new method, Pattern-Guided Integrated Gradients (PGIG). PGIG inherits important properties from both parent methods and passes stress tests that the originals fail. In addition, we benchmark PGIG against nine alternative explainability approaches (including its parent methods) in a large-scale image degradation experiment and find that it outperforms all of them.
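To make the starting point concrete, the sketch below shows the standard Integrated Gradients attribution that PGIG builds on: attributions are the input-minus-baseline difference scaled by the path-averaged gradient. The toy model `f`, its analytic gradient, and the midpoint Riemann approximation are illustrative assumptions, not the paper's implementation; PGIG additionally guides the gradient signal with learned patterns from PatternAttribution, which is not reproduced here.

```python
import numpy as np

def f(x):
    # Toy differentiable "model" standing in for a network: f(x) = sum(x^2).
    return np.sum(x ** 2)

def grad_f(x):
    # Analytic gradient of the toy model.
    return 2.0 * x

def integrated_gradients(x, baseline, grad_fn, steps=100):
    """Riemann-sum (midpoint) approximation of IG:
    (x - baseline) * average over alpha in (0, 1) of
    grad f(baseline + alpha * (x - baseline))."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    avg_grad = total / steps
    return (x - baseline) * avg_grad

x = np.array([1.0, 2.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(x, baseline, grad_f)
# IG's completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

For this quadratic toy model the attributions come out to `x**2` elementwise, and their sum matches `f(x) - f(baseline)` exactly, illustrating the completeness property that makes IG theoretically attractive.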


schwarzenberg-castle-2020-icml-whi-v2.pdf (PDF, 965 KB)

German Research Center for Artificial Intelligence (Deutsches Forschungszentrum für Künstliche Intelligenz, DFKI)