
Publication

Considering Likelihood in NLP Classification Explanations with Occlusion and Language Modeling

David Harbecke; Christoph Alt
In: Proceedings of the ACL 2020 Student Research Workshop. Annual Meeting of the Association for Computational Linguistics (ACL 2020), July 5-10, Seattle, WA, USA. Association for Computational Linguistics, 2020.

Abstract

Recently, state-of-the-art NLP models have gained an increasingly deep syntactic and semantic understanding of language, and explanation methods are crucial for understanding their decisions. Occlusion is a well-established method that provides explanations on discrete language data, e.g., by removing a language unit from an input and measuring the impact on the model's decision. We argue that current occlusion-based methods often produce invalid or syntactically incorrect language data, neglecting the improved abilities of recent NLP models. Furthermore, gradient-based explanation methods disregard the discrete distribution of data in NLP. We therefore propose OLM, a novel explanation method that combines occlusion and language models to sample valid and syntactically correct replacements with high likelihood, given the context of the original input. We lay out a theoretical foundation that alleviates these weaknesses of other explanation methods in NLP and present results that underline the importance of considering data likelihood in occlusion-based explanation.
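
The core idea lends itself to a short sketch. The following Python snippet is a minimal illustration under stated assumptions, not the authors' reference implementation: the model name bert-base-uncased, the helper name olm_relevance, and the classify callable are placeholders chosen for the example, and joining tokens with spaces is a crude detokenization. It approximates the relevance of a token as the drop in the classifier's class probability when that token is resampled from a masked language model given its context.

from transformers import pipeline

# A masked language model proposes in-context replacements for the occluded token.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def olm_relevance(tokens, position, classify, n_samples=10):
    """Estimate how much tokens[position] contributes to a classifier's decision.

    classify: callable mapping a list of tokens to the probability of the
    predicted class (a stand-in for any NLP classification model).
    """
    original_score = classify(tokens)
    # Occlude the target token with the LM's mask symbol.
    masked = tokens.copy()
    masked[position] = fill_mask.tokenizer.mask_token
    # Sample likely replacements given the surrounding context
    # (space-joining is a simplification for illustration).
    candidates = fill_mask(" ".join(masked), top_k=n_samples)
    # Expected class probability under LM-weighted resampling of the token.
    weighted = sum(
        c["score"] * classify(tokens[:position] + [c["token_str"]] + tokens[position + 1:])
        for c in candidates
    )
    expected_score = weighted / sum(c["score"] for c in candidates)
    # Relevance: how strongly the prediction relies on the original token.
    return original_score - expected_score

With a binary sentiment model wrapped as classify, for example, olm_relevance(["the", "movie", "was", "great"], 3, classify) would estimate the contribution of "great" by replacing it with contextually likely words rather than deleting it, so the perturbed inputs remain valid, syntactically correct language.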
