Publication

DIN SPEC 92001-3 Artificial Intelligence – Life Cycle Processes and Quality Requirements – Part 3: Explainability

Detlef Olschewski; André Bluhm; Joerg Firnkorn; Adriano Lucieri; Sebastian Palacio; Yeji Streppel; Carlos Zednik; Rebekka Görge; Maximilian Poretschkin; Nikolas Becker; Thomas Zielke; Christian Kruschel; Matthias Neumann-Brosig; Stephen Bäuerle; Marton Eifert; Erik Martori López; Felix Assion; Annegrit Seyerlein-Klug; Ute Schmid; Stefan Haufe; Antoine Gautier; Lukas Bieringer; Tarek R. Besold; Armin B. Cremers
Deutsches Institut für Normung (DIN), DIN SPEC, Vol. 92001-3, 2023.

Abstract

The discipline of Artificial Intelligence (AI) aims to develop systems capable of performing tasks that would otherwise require human intelligence. Although the principles governing these systems' operation continue to be the subject of ongoing research, their increasingly high performance has led to their proliferation in many different application domains. To ensure that AI systems are used efficiently, responsibly, and in a trustworthy way, their development and deployment should satisfy relevant quality criteria. Beyond the quality criteria for other kinds of computing systems as defined in ISO/IEC/IEEE 12207:2017, AI systems call for special-purpose criteria. This is the third document in a series. DIN SPEC 92001-1 establishes an AI quality meta-model and life cycle to highlight the general AI quality criteria of robustness, explainability, and performance. DIN SPEC 92001-2 specifies criteria to ensure AI system robustness. This document, DIN SPEC 92001-3, specifies criteria to promote explainability.