Non-Discrimination-by-Design: Handlungsempfehlungen für die Entwicklung von vertrauenswürdigen KI-Services

Jonas Rebstadt; Henrik Kortum-Landwehr; Laura Gravemeier; Birgid Eberhardt; Oliver Thomas
In: HMD - Praxis der Wirtschaftsinformatik (HMD), Pages 1-17, Springer, 2022.


In addition to human-induced discrimination against groups or individuals, a growing number of AI systems have exhibited discriminatory behavior in the recent past. Examples include recruiting AI systems that discriminate against female candidates, chatbots with racist tendencies, and object recognition in autonomous vehicles that performs worse at recognizing Black people than white people. This behavior arises from the intentional or unintentional reproduction of pre-existing biases in the training data, but also in the development teams. As AI systems increasingly establish themselves as an integral part of both private and economic spheres of life, science and practice must address the ethical framework for their use. This work therefore aims to make an economically and scientifically relevant contribution to this discourse, using the example of the Smart Living ecosystem, which touches on a highly private sphere of life for a diverse population. For this paper, requirements regarding non-discrimination for AI systems in the Smart Living ecosystem were collected both from the literature and through expert interviews in order to derive recommendations for action for the development of AI services. These recommendations are primarily intended to support practitioners in integrating ethical factors into their process models for the development of AI systems, thus advancing the development of non-discriminatory AI services.
