Publication

Modeling for Explainability: Ethical Decision-Making in Automated Resource Allocation

Christina Cociancig; Christoph Lüth; Rolf Drechsler
In: Proceedings of the Upper-Rhine Artificial Intelligence Symposium (UR-AI 2021), October 27, 2021, Kaiserslautern, Germany.

Abstract

Decisions delegated to artificial intelligence face an alignment problem: humans expect the algorithm to make fast, well-informed decisions that align with human morals. In the design and engineering of algorithms, ethical principles enter the black box explicitly and implicitly as functional or non-functional properties, much to the detriment of explainability and transparency. Previous work has established surrogate modeling to promote the explainability and transparency of the decision-making process. We extend this work by modeling decision processes as lower-complexity decision trees and as labeled transition systems, a formalism inherent to bisimulation theory, and by evaluating them on synthetic data with a rule-based algorithm. As a case study, we analyze the triage processes in German and Austrian hospitals during the COVID-19 pandemic, based on the official guidelines that regulate the allocation of intensive care unit beds. We find that while the decision processes are similar, the systems do not behave in the same manner: the diverging behavior manifests as a discrepant ratio of patients treated in intensive care as opposed to the general ward. These insights lead us to conclude that our approach ensures ethical decision-making in healthcare and should be considered for its explainability and transparency.
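
To make the modeling step concrete, the following is a minimal illustrative sketch, not the authors' implementation: two hypothetical triage processes are encoded as labeled transition systems, and a naive partition-refinement check decides whether their start states are bisimilar, i.e., whether the systems behave in the same manner. All state and action names are invented placeholders, and the diverging "bed_full" branch is an assumption chosen only to demonstrate how similar-looking processes can fail to be behaviorally equivalent.

```python
# Illustrative sketch (hypothetical states and actions, not the
# authors' triage models). An LTS is a set of transitions
# (state, action, successor).
lts_a = {
    ("assess", "severe", "icu_check"),
    ("assess", "mild", "ward"),
    ("icu_check", "bed_free", "icu"),
    ("icu_check", "bed_full", "ward"),       # full ICU falls back to the ward
    ("icu", "intensive_care", "done"),
    ("ward", "standard_care", "done"),
}
lts_b = {
    ("assess", "severe", "icu_check"),
    ("assess", "mild", "ward"),
    ("icu_check", "bed_full", "discharge"),  # diverging branch (assumed)
    ("icu_check", "bed_free", "icu"),
    ("icu", "intensive_care", "done"),
    ("ward", "standard_care", "done"),
    ("discharge", "home_care", "done"),
}

def bisimilar(lts1, start1, lts2, start2):
    """Decide strong bisimilarity of two start states by partition
    refinement over the disjoint union of both transition systems."""
    # Tag every state with its system so the two LTSs cannot clash.
    trans = {(("a", s), act, ("a", t)) for (s, act, t) in lts1} | \
            {(("b", s), act, ("b", t)) for (s, act, t) in lts2}
    states = {s for (s, _, _) in trans} | {t for (_, _, t) in trans}

    # Start with a single block and split blocks until stable.
    partition = [frozenset(states)]
    changed = True
    while changed:
        changed = False
        current = partition  # freeze the partition used for signatures

        def block_of(state):
            return next(b for b in current if state in b)

        refined = []
        for block in partition:
            # Group states by their signature: which actions lead
            # into which blocks of the current partition.
            groups = {}
            for s in block:
                sig = frozenset((act, block_of(t))
                                for (src, act, t) in trans if src == s)
                groups.setdefault(sig, set()).add(s)
            if len(groups) > 1:
                changed = True
            refined.extend(frozenset(g) for g in groups.values())
        partition = refined

    # The start states are bisimilar iff they share a block.
    return any(("a", start1) in b and ("b", start2) in b for b in partition)

# The two processes look alike but are not bisimilar: when the ICU is
# full, they route patients into observably different care paths.
print(bisimilar(lts_a, "assess", lts_b, "assess"))  # prints False
```

In this sketch, the partition refinement converges once no block can be split further; the two start states end up in different blocks precisely because the "bed_full" transitions lead to states with different observable continuations, mirroring the paper's observation that structurally similar triage processes can still differ in behavior.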