Mathematical models and their computer-aided simulation have played a fundamental role in scientific and technological progress for many decades. They typically describe mechanistic relationships of a real-world system and are based on hypotheses about how it functions. The complexity of such models is limited, however, because they are designed by a modeler and require strongly simplifying assumptions to remain manageable. In contrast, models for data-driven learning, in particular artificial neural networks, can detect complex patterns in data and exploit them for regression or classification tasks. But their highly specialised and complex decision rules are difficult to interpret, and they require enormous amounts of data, partly because existing world knowledge can be integrated only with difficulty, if at all.
The research department on Neuro-Mechanistic Modeling focuses on hybrid approaches that combine mechanistic and AI-based models. In our projects we will develop methods that leverage the expressive power of neural networks while integrating interpretable mechanistic descriptions, thus combining the best of both worlds. In contrast to purely neural models, neuro-mechanistic models make it possible to integrate domain knowledge, and they remain effective even when only moderate amounts of data are available, as is common, for instance, in life-science applications. They are easier to interpret and generalise better to unseen inputs.
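To make the hybrid idea concrete, here is a minimal, illustrative sketch, not the department's actual method: a toy logistic-growth ODE supplies the mechanistic part, and a small neural network learns a residual correction from data. The class name, the network size, and the choice of logistic growth are all assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class HybridGrowthModel(nn.Module):
    """Logistic growth (mechanistic) plus a learned neural correction term."""

    def __init__(self, r: float = 0.5, K: float = 10.0):
        super().__init__()
        self.r, self.K = r, K  # known mechanistic parameters
        # Small network that learns whatever dynamics the mechanistic part misses.
        self.residual = nn.Sequential(
            nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1)
        )

    def dxdt(self, x: torch.Tensor) -> torch.Tensor:
        mechanistic = self.r * x * (1.0 - x / self.K)  # domain knowledge
        return mechanistic + self.residual(x)          # data-driven residual

    def forward(self, x0: torch.Tensor, steps: int, dt: float = 0.1) -> torch.Tensor:
        # Simple explicit-Euler rollout of the hybrid ODE.
        xs, x = [x0], x0
        for _ in range(steps):
            x = x + dt * self.dxdt(x)
            xs.append(x)
        return torch.stack(xs)

model = HybridGrowthModel()
trajectory = model(torch.tensor([[0.5]]), steps=50)  # shape: (51, 1, 1)
```

Because the mechanistic term already captures the known dynamics, the network only has to learn the discrepancy, which is one reason such hybrids can be data-efficient and easier to interpret than purely neural models.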
The research group Responsible AI and Machine Ethics (RAIME) is dedicated to the complex ethical and, more broadly, normative challenges that arise in the development and deployment of AI systems. The focus is on the numerous necessary trade-offs between conflicting objectives, such as fairness versus accuracy, transparency versus efficiency, or individual versus collective benefit. The central research question is how these challenges can be addressed under normative or moral uncertainty, that is, in the absence of universally accepted criteria of correctness.
The research group Causal Models and Representations (CaMoRe) is dedicated to the question of how causal knowledge and modern AI systems can be combined. Under what conditions can an algorithm itself learn to distinguish causation from non-causal correlation, and when is this distinction impossible, making the algorithm unreliable? What data would we need to collect to support causal decision-making? CaMoRe explores these and other questions with a particular focus on reinforcement-learning methods, as well as applications in medicine, ecology, and climate science.
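As a toy illustration of the causation-versus-correlation question (a hypothetical confounding scenario, not one of the group's case studies), the sketch below simulates data in which a hidden severity variable makes a genuinely helpful treatment look harmful in purely observational data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder (e.g., disease severity) drives both variables.
severity = rng.normal(size=n)
treatment = severity + rng.normal(size=n)           # sicker patients get treated more
recovery = treatment - 3.0 * severity + rng.normal(size=n)

# Observationally, treatment correlates negatively with recovery...
print(np.corrcoef(treatment, recovery)[0, 1])       # negative: treatment looks harmful

# ...but under a (simulated) randomised intervention the true benefit appears.
treatment_rct = rng.normal(size=n)                  # treatment independent of severity
recovery_rct = treatment_rct - 3.0 * severity + rng.normal(size=n)
print(np.corrcoef(treatment_rct, recovery_rct)[0, 1])  # positive: treatment helps
```

The observational and interventional answers disagree precisely because of the hidden confounder, which is the kind of structure causal methods aim to detect or correct for.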
Head:
Prof. Dr. Verena Wolf
Verena.Wolf@dfki.de
Phone: +49 681 302 5586
Deputy Head:
Kevin Baum, M.A., M.Sc.
kevin.baum@dfki.de
Phone: +49 681 85775 5251
Timo Philipp Gros, M.Sc.
Timo_Philipp.Gros@dfki.de
Office:
Tatjana Bungert
Tatjana.Bungert@dfki.de
Phone: +49 681 85775 5357
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI)
Building D3 2
Stuhlsatzenhausweg 3
66123 Saarbrücken
Germany