The increasing performance of AI systems and especially their actual use in the everyday lives of many people has made the ethical consideration of AI systems a socially relevant topic, which is now also perceived and discussed by the general public.
DFKI systematically includes ethical issues in its research activities. The extent to which ethical issues can and must be taken into account depends very much on the respective project's focus. In its projects, DFKI also regularly collaborates with external experts in the relevant field of ethics, complemented where needed by, for example, legal or psychological expertise.
With the guiding principle “Human centric AI”, the DFKI would like to emphasize, among other things, that research here should always serve the good of people. This also includes considering the use of AI systems from an ethical point of view.
At DFKI, there is an appointed ethics team that can serve as a first point of contact for employees in all ethics-related questions, whether related to a specific project or generally concerning the work at DFKI. The ethics team has developed a handout that should serve as a first orientation. It can be found here.
The complexity of AI technologies is continuously reaching unprecedented levels, while their integration into countless societal sectors is constantly expanding.
In this context, at the European level, ethics is a vital component in ensuring that AI systems are developed in a trustworthy manner; it builds trust in AI technology and its applications and makes them more likely to be widely adopted.
I strongly believe that by placing ethics at the forefront of AI development and deployment, we can ensure not only that AI systems are developed within a clear ethical framework, but also that the overall approach to AI technology becomes effective and responsible at every level.
Artificial Intelligence is experiencing an impressive uptake, creating new problem-solving capabilities and becoming increasingly integrated into all societal sectors. For me, the fact that the current expansion of AI is accompanied by constant calls for applied ethics shows that AI ethics is not only a requirement imposed by various authorities, but a vital component of the responsible development of AI-driven technologies and an effective instrument for generating added value for AI systems.
People will not accept AI as a form of support as long as it is unclear whether it could pose a danger. Therefore, ethics must be "programmed" into the development of AI systems.
What is possible and feasible, what actually happens, or what consequences something can have are questions that are as interesting as they are relevant. But answers to them do not tell us what we should do. Instead, we have to ask what ought to be and what may be. Ethics, as the science of morality, provides us with answers here, drawing on what is at any rate a weighty and primary source of the normative. That is why ethics is important, and why it is important to me.
AI systems interact with the world and are increasingly perceived by humans as a kind of actor. In order to design and evaluate such systems well, an interdisciplinary discourse is needed about standards, expectations, role models, fears, and much more. This discourse benefits from ethical reflection and, in part, only becomes possible through it.
My research interests are focused on the development of neuro-explicit AI for autonomous driving, which involves creating intelligent systems that can perceive, reason, and make decisions in complex driving scenarios. This is achieved by combining the power of symbolic reasoning and other forms of explicit knowledge with neural networks, which enables these systems to learn from data while also reasoning about the driving situation. By developing these algorithms, my team and I aim to improve the safety and efficiency of autonomous driving, ultimately making it a reality for everyday use. My research involves exploring the latest advancements in machine learning and AI, as well as developing novel algorithms that can accurately perceive the environment, understand traffic rules, and make decisions based on this information.
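To illustrate the neuro-explicit idea in the simplest possible terms, the following sketch combines a stand-in for a learned proposal component with an explicit, rule-based layer that can veto or adjust its output. All names, rules, and values here are hypothetical and greatly simplified; they are not taken from any actual DFKI system.

```python
# Minimal sketch: a learned policy proposes an action, explicit traffic and
# safety rules can override it. Purely illustrative, not a real driving stack.
from dataclasses import dataclass


@dataclass
class Scene:
    ego_speed_kmh: float      # current speed of the ego vehicle
    speed_limit_kmh: float    # recognised speed limit for this road segment
    obstacle_ahead: bool      # whether perception reports an obstacle in the lane


def neural_proposal(scene: Scene) -> str:
    """Stand-in for a learned policy; in a real system this would be a
    neural network trained on driving data."""
    return "accelerate" if scene.ego_speed_kmh < scene.speed_limit_kmh else "keep_speed"


def apply_explicit_rules(scene: Scene, proposal: str) -> str:
    """Symbolic layer: explicit rules that override the learned proposal
    whenever it would violate them."""
    if scene.obstacle_ahead:
        return "brake"                       # safety rule takes precedence
    if proposal == "accelerate" and scene.ego_speed_kmh >= scene.speed_limit_kmh:
        return "keep_speed"                  # never exceed the speed limit
    return proposal


scene = Scene(ego_speed_kmh=48.0, speed_limit_kmh=50.0, obstacle_ahead=False)
print(apply_explicit_rules(scene, neural_proposal(scene)))  # -> "accelerate"
```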
While ethical concerns related to dilemmas and biases are critical considerations in the development and deployment of autonomous driving systems, I am interested in exploring how ethics can be integrated into the entire lifecycle of these systems. This involves going beyond the traditional ethical concerns of dilemmas and biases to consider the ethical implications of data collection, algorithmic decision-making, and deployment. By examining these broader ethical implications, I aim to develop and promote ethical frameworks that ensure the responsible and equitable use of autonomous driving technology.
AI is not just ordinary software that is programmed and then simply runs. With AI, there is a kind of "intermediate layer": the programmer writes program code which, in a sense, generates program code in turn, and that is what is then executed. This "intermediate layer" creates ethical challenges that do not exist with conventional software.
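A small sketch of this "intermediate layer", using a toy dataset and a standard library model chosen purely for illustration: the programmer writes the training code, but the decision logic itself is produced by the learning algorithm from data rather than being spelled out by hand.

```python
# Hand-written part: choosing the data, the model family, and the training step.
from sklearn.linear_model import LogisticRegression

X = [[0.0], [1.0], [2.0], [3.0]]   # toy feature, e.g. distance to an obstacle
y = [1, 1, 0, 0]                   # toy label: 1 = brake, 0 = continue

model = LogisticRegression().fit(X, y)

# "Generated" part: the learned parameters now make the decision; the mapping
# from input to output was never written explicitly by the programmer.
print(model.predict([[0.5], [2.5]]))   # e.g. [1 0]
```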
Artificial Intelligence will move the world forward. In order to realise the potential of AI in the many areas that affect our society, such as health, changes in the labour market, value creation in the economy, mobility, the energy transition, and climate change, we need transparent and trustworthy AI. The development of AI systems is also a great responsibility. We need a design approach that incorporates ethical considerations and principles throughout the entire cycle from idea to product. But it is also clear that AI should support us humans, not replace us. As with all major technological leaps, AI applications harbour risks as well as opportunities. In order to make the best possible use of the opportunities offered by AI, we need legally binding rules for AI applications in addition to ethical principles.