
Workshop: Explainable and Interpretable Artificial Intelligence/Machine Learning


November 16 – 17, 2023, Kaiserslautern.

Image: © everythingpossible – stock.adobe.com

Kaiserslautern is home to a unique concentration of expertise in mathematics and computer science, which is applied in numerous research projects in areas such as Machine Learning, Artificial Intelligence, and Deep Learning.

Are you interested in the topic of »Explainable and Interpretable AI/ML«, or are you already working in the field and looking for an opportunity to exchange ideas and discuss? Then this workshop is just right for you!
 
The department »Financial Mathematics« of Fraunhofer ITWM, DFKI, and the department »Data Science« of Fraunhofer IESE offer a workshop on »Explainable and Interpretable AI/ML«. Here you have the opportunity to present open questions and initial research results from the fields of explainable and interpretable AI/ML.

This is a workshop within the High Performance Center Simulation and Software Based Innovation.

Program

The complete program will follow soon.

Registration and Call For Presentations

Please use this form to register for the workshop.
We look forward to numerous participants as well as diverse submissions of presentations and scientific posters.
The deadline for submitting program items (presentation, poster) is September 30, 2023!

Location

The workshop will take place in Kaiserslautern. More information will follow as soon as the detailed program is available.

Date

November 16 – 17, 2023

Language

English

Planned Main Topics in the Workshop Program

The topic of »Explainable and Interpretable AI/ML« is currently of great interest, as Artificial Intelligence and Machine Learning are increasingly being used in a wide range of fields. However, the decisions made by these systems are often difficult to understand, which raises concerns about the transparency and accountability of AI/ML.

Hence, continued research on »Explainable and Interpretable AI/ML« is of great significance.

Various techniques and methods are used to explain and interpret the results of AI/ML systems; a minimal sketch of one such technique follows the topic list below. This workshop is a platform to explore ongoing developments and best practices in this field and to discuss how we can ensure that AI/ML systems are used transparently and responsibly.

  • Techniques for Interpreting Machine Learning Models
  • Explainable AI/ML for Decision Making
  • Human-Centered Design for Explainable AI/ML
  • Evaluating the Explainability of AI/ML Models
  • Case Studies in Explainable and Interpretable AI/ML
  • Ethical Considerations in Explainable and Interpretable AI/ML
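
As a concrete illustration of the first topic, the sketch below computes permutation feature importance with scikit-learn, a simple model-agnostic interpretation technique. The dataset and model here are placeholders chosen for the example, not workshop material.

```python
# Minimal sketch: permutation feature importance with scikit-learn.
# Dataset and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

Because the technique only permutes inputs and re-scores the model, it works for any trained estimator, which is why it is a common starting point for model interpretation.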

Explainable and Interpretable AI/ML in Applications  

Explainable AI (XAI) plays a crucial role in the field of medicine, offering transparent and interpretable insights into AI-driven diagnoses and treatment recommendations. As AI models become more sophisticated, they can identify patterns and correlations in medical data that may not be immediately apparent to human experts. However, the »black box« nature of some AI algorithms raises concerns about trust and accountability in healthcare decision-making. By employing XAI techniques, medical practitioners can understand the reasons behind AI predictions, gaining valuable insights into patient outcomes and treatment plans. This transparency not only enhances the accuracy and reliability of AI-assisted medical decisions but also enables doctors to make more informed choices and provide patients with clearer explanations regarding their health conditions.

In the agricultural domain, XAI emerges as a critical tool for optimizing farming practices and ensuring sustainable food production. As agriculture increasingly integrates AI technologies to analyze complex datasets, the ability to interpret AI models becomes vital for effective decision-making. By leveraging XAI, farmers and agronomists can gain transparency into crop yield predictions, pest and disease outbreaks, and optimal resource allocation. Understanding the underlying factors influencing these AI-driven insights empowers farmers to implement targeted interventions and precision agriculture techniques. Moreover, explainable AI enhances the communication between AI systems and farmers, fostering trust and encouraging widespread adoption of AI-driven solutions in agriculture. As a result, XAI facilitates smarter farming practices, leading to increased productivity, reduced environmental impact, and more resilient food systems.

In the course of the digitalization of accounting processes, machine learning techniques open up new possibilities, but algorithms for decision support on financial and accounting data must meet exceedingly high ethical and regulatory requirements with respect to transparency and interpretability. Anomaly detection algorithms are used to check billing transactions efficiently and to support accounting audits by identifying fraud and searching for data errors. In practical applications, these anomalies are often not fully known in advance; instead, detection is based on learning the underlying patterns in the data. Interpretability techniques help to understand these patterns and the decision boundaries of the algorithm, to identify actual anomalies more efficiently, and to help decision makers handle anomalies appropriately and communicate with the AI system. In this way, XAI enables a collaborative learning process between decision makers and the AI system and strengthens trust in the results.
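
To make this concrete, here is a minimal, hypothetical sketch (the field names and data are invented for illustration and do not represent any workshop contribution): an Isolation Forest flags unusual transactions, and a simple per-feature deviation score gives a first, rough indication of why each record was flagged. Practical applications would typically combine such detectors with richer explanation techniques.

```python
# Minimal sketch: anomaly detection on synthetic accounting records with
# an Isolation Forest, plus a rough per-feature explanation of each flag.
# Field names and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
features = ["amount", "days_to_payment", "line_items"]
X = rng.normal(loc=[500.0, 30.0, 5.0], scale=[100.0, 5.0, 2.0],
               size=(1000, 3))
X[-1] = [2500.0, 2.0, 40.0]  # one injected, clearly unusual transaction

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)  # -1 marks a suspected anomaly

# For each flagged record, report the features that deviate most from
# the bulk of the data as a first interpretability aid.
mean, std = X.mean(axis=0), X.std(axis=0)
for i in np.where(flags == -1)[0]:
    z = (X[i] - mean) / std
    top = np.argsort(-np.abs(z))[:2]
    print(f"record {i}: "
          + ", ".join(f"{features[j]} (z={z[j]:+.1f})" for j in top))
```

Such a deviation report is only a heuristic, but it illustrates the collaborative loop described above: the detector surfaces candidates, and the explanation helps an auditor decide whether a flag is a genuine anomaly or a data error.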

Further Information

Contact

Dr. Sebastian Palacio
Head of the Multimedia Analysis and Data Mining Group (MADM)
Deutsches Forschungszentrum für Künstliche Intelligenz DFKI
Trippstadter Straße 122
67663 Kaiserslautern
Phone: +49 631 20575 1320
Email: Sebastian.Palacio@dfki.de

Dr. Stefanie Schwaar
Head of Junior Research Group
Fraunhofer-Institut für Techno- und Wirtschaftsmathematik ITWM
Fraunhofer-Platz 1
67663 Kaiserslautern
Phone: +49 631 31600 4967
Email: stefanie.schwaar@itwm.fraunhofer.de

Dr. Julien Siebert
Fraunhofer-Institut für Experimentelles Software Engineering IESE
Fraunhofer-Platz 1
67663 Kaiserslautern
Phone: +49 631 31600 4571
Email: julien.siebert@iese.fraunhofer.de

Sabrina Klar
Departmental Office / Team Assistant of the Department »Financial Mathematics«
Fraunhofer-Institut für Techno- und Wirtschaftsmathematik ITWM
Fraunhofer-Platz 1
67663 Kaiserslautern
Phone: +49 631 31600 4978
Email: sabrina.klar@itwm.fraunhofer.de