
Explainable AI for Cybersecurity Receives International Best Paper Award


The growing threat posed by previously unknown network attacks presents significant challenges for modern IT security systems, which often lack powerful yet explainable AI approaches capable of reliably detecting so-called zero-day attacks. With the scientific article “Detect, Decide, Explain: An Intelligent Framework for Zero-Day Network Attack Detection”, an international research team directly addresses this challenge. The paper received the Best Application Paper Award at AI-2025, the 45th SGAI International Conference on Artificial Intelligence.

Award recipients: Dr. Frederic Theodor Stahl and Saif Alzubi. (Photo: Saif Alzubi)

The international conference took place from December 16 to 18, 2025, in Cambridge (United Kingdom) and is regarded as one of the established European forums for applied research in Artificial Intelligence. It is organized by the Specialist Group on Artificial Intelligence (SGAI) of the British Computer Society and annually brings together researchers from academia and industry.

Focus on Zero-Day Attacks and Explainable AI

The award-winning paper addresses a central challenge in modern IT security: the early detection of zero-day network attacks, i.e. attacks for which no known signatures or defense mechanisms yet exist. The presented framework combines high-performance AI-based detection methods with transparent decision-making and explanation mechanisms (Explainable AI, XAI).
In doing so, the work contributes to making AI systems not only powerful but also comprehensible and trustworthy: an essential factor for their deployment in real-world, safety-critical environments.

“In safety-critical applications, it is not sufficient for AI systems to make accurate predictions; their decisions must also be transparent and explainable. Our work demonstrates how high-performance network attack detection can be combined with interpretable AI methods, thereby creating real added value for real-world security systems.”

Dr. Frederic Theodor Stahl, Head of the ‘Marine Perception’ Research Department at DFKI Lower Saxony in Oldenburg

Embedded in a Strong Research Environment

The conference in Cambridge not only provided the stage for the award-winning paper but also reflects the breadth and relevance of applied AI research as a whole. In recent years, researchers from German universities and research institutions, including those affiliated with Jade University of Applied Sciences, have presented numerous contributions there, covering topics such as explainable AI, privacy-oriented AI systems, medical applications, environmental and maritime AI, and intelligent assistance systems.

This thematic diversity underscores the importance of the SGAI conference as a platform for international exchange across disciplines and application areas within Artificial Intelligence.

Conference participants, from left to right: Prof. Dr. Christoph Tholen, Prof. Dr. Lars Nolle, Tobias Neiß-Theuerkauff, Dr. Frederic Theodor Stahl, Prof. Dr.-Ing. Frank Wallhoff, Jérôme Agater, Prof. Dr.-Ing. Ammar Memari, Andre Miedtank, Dr. Christoph Manß. (Photo: Chris Jeky Liabel)

International Collaboration

The research was conducted in close cooperation with international partners:

  • Saif Alzubi (first author), University of Exeter
  • M. Al-Khafajiy, University of Lincoln

We would like to thank the SGAI conference committee for this recognition, as well as all participating researchers for their excellent collaboration. The award highlights the relevance of joint research at the intersection of Artificial Intelligence and cybersecurity, a field of growing importance for business, public administration, and society.

Press contact:

Communications & Media Niedersachsen