The research department Agents and Simulated Reality (ASR), led by Prof. Dr.-Ing. Philipp Slusallek, focuses on making Artificial Intelligence (AI) more trustworthy for people. The primary concerns are ensuring the functional reliability of AI systems, clarifying their working mechanisms, and the quality and control of their training data. ASR investigates the creation of synthetic training data (Digital Reality) through simulations and generative models.
For real-world (in-vivo) data, a dataspace approach with clear governance and semantic description is used. On the functional side, ASR works on plausibility checks and hybrid systems. Robustness of AI is enhanced by techniques for detecting and adapting to anomalies in the data. For explainability, the department relies on visual representations of neural networks and on inherently more explainable, symbolic models. The goal is to foster trust in AI systems through transparency, reliability, and control.
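As an illustration of the plausibility-check idea, the following sketch (not ASR's actual pipeline; all names are hypothetical) flags inputs whose features deviate strongly from a trusted reference distribution before they reach a model:

```python
# Illustrative sketch of a statistical plausibility check: inputs whose
# features lie far outside the training distribution are rejected.
from statistics import mean, stdev

def fit_reference(samples):
    """Record per-feature mean and standard deviation from trusted data."""
    columns = list(zip(*samples))
    return [(mean(c), stdev(c)) for c in columns]

def is_plausible(x, reference, z_max=3.0):
    """Reject an input if any feature lies more than z_max sigmas out."""
    for value, (mu, sigma) in zip(x, reference):
        if sigma > 0 and abs(value - mu) / sigma > z_max:
            return False
    return True

ref = fit_reference([(1.0, 10.0), (1.2, 11.0), (0.9, 9.5), (1.1, 10.5)])
print(is_plausible((1.05, 10.2), ref))  # typical input
print(is_plausible((1.05, 50.0), ref))  # anomalous second feature
```

Such a check is deliberately simple and model-agnostic; in practice, anomaly detection on learned representations would serve the same gating role.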
Intelligent agents are software modules that can solve complex problems in a situationally optimal way by applying appropriate methods of Artificial Intelligence (AI) independently, reactively, and in a goal-oriented manner. Multiagent systems model, simulate, and optimise complex systems through interactions between such agents that go beyond the capabilities of any individual agent. Semantic technologies enable automated reasoning about Web resources based on machine-understandable descriptions of their semantics in formal standard ontology languages such as OWL2 and RDFS. Intelligent (software) agents can leverage semantic technologies, for example, for high-precision search, composition planning, simulation, and rational negotiation over semantically described data and services on the Internet.
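To make the reasoning aspect concrete, here is a toy illustration (not a full OWL2 reasoner, and the `ex:` vocabulary is invented) of RDFS-style subclass entailment: facts that were never asserted become derivable from machine-understandable class descriptions, which is what enables, e.g., high-precision search.

```python
# Toy RDFS entailment over triples: the rules rdfs11 (subClassOf is
# transitive) and rdfs9 (instances inherit superclasses) are applied
# until a fixed point is reached.
SUBCLASS = "rdfs:subClassOf"
TYPE = "rdf:type"

triples = {
    ("ex:ElectricVehicle", SUBCLASS, "ex:Vehicle"),
    ("ex:Vehicle", SUBCLASS, "ex:TransportResource"),
    ("ex:myCar", TYPE, "ex:ElectricVehicle"),
}

def entail(triples):
    """Saturate the knowledge base under the two RDFS subclass rules."""
    kb = set(triples)
    changed = True
    while changed:
        changed = False
        snapshot = list(kb)
        for s, p, o in snapshot:
            for s2, p2, o2 in snapshot:
                if p == SUBCLASS and p2 == SUBCLASS and o == s2:
                    new = (s, SUBCLASS, o2)   # rdfs11: transitivity
                elif p == TYPE and p2 == SUBCLASS and o == s2:
                    new = (s, TYPE, o2)       # rdfs9: type inheritance
                else:
                    continue
                if new not in kb:
                    kb.add(new)
                    changed = True
    return kb

kb = entail(triples)
print(("ex:myCar", TYPE, "ex:TransportResource") in kb)  # inferred, not asserted
```

A production system would of course use an off-the-shelf triple store and reasoner rather than this fixed-point loop.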
Our research on intelligent agents and semantic technologies focuses on scalable, distributed, and hybrid action planning; model-driven development of multiagent systems; and intelligent coordination of data and web services. The results feed into intelligent information systems and applications in a variety of domains, including virtual (3D) worlds, the 3D Web, Web 2.0, E-Business, Transport and Logistics, Renewable Energy, and E-Health.
The Safe and Secure Systems research group develops methods for verifying and evaluating safety and security properties of IT systems. The development methodology is grounded in scientific research, enabling the application of formally sound tools. The technologies developed include formal modelling techniques, interactive deduction methods, and the management of development artifacts. Hybrid verification extends the methodology to continuous systems with, e.g., spatial or temporal properties, so that safety guarantees can be made about the entire system, including its control software. With its recognized IT Security Evaluation Laboratory, DFKI offers independent assessments of the security of information technology in compliance with the internationally standardised and accepted Common Criteria (CC).
Visual computing and interactive visualization of complex 3D models not only provide an intuitive and insightful presentation; they also extend simulation towards visual reliability (predictive visualization) by accurately computing material properties, light distribution, and model details. With XML3D we develop key components for realizing interactive 3D environments in current Web browsers and thus for the future 3D Internet. "Displays as a Service" (DaaS) virtualizes physical display devices and distributes them to arbitrary devices across the Internet. Highly optimized software platforms for multi- and many-core systems enable not only physically correct real-time visualizations but also dynamic user interaction with complex simulations. The user and their interaction with the system are at the center of our research.
An application example of this research is the simulation and visualization of production processes: with increasing product individualization and shorter product cycles, significant cost-reduction potential lies in optimizing setup and turnaround times. With multiagent systems, individual process steps are captured and simulated in 3D. The formal modeling of plants and their processes as hybrid systems allows guaranteed statements that take into account both the discrete control software and the spatio-temporal behavior of the plants. A visual inspection of the facilities and their operations, based on the three-dimensional representation produced by simulation and verification, enables a rapid assessment of the planning. Interactive and immersive training scenarios on a virtual 3D model can already be run while the plant is being converted.
Such applications will be especially important in the Future Internet. Dual or mixed reality, i.e. applications that establish a direct coupling between a virtual and a real plant and its simulation, will allow direct access to individual machines for maintenance and control even over long distances. These individual applications are embedded in distributed, interactive, virtual environments that simulate entire plants, factories, production facilities, municipalities, or companies and provide intuitive access thanks to the tangible metaphor of a three-dimensional world.
Trusted AI aims to create a new generation of AI systems whose functionality is guaranteed, allowing their use even in critical applications. Developers, users, and regulators can rely on their performance and reliability even in complex socio-technical systems. Trusted AI is characterised by a high degree of robustness, transparency, fairness, and verifiability, while the functionality of existing systems is in no way compromised but actually enhanced.
Some of the current problems related to a lack of trust in AI systems are a direct result of the massive use of black-box methods that depend solely on data. The new AI generation is instead built on hybrid AI systems (also known as neuro-symbolic or neuro-explicit). These hybrids do not rely solely on data-driven approaches but on the full range of AI technologies ("All of AI"), including symbolic AI methods, search, reasoning, and planning. "Trust by Design" is achieved by combining machine learning with symbolic reasoning and the explicit representation of knowledge in hybrid AI systems. Knowledge that is represented in semantic and other explicit models no longer needs to be machine-learned, and such models can also guide the learning process in directions that improve generalisation, robustness, and interpretability. This hybrid approach is popularly called the third wave of AI ("3rd Wave AI").
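The neuro-symbolic combination can be sketched schematically as follows (all names are illustrative inventions, not a DFKI API): a data-driven score is filtered through explicit, symbolic knowledge that can veto implausible predictions.

```python
# Schematic neuro-symbolic sketch: explicit domain rules constrain the
# output of a (here stubbed-out) learned classifier.

RULES = {
    # Explicit knowledge in a toy traffic-sign domain:
    # a stop sign is only admissible if the detected shape is an octagon.
    "stop_sign": lambda features: features["shape"] == "octagon",
}

def learned_model(features):
    """Stand-in for a neural classifier returning per-label scores."""
    return {"stop_sign": 0.9, "speed_limit": 0.1}

def hybrid_predict(features):
    scores = learned_model(features)
    # Symbolic filter: discard labels whose domain constraints are violated.
    admissible = {label: score for label, score in scores.items()
                  if RULES.get(label, lambda f: True)(features)}
    return max(admissible, key=admissible.get)

print(hybrid_predict({"shape": "octagon"}))  # learned top label is admissible
print(hybrid_predict({"shape": "circle"}))   # rule vetoes the top label
```

The design point is that the rule table is inspectable and editable independently of the learned component, which is one source of the transparency and verifiability claimed for hybrid systems.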
Hybrid AI approaches are studied and applied, and a newly developed taxonomy of possible combinations helps to assess their advantages and disadvantages in different areas of application. Current research focuses on safety engineering, various aspects of validation and certification of AI systems, and decision making in teams of humans and AI agents (Human Empowerment).
The IT Security Evaluation Facility of DFKI offers independent assessments of the security of information technology. All evaluations are based on the internationally recognized Common Criteria (CC) for Information Technology Security Evaluation. The objective evaluation of the security quality of an IT product establishes confidence in its stated security properties.
For its evaluation activities, the IT Security Evaluation Facility operates a quality management system which complies with the requirements of the International Standard DIN EN ISO/IEC 17025 and therefore also meets the principles of ISO 9001. Every evaluation is performed impartially, independently, conscientiously, free from any undue pressure, and on a purely professional basis.
Based on the confirmation and recognition of its competence, the IT Security Evaluation Facility has held accreditation from the German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, BSI) for more than ten years and is licensed to perform evaluations according to the aforementioned criteria.
A successfully completed evaluation is the prerequisite for the issuance of an internationally accepted CC certificate on the part of BSI for the product under examination.
The evaluation technical reports of the IT Security Evaluation Facility are also accepted by BSI and other approved confirmation bodies – datenschutz cert GmbH and T-Systems GEI GmbH – as the basis for the confirmation of products for qualified electronic signatures according to the requirements of the German Signature Act.
Services offered by the IT Security Evaluation Facility:
Clear and comprehensive information regarding all aspects of Common Criteria (CC) as well as the evaluation, certification and confirmation schemes.
Head:
Prof. Dr.-Ing. Philipp Slusallek
Philipp.Slusallek@dfki.de
Phone: +49 681 85775 5377
Deputy Head:
Dr.-Ing. Christian Müller
Christian.Mueller@dfki.de
Phone: +49 681 85775 4823
Office:
Léa Yvonne Basters
Phone: +49 681 85775 5276
Denise Cucchiara
Phone: +49 681 85775 5315
asr-office@dfki.de
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI)
Saarland Informatics Campus D 3_2
Stuhlsatzenhausweg 3
66123 Saarbruecken
Germany
Dr. Oliver Keller
oliver.keller@dfki.de
Phone: +49 681 85775 327
Roland Vogt
roland.vogt@dfki.de
Phone: +49 681 85775 4131