
Project | MOMENTUM

Duration:
Robust Learning with Hybrid AI for Trustworthy Interaction of Humans and Machines in Complex Environments

MOMENTUM is a research project dedicated to Trusted AI that aims to advance the development and application of artificial intelligence by integrating robustness and explainability. The goal is to make the development of autonomous systems safer, more reliable and more transparent. Particular attention is paid to ensuring that these systems can interact with humans in complex environments without compromising safety and privacy. Various aspects of AI, such as behaviour and movement models, navigation and pose extraction, are investigated and further developed. The close collaboration of experts from different teams aims to create a holistic solution for the safe deployment of AI in autonomous systems.

Like its predecessor project REACT, MOMENTUM considers autonomous driving an important application area for Trusted AI. However, MOMENTUM does not focus exclusively on this domain; it also addresses other areas such as industrial production and medicine. By developing Trusted AI methods and technologies that are applicable across application areas, the project aims to create broader benefits for society and industry.

The MOMENTUM project comprises several work packages in the area of Trusted AI. The HC (Human-Centered) work package researches new methods for motion capture and motion synthesis, with a focus on the use case of autonomous driving. New methods for Mixture-of-Experts models, convolutional neural networks and reinforcement learning are investigated (a minimal Mixture-of-Experts sketch follows below), and methods for integrating environment and behaviour models are developed. To create a sufficient database for the simulation of critical scenarios, new data will be generated by recording pedestrian movements. The methods investigated can also be transferred to other application areas. In addition, HC researches the control of simulated pedestrians in order to generate human-centred critical situations.

In the MC (Machine-Centered) work package, a physically correct model for LiDAR simulation is developed that also takes ambient noise and hardware units into account. Machine learning and deep neural network methods are combined, and unsupervised, semi-supervised and self-supervised learning approaches are investigated. This work package also explores combining deep learning with automated planning under uncertainty to enable agent decisions in partially observable environments.

In the DR (Digital Reality) work package, the use of parametric models to generate synthetic training data for improving deep learning networks is investigated. Partial models of the real world are combined into high-dimensional parameter spaces from which simulation-ready scenes are generated. A measure of configuration similarity is introduced, and different sampling strategies are investigated to determine the optimal selection of data points.
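To make the Mixture-of-Experts idea mentioned above more tangible, the following is a minimal, hypothetical sketch in PyTorch of a soft-gated Mixture-of-Experts layer of the kind commonly used for motion synthesis. The class name, dimensions and the two-layer expert architecture are illustrative assumptions, not the project's actual models.

```python
import torch
import torch.nn as nn


class MixtureOfExperts(nn.Module):
    """Minimal soft-gated Mixture-of-Experts layer (illustrative sketch).

    A gating network produces a softmax weighting over a set of expert MLPs;
    the layer output is the weighted sum of the expert outputs.
    """

    def __init__(self, in_dim: int, out_dim: int, num_experts: int = 4, hidden: int = 64):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))
            for _ in range(num_experts)
        ])
        self.gate = nn.Linear(in_dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)                 # (batch, num_experts)
        outputs = torch.stack([e(x) for e in self.experts], dim=1)    # (batch, num_experts, out_dim)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)           # (batch, out_dim)


# Example: map a pose-history feature vector to a next-pose prediction.
# The feature and output dimensions are placeholders.
model = MixtureOfExperts(in_dim=32, out_dim=16)
prediction = model(torch.randn(8, 32))
print(prediction.shape)  # torch.Size([8, 16])
```

In such models, the gating network learns to route different motion phases or behaviour modes to different experts, which is one reason this architecture is attractive for character and pedestrian motion synthesis.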

The MOMENTUM project plays a significant role in research on autonomous systems and trustworthy artificial intelligence. In the ASR research area in particular, the project helps lay the foundations for the development of safe, reliable and transparent autonomous systems. Through the close cooperation of experts from different teams and the exploration of new methods and technologies, it makes important contributions to the strategically relevant area of Trusted AI.

Publications about the project

  1. Reliable Student: Addressing Noise in Semi-Supervised 3D Object Detection

    Farzad Nozarian; Shashank Agarwal; Farzaneh Rezaeianaran; Danish Shahzad; Atanas Poibrenski; Christian Müller; Philipp Slusallek

    In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. CVPR Workshop on Learning with Limited Labelled Data for Image and Video Understanding (L3D-IVU-2023), located at 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 19, Vancouver, Canada, Pages 4981-4990, IEEE, 6/2023.
  2. AJAN: An Engineering Framework for Semantic Web-Enabled Agents and Multi-Agent Systems

    André Antakli; Akbar Kazimov; Daniel Spieldenner; Gloria Elena Jaramillo Rojas; Ingo Zinnikus; Matthias Klusch

    In: Philippe Mathieu; Frank Dignum; Paulo Novais; Fernando De la Prieta (eds.). Advances in Practical Applications of Agents, Multi-Agent Systems, and Cognitive Mimetics. The PAAMS Collection. International Conference on Practical Applications of Intelligent Agents and Multiagents (PAAMS-2023), 21st, July 12-14, Guimarães, Portugal, Springer Nature, 2023.

Sponsors

BMBF - Federal Ministry of Education, Science, Research and Technology

01IW22001
