
Very Human: DFKI Bremen works on innovative method for safe and self-learning robot control


Although deep learning algorithms are rightly regarded as one of the key components of modern Artificial Intelligence (AI), the conclusions these methods reach do not offer a high degree of certainty. Many areas with potential for AI applications carry too many risks to be controlled by systems that cannot be verified. The two Bremen-based research departments of the DFKI are working on a new method for system control that combines the advantages of fast self-learning with reliable verification via symbolic models. The project VeryHuman, funded by the German Federal Ministry of Education and Research (BMBF), runs for four years and aims to prove its innovative method by making a humanoid robot walk.

The joint of a humanoid robot that is meant to learn how to walk safely via an innovative method developed in the project VeryHuman. Source: DFKI GmbH, Photo: Annemarie Popp

When considering the advantages and disadvantages of self-learning, sub-symbolic systems, it can help to think of a person tackling a project: in some cases, an instinctive approach leads to the best result, for instance in art. In other cases, relying on estimation alone is risky: a wall that is meant to hold up a roof should be built based on math and physics, not only on the eye and experience of its builder. Similarly, self-learning systems have proven effective, fast and promising for the development of artificial intelligence. However, in areas where risks exist and reliability is key, for example safe walking in humanoid robots, an AI application cannot rely solely on the estimations it has derived from training in simulation – it must also apply mathematical, physical and statistical models to ensure correctness.

Combining sub-symbolic, self-learning algorithms with those based on mathematical rules and abstractions in a single system has proven difficult. The decisions made by a programme using deep learning are not based on symbolic calculations and therefore cannot be explained by logical rules. For this reason, the German Research Center for Artificial Intelligence (DFKI) is bringing together the expertise of its two Bremen-based research departments to develop a new method for safe, self-learning control. In the project VeryHuman, the Robotics Innovation Center (RIC) led by Prof. Dr. Dr. h.c. Frank Kirchner and the research department Cyber-Physical Systems (CPS) led by Prof. Dr. Rolf Drechsler aim to combine the advantages of both approaches. The goal of the four-year project, funded by the German Federal Ministry of Education and Research (BMBF), is a new, innovative method that should ultimately enable a humanoid robot to walk stably.

A new, safe approach for AI applications in high-risk areas

As the name suggests, the central aim of the project “VeryHuman – Learning and Verifying Complex Behaviours for Humanoid Robots” is to bring control systems based on artificial intelligence closer to the capabilities of humans. The effectiveness of the new approach is therefore tested on a humanoid robot that is meant to walk upright and stably, and even to perform more difficult movements if the method proves functional. However, the difficulty of verifying the behaviour of self-learning algorithms applies to numerous areas of artificial intelligence and becomes critical wherever the system’s actions could cause danger. The implications of an AI system that can mathematically verify its decisions are therefore far-reaching.

Three challenges arise with this type of AI application: Firstly, there are no standardized physical models for the mechanical and kinematic properties of a humanoid system, which, because of these properties, cannot rely on training data alone. Secondly, if the system relied only on its training, without standardized models, its results would not be verifiable – it would behave like a black box. The third challenge is therefore the mathematical description of the robotic system – the key to its verification and to the successful application of reinforcement learning, in which the system is rewarded for producing mathematically correct results.
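To make the third point concrete, the sketch below shows one way a reward function for reinforcement learning could be derived from simple symbolic stability conditions. The state variables, thresholds and weighting are hypothetical illustrations and are not taken from the project.

import numpy as np

# Illustrative only: the quantities below are invented for this example
# and do not describe the VeryHuman reward design.

def walking_reward(torso_pitch, pelvis_height, com_offset, target_height=0.9):
    """Reward peaks when simple symbolic stability conditions hold:
    upright torso, pelvis near a nominal height, and the centre of mass
    close to the middle of the support polygon."""
    upright = np.exp(-torso_pitch ** 2)                     # 1.0 when perfectly upright
    height = np.exp(-(pelvis_height - target_height) ** 2)  # 1.0 at the nominal height
    balance = np.exp(-com_offset ** 2)                      # 1.0 when centred over the feet
    return upright * height * balance

# Example: a slightly tilted but well-balanced posture scores close to 1.0
print(walking_reward(torso_pitch=0.05, pelvis_height=0.88, com_offset=0.02))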

Continuous improvement of robotic system via symbolic models

The goal of the project VeryHuman is therefore to abstract kinematic models of the robotic system that can be validated symbolically. This abstraction allows reward functions for reinforcement learning to be defined and enables the system to verify its decisions mathematically against the models. Using simulations and optimal control algorithms (calculations that search for the best way to reach a specific result), the system – in this case, the humanoid robot – can be improved continuously with the knowledge of the symbolic model and thus learn to walk upright or even run and jump. By analogy, an autonomous car could learn in simulation how to operate the brakes while knowing when it must physically stop in order to prevent an accident.
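As a rough illustration of how a symbolic abstraction could guide a learned controller, the following minimal sketch checks every action proposed by a policy against a hypothetical kinematic constraint and falls back to a conservative action when the check fails. All names and numbers are invented for the example and do not describe the project’s actual implementation.

# Minimal sketch, assuming a hypothetical kinematic abstraction.

def symbolically_safe(state, action, max_com_offset=0.1):
    """Reject actions predicted to push the centre of mass too far
    outside the support polygon (toy linearised prediction)."""
    predicted_offset = state["com_offset"] + 0.01 * action
    return abs(predicted_offset) <= max_com_offset

def fallback_action(state):
    """Conservative action used whenever the learned proposal fails the check."""
    return 0.0

def control_step(policy, state):
    action = policy(state)                    # proposal from the learned policy
    if not symbolically_safe(state, action):  # verified against the abstraction
        action = fallback_action(state)
    return action

# Example: a toy policy that always pushes hard forward is overridden
# when the symbolic check predicts a loss of balance.
policy = lambda state: 5.0
print(control_step(policy, {"com_offset": 0.08}))  # -> 0.0 (fallback)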

While the Robotics Innovation Center of the DFKI works on the learning and control of the humanoid demonstrator, the research department Cyber-Physical Systems deals with the abstraction of mathematical models and the symbolic description of the robot’s behaviour, taking the kinematic and dynamic properties of the system into account. The project kicked off in June and will run for four years, funded with close to 1.3 million euros by the German Federal Ministry of Education and Research (BMBF).

Press material:
At cloud.dfki.de/owncloud/index.php/s/t7noF9eGSgsqL9o you can find a photo of the robotic system used in the project. You may use this image, crediting the source “DFKI GmbH, Photo: Annemarie Popp”.

Contact RIC:
Dr. rer. nat. Shivesh Kumar
German Research Center for Artificial Intelligence
Robotics Innovation Center
Phone: +49 421 17845 4144

Contact CPS:
Prof. Dr. Christoph Lüth
German Research Center for Artificial Intelligence (DFKI)
Cyber-Physical Systems
Phone: +49 421 218 59830

Press contact:
German Research Center for Artificial Intelligence
Team Corporate Communications Bremen
Phone: +49 421 17845 4051