2026 International AI Safety Report published

In the annual International AI Safety Report, a panel of international scientists led by Yoshua Bengio highlights the current state of development of general-purpose AI, the associated risks, and methods for safe AI. The report, published today, shows that the capabilities of AI have continued to increase. AI agents are becoming more autonomous. Companies and governments are increasingly concerned with AI risk management. Prof. Antonio Krüger, CEO of the German Research Center for Artificial Intelligence (DFKI), is the German representative on the report's Expert Advisory Panel.

The AI Safety Report focuses on generative AI systems that can perform a wide range of tasks, known as general-purpose AI. The improved performance of AI compared to last year is due, among other things, to new techniques that come into play after the initial training of the models. At inference time, the models can draw on additional computing power, and AI outputs are broken down into individual response steps, a process known as "reasoning." Today's systems deliver more accurate results, particularly in mathematics and software development.
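
To make the idea of inference-time "reasoning" concrete, here is a minimal sketch of one common pattern, self-consistency sampling: the model is prompted to work step by step, several solution traces are generated, and the most frequent final answer is returned. The function `query_model` and the prompt format are hypothetical stand-ins for any text-generation API; they are not taken from the report.

```python
# Minimal sketch of inference-time "reasoning" via self-consistency sampling.
# `query_model` is a hypothetical stand-in for a real text-generation API.

def query_model(prompt: str) -> str:
    """Hypothetical model call; replace with a real text-generation API."""
    raise NotImplementedError

def answer_with_reasoning(question: str, n_samples: int = 5) -> str:
    """Spend extra compute at use time: sample several step-by-step solution
    traces and return the most frequent final answer."""
    finals: list[str] = []
    for _ in range(n_samples):  # more samples = more inference-time compute
        trace = query_model(
            "Solve the problem step by step, then put the final answer "
            f"on a line starting with 'ANSWER:'.\n\n{question}"
        )
        for line in trace.splitlines():  # keep only the final answer line
            if line.startswith("ANSWER:"):
                finals.append(line.removeprefix("ANSWER:").strip())
    # majority vote across the sampled reasoning traces
    return max(set(finals), key=finals.count) if finals else ""
```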

What AI is capable of brings advantages but also potential dangers for society. Antonio Krüger: "AI safety is becoming increasingly important. Now we need to institutionalise on a national level what the AI Safety Report is already indicating. Other major AI nations are one step ahead of us in this regard. We need a German AI Safety Institute that keeps an eye on the dynamic development of AI and its risks and continuously advises the federal government."

© DFKI, Oliver Dietze
Prof. Antonio Krüger (CEO, DFKI) contributed to the creation of the International AI Safety Report 2026 as part of the Expert Advisory Panel.

One current focus of AI development is AI agents: AI systems that are given access to external tools, such as web browsers, and perform tasks independently in the real world. Because humans have less control over the actions of AI agents, a lack of reliability can become a major problem, according to the experts. Despite this progress and the greater availability of agents on the market, the systems still fail at highly specialized tasks and at tasks involving many steps.
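
The basic mechanism behind such agents can be sketched as a simple loop: the model repeatedly chooses either an external tool or a final answer, and the surrounding program executes its choice. In the illustrative sketch below, `query_model`, `web_search`, and the action format are assumptions for demonstration purposes only; the hard step budget also hints at why long multi-step tasks remain a common failure mode.

```python
# Minimal sketch of an AI-agent loop with tool access. The model call, the
# search tool, and the action format are hypothetical, not any vendor's API.

import json

def query_model(messages: list[dict]) -> dict:
    """Hypothetical model call returning either
    {"tool": name, "args": {...}} or {"final": text}."""
    raise NotImplementedError

def web_search(query: str) -> str:
    """Hypothetical browser/search tool."""
    raise NotImplementedError

TOOLS = {"web_search": web_search}

def run_agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # step budget: one simple control over autonomy
        action = query_model(messages)
        if "final" in action:  # the model declares the task finished
            return action["final"]
        tool = TOOLS.get(action.get("tool", ""))
        if tool is None:
            messages.append({"role": "system", "content": "unknown tool"})
            continue
        result = tool(**action["args"])  # the agent acts on the world here
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "stopped: step budget exhausted without a final answer"
```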

In addition to AI malfunctions, the report lists the misuse of AI among the risks. Deepfakes have become more convincing and harder to identify, and incidents in which AI-generated content is used for criminal purposes are reported more frequently. AI systems are also increasingly being used for cyberattacks, although these attacks are not yet fully autonomous. Because AI systems now provide expert-level information in some areas, there are legitimate concerns that this could be exploited for the development of biological and chemical weapons.

Systemic risks are also back on the agenda. The report points to evidence that AI systems can erode human autonomy. For example, doctors who had been working with AI assistance became less attentive in performing their duties and were subsequently less able to detect tumors.

The report notes that it remains difficult for policymakers, organizations, and developers to address AI safety. Safety measures must be multi-layered: "defence-in-depth" approaches, which combine different measures so that the weaknesses of one are compensated by another, have become more widespread since the last report. Twice as many companies as last year developed "Frontier AI Safety Frameworks", voluntary plans to minimize the risks of their AI. The technical precautions that AI developers take throughout the development cycle have become more sophisticated, but they remain vulnerable. Governments have also launched new initiatives to standardize AI risk management.
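
The defence-in-depth idea can be illustrated with a short sketch: several independent, imperfect safety layers are chained so that one layer can catch what another misses. The filter heuristics below are illustrative stubs invented for this example, not any developer's actual safeguards.

```python
# Minimal sketch of "defence-in-depth": chained, independent safety layers.
# All checks are illustrative stubs, not real safeguards.

from typing import Callable

def input_filter(prompt: str) -> bool:
    """Layer 1: refuse clearly disallowed requests before generation."""
    blocked = ("synthesize a pathogen", "build a chemical weapon")
    return not any(phrase in prompt.lower() for phrase in blocked)

def output_classifier(text: str) -> bool:
    """Layer 2: scan the generated text with a separate (stub) safety check."""
    return "step-by-step synthesis route" not in text.lower()

def safe_generate(prompt: str, generate: Callable[[str], str]) -> str:
    """Each layer is weak on its own; combined, they lower the miss rate."""
    if not input_filter(prompt):
        return "[refused by input filter]"
    text = generate(prompt)
    if not output_classifier(text):
        return "[withheld by output classifier]"
    return text
```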

"What we don't have yet is an airbag for AI. A reliable protection mechanism that protects us before major damage occurs. I am sure that research into trustworthy AI will continue to provide important insights into how we can achieve safe AI in the future. But this will only happen if the EU's much-vaunted AI sovereignty does not remain lip service but is also backed up by relevant investments in AI infrastructure and AI research," explains Krüger.