AI Developers Lack Clear AGI Safety Strategies

Leading developers of artificial intelligence systems, including OpenAI, Google DeepMind, Anthropic, Meta, and xAI, as well as the Chinese companies Zhipu AI and DeepSeek, are unprepared for the potential existential risks posed by artificial general intelligence (AGI), according to an assessment by the Future of Life Institute (FLI). None of the companies received a grade higher than D for existential safety planning, and in some areas the results were even lower.

This is reported by Business • Media

Safety Ratings and Major Company Shortcomings

FLI assessed the seven largest developers of large language models (LLMs) across six key areas, from current harms to preparedness for emergencies involving AI going out of control. The highest overall score in the index went to the startup Anthropic, which received a C+, while OpenAI and Google DeepMind received a C and a C- respectively. The other companies showed even weaker results.

“None of the organizations have anything resembling a coherent, implemented plan for controlling AI,” the report states.

Experts compare the situation to launching a nuclear power plant without any plan for preventing a catastrophe. OpenAI and DeepMind received particularly low scores for risk management.

AI Development Pace and Industry Response

Max Tegmark, FLI co-founder and MIT professor, said that leading companies are building systems whose dangers are comparable to nuclear risks, yet disclose not even basic plans for preventing possible disasters. He emphasized that the pace of AI development has accelerated sharply: where AGI was once expected decades from now, it is now considered a matter of a few years.

The FLI report also notes that progress in the field has only accelerated since the February AI summit. New models such as Grok 4, Gemini 2.5, and the video generator Veo 3 already demonstrate significant gains in capability.

In parallel with the FLI report, the organization SaferAI published its own analysis, reaching similar conclusions about the “unacceptably weak” risk management practices at the industry's leading companies. Experts stress the need for a swift revision of approaches to AGI development.

In response to the criticism, a representative of Google DeepMind noted that the FLI report does not account for the full range of safety measures implemented within the company. The other firms had not commented by the time of publication.

It was previously reported that Meta has recruited several leading scientists from OpenAI to collaborate on the development of superintelligent systems.

AI Safety Index. Data: FLI.