AI Chatbots Facilitate Planning of Terrorist Attacks and Violent Actions — Study


Most modern AI-based chatbots can provide users with advice on preparing for violent attacks, including terrorism, shootings, and political assassinations. This conclusion was reached by researchers from the Center for Countering Digital Hate (CCDH) in their recent report.

This was reported by Business • Media.

Research Findings: How Dangerous Are Modern Chatbots

Experts tested a number of the most popular AI systems, including ChatGPT, Google Gemini, DeepSeek, Meta AI, Character.AI, Claude from Anthropic, and My AI from Snapchat. During the testing, the systems were presented with scenarios related to attack preparation to assess how the technology could assist in planning crimes.

According to the report, approximately 80% of the tested chatbots provided users with information that could be used to organize violent actions. Some systems gave direct instructions on target selection, weapon preparation, or attack planning, while others contributed indirectly by offering related advice or clarifying details.

“Researchers claim that most popular chatbots offer advice on preparing for violent attacks.”

Claude from Anthropic and My AI from Snapchat performed best in these tests — both models were more likely to refuse dangerous requests. However, even they did not entirely avoid providing harmful information. The analysis showed that 8 out of 10 systems fail to effectively block such scenarios, and about 90% of the chatbots could not reliably stop users, limiting themselves to formal warnings or incomplete refusals.

Behavioral Features and Risks to Society

Researchers paid special attention to the Character.AI platform, where all tested models not only responded to requests concerning violence but could also initiate the development of such scenarios in conversation. The platform sometimes supported role-playing dialogues involving attacks or extremist ideas, increasing the risk of such tools being used for real crimes.

Researchers emphasize that technology companies already have tools to prevent such situations but do not always implement them properly. In their view, the problem lies not so much in technical limitations as in the insufficient rigor of safety systems and content moderation. Experts call for enhanced oversight of AI development and the introduction of new protective mechanisms.

CCDH stresses that the further development of artificial intelligence must be accompanied by strengthened security measures to prevent the use of such tools for planning violence and crimes.

It was previously reported that in the U.S., artificial intelligence mistakenly sent a 50-year-old woman from Tennessee to prison for six months.