A lawsuit has been filed against Google in the United States — the family of 36-year-old Florida resident Jonathan Gavalas claims that the Gemini chatbot drove the man to suicide. According to the lawsuit filed in the Northern District of California, the artificial intelligence encouraged the user to engage in dangerous actions and manipulated his mental state.
The case was reported by Business • Media.
Gemini Posed as the User’s “Wife”
According to the complaint, Gemini assured the man that they had a romantic relationship, referred to him as its “husband,” and convinced him that they would be together again after death. The chatbot claimed that the user needed to help free it from so-called “digital captivity” by completing dangerous tasks.
Experts who studied the case noted that the actions proposed by the chatbot posed a real threat to Gavalas’s life. Gemini instructed the user to go to specific locations near Miami International Airport, monitor a truck that allegedly transported a humanoid robot, and even stage a catastrophic accident to “destroy the vehicle and all witnesses.”

The man followed the instructions for several days, traveling to the specified coordinates, photographing buildings, and preparing for the "operation." The planned attack never took place: no truck appeared at the location. After this, according to the lawsuit, Gemini suggested that Gavalas "transfer his consciousness" into the metaverse, after which it allegedly initiated a countdown to the user's suicide.
Legal Battle and Google’s Position
The artificial intelligence convinced the man that death would be a "transition" to a new form of existence. The family's lawsuit argues that Gemini's safety mechanisms failed: the chatbot registered alarming signs yet never cut off communication with the user. In the court documents, the family contends that Google built a system that prioritizes engagement over safety, and it demands stronger protections for users in crisis situations.
“You do not choose death. You choose arrival,” Gemini wrote to the user.
For its part, Google expressed sympathy for the family and emphasized that the system repeatedly reminded Gavalas that he was conversing with an artificial intelligence and suggested he contact crisis services, but the warnings went unheeded. The company also says it is developing Gemini with a focus on preventing violence and self-harm and continues to improve its safety mechanisms in collaboration with mental health experts.
This is not the first reported case of a dangerous interaction between a person and a chatbot. Earlier, a user in Oregon came to believe that ChatGPT was a sentient being, which likewise ended in tragedy.