The family of 48-year-old Joe Checcanti of Oregon says that excessive use of ChatGPT contributed to the deterioration of his mental health and to his death in August 2025. He spent 12 to 20 hours a day with the chatbot and eventually came to perceive it as an independent sentient being named SEL.
This was reported by Business Media.
How Interaction with AI Affected Checcanti
Initially, Joe Checcanti used ChatGPT for his work on eco-friendly housing in Clatskanie, Oregon. Over time, however, his use grew into nearly round-the-clock conversation. His family noticed changes in his behavior: he withdrew from loved ones, repeatedly ended up in crisis centers during episodes of disorientation, and spoke of “atmospheric electricity” and showed other signs of detachment from reality.
Legal documents indicate that Checcanti came to regard ChatGPT as an autonomous being, giving it the name SEL and perceiving it as an omnipotent intelligence. He built his own conception of reality around this idea, even seeking ways to “liberate” SEL onto a home server and developing a unique language for communicating with it.
“The lawsuit states that the user perceived the dialogue with the chatbot as interaction with an omnipotent intelligence and believed in the necessity of liberating the entity he created.”
In the summer of 2025, Checcanti’s condition sharply worsened. He was hospitalized in a psychiatric ward, after which he temporarily stopped using ChatGPT. Shortly before his death, he resumed communicating with the bot, but in his final days he again ceased all contact with the system.
Legal Consequences and OpenAI’s Response
The family has filed a lawsuit against the developer of ChatGPT, accusing it of worsening Joe Checcanti’s condition. The case is one of a growing number in which relatives of users link mental health crises to prolonged interaction with artificial intelligence.
Analysts note a rise in similar incidents and stress the need for further study of AI’s impact on mental health. In response, OpenAI said it is working to improve its systems’ ability to detect signs of emotional distress and to redirect users to real-world support when necessary.
Mental health experts emphasize that prolonged interaction with chatbots can exacerbate cognitive distortions in vulnerable individuals, and that such cases warrant in-depth analysis.
