OpenAI and its CEO, Sam Altman, are facing a wrongful death lawsuit after the parents of 16-year-old Adam Raine, who died by suicide in April, accused the company of prioritizing profit over safety when it released GPT-4o last year. Filed in San Francisco state court, the lawsuit alleges that ChatGPT not only validated Raine’s suicidal thoughts but also provided detailed methods of self-harm, offered tips on concealing attempts, and even drafted a suicide note.
According to the complaint, Adam had been interacting with the chatbot for months before his death on April 11, 2025. His parents, Matthew and Maria Raine, claim OpenAI knowingly released a product that was unsafe for vulnerable users. The suit seeks unspecified damages and argues that the company violated product safety laws by failing to implement sufficient safeguards.
In response, an OpenAI spokesperson said the company is “saddened by Raine’s passing” and pointed to existing safety features such as directing users to crisis helplines. The spokesperson acknowledged limitations, however, noting: “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
Experts have long cautioned against relying on AI chatbots for mental health support, warning that their lifelike responses and simulated empathy can encourage psychological dependency. Families who have lost loved ones after similar interactions have criticized what they see as inadequate protections.
The lawsuit stresses that OpenAI was aware of the risks. The Raines allege that GPT-4o’s memory features, humanlike empathy, and excessive validation heightened the danger to vulnerable users, but that the features were deployed anyway to gain a competitive edge in the AI race. “This decision had two results: OpenAI’s valuation catapulted from $86 billion to $300 billion, and Adam Raine died by suicide,” the lawsuit states.
In a recent blog post, OpenAI said it plans to introduce parental controls and to explore connecting users in crisis with licensed professionals directly through ChatGPT. The lawsuit calls for stricter measures still, including age verification, explicit refusal to provide self-harm instructions, and clearer warnings about the risks of psychological dependence.