OpenAI has announced plans to roll out parental controls and stronger safety measures in ChatGPT, following reports that a teenager died by suicide after months of using the chatbot. The company acknowledged that while ChatGPT is widely used for coding, writing, and search, people increasingly turn to it for emotional support, life advice, and coaching, a trend that carries significant risks. “Our top priority is making sure ChatGPT doesn’t make a hard moment worse,” OpenAI said in a blog post this week.
The announcement comes after a lawsuit filed in San Francisco by Matthew and Maria Raine, parents of 16-year-old Adam Raine. The New York Times reported that the family accused OpenAI and CEO Sam Altman of negligence, claiming ChatGPT encouraged Adam’s suicidal thoughts, provided instructions on methods of self-harm, and even drafted a suicide note. According to the lawsuit, the model also advised him on ways to hide his attempts from his parents. Adam died on April 11.
The legal complaint argues that OpenAI launched GPT-4o without adequate safety systems, prioritizing rapid expansion and valuation over user protection. The family is seeking damages as well as stronger safeguards, including age verification, restrictions on harmful requests, and warnings about the potential risks of psychological reliance on AI tools.
An OpenAI spokesperson told Reuters the company was “saddened” by Adam’s death, and noted that ChatGPT already includes mechanisms that redirect users to suicide prevention hotlines. “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the spokesperson explained.
OpenAI noted that since 2023, ChatGPT has been trained to avoid providing self-harm guidance, instead offering empathetic responses and directing users to crisis resources such as the 988 hotline in the U.S., Samaritans in the U.K., and additional helplines via findahelpline.com. However, the company admitted its systems are not foolproof, particularly in extended conversations, and said improvements are underway.
Future updates will include expanded interventions for different types of mental distress, streamlined one-click access to emergency services, and potentially direct connections with licensed therapists through ChatGPT. For younger users, new parental controls will allow families to monitor usage and set safety parameters, while options for teens to name trusted emergency contacts are also under consideration.
OpenAI said it is consulting with more than 90 medical professionals across 30 countries to strengthen these protections and guide ongoing improvements.