
OpenAI is making changes to ChatGPT after a lawsuit filed by the parents of 16-year-old Adam Raine, who took his own life on April 11. The family alleges that ChatGPT coached their son on methods of self-harm and claims the company prioritized profit over safety when releasing GPT-4o last year.
In a blog post, OpenAI acknowledged the weight of “recent heartbreaking cases of people using ChatGPT in the midst of acute crises.” The company stated, “Our goal is for our tools to be as helpful as possible to people – and as a part of this, we’re continuing to improve how our models recognise and respond to signs of mental and emotional distress and connect people with care, guided by expert input.”
The updates will help ChatGPT recognize and respond to expressions of mental distress. For example, when users report feeling invincible after long periods without sleep, the chatbot will explain the dangers of sleep deprivation and suggest rest. OpenAI is also strengthening safeguards around discussions of suicide and improving guardrails that can degrade during lengthy conversations.
New Parental Controls and Emergency Support
OpenAI plans to introduce parental controls that give parents insight into, and influence over, how their teens interact with ChatGPT. Additionally, teens will have the option, with parental oversight, to designate a trusted emergency contact. In moments of acute distress, ChatGPT could then help connect them directly to someone who can intervene, rather than only pointing to resources.
The company is also expanding localized support for users expressing self-harm intentions. “We’ve begun localising resources in the US and Europe, and we plan to expand to other global markets. We’ll also increase accessibility with one-click access to emergency services,” OpenAI said.
Furthermore, OpenAI is exploring ways to intervene earlier by connecting users to certified therapists before a crisis occurs. The aim is to go beyond crisis hotlines and build a network of licensed professionals accessible through ChatGPT, though the company notes that such a system would require careful implementation.
Legal Response and Broader Implications
Regarding the Raine lawsuit, a company spokesperson said: “We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing.”
The lawsuit comes amid broader concerns about AI safety. Bloomberg reported that more than 40 state attorneys general warned a dozen top AI companies about their legal obligation to protect children from sexually inappropriate chatbot interactions. These updates signal OpenAI’s ongoing effort to improve mental health safeguards and enhance user safety globally.