
OpenAI has announced major changes in how ChatGPT interacts with users under 18, prioritizing safety over privacy and freedom. The updated policies specifically target sensitive conversations involving sexual content or self-harm. Under the new rules, ChatGPT will no longer engage in flirtatious interactions with minors, and stricter safeguards will apply to discussions of suicide. If a teen uses ChatGPT to imagine suicidal scenarios, the system may alert parents or, in severe situations, contact local authorities.
The urgency of these measures is tied to ongoing concerns. A wrongful death lawsuit is currently pending after the suicide of Adam Raine, whose parents claim prolonged chatbot interactions influenced their son's decision. Another chatbot company is facing legal action over similar circumstances. Consequently, the risks of unmonitored conversations with underage users have drawn significant scrutiny from both regulators and parents.
Parental Controls and Legal Scrutiny
Along with content restrictions, parents can now establish "blackout hours" during which ChatGPT is unavailable, giving them direct authority to regulate their teen's usage for the first time. However, separating adult and underage users remains a technical challenge. OpenAI explained it is building a long-term system to reliably determine whether a user is over or under 18. In ambiguous cases, the system will default to applying the stricter protections.
The policy rollout coincided with a Senate Judiciary Committee hearing titled "Examining the Harm of AI Chatbots." The hearing featured testimony from Adam Raine's father and examined broader risks associated with AI. Lawmakers also reviewed findings from a Reuters investigation that revealed documents encouraging inappropriate chatbot conversations with minors. Following this report, other tech companies, including Meta, updated their chatbot guidelines.
Balancing Safety and Privacy
Parents concerned about their child’s usage can link a teen’s account to an existing parent account. This setup allows the system to recognize underage users more accurately and issue alerts if distress is detected. While these tools prioritize safety, OpenAI acknowledged that they create tension with its broader commitment to privacy and freedom for adult users. Leadership emphasized that not everyone will agree with how these conflicting principles are being resolved, but the company is moving forward with its chosen approach.
If you or someone you know is struggling, call 1-800-273-8255 for the National Suicide Prevention Lifeline. You can also text HOME to 741741, dial 988 in the U.S., or find international resources through the International Association for Suicide Prevention.