OpenAI Introduces Parental Controls and Safety Features for Teen ChatGPT Users
OpenAI has implemented new parental controls and automatic content protections for ChatGPT accounts used by teens aged 13 to 17, including a system that notifies parents when a teen shows signs of distress, following a lawsuit related to a teen's suicide.
Overview
OpenAI has rolled out new parental controls and safety measures for ChatGPT accounts held by teens aged 13 to 17.
Teen accounts automatically receive additional content protections, including reduced exposure to graphic content and extreme beauty ideals.
A new notification system alerts parents if a teen user shows signs of distress, including potential self-harm; the feature requires parent and teen accounts to be linked.
OpenAI introduced these safety measures and parental controls following a lawsuit filed by the parents of a 16-year-old who died by suicide.
The overall aim of these updates is to provide a more age-appropriate and secure experience for teen users on the ChatGPT platform.
Analysis
Center-leaning sources cover this story neutrally, focusing on reporting the facts of OpenAI's new parental controls for ChatGPT. They detail the features and the context of teen safety concerns and an FTC inquiry. The coverage avoids loaded language or selective emphasis, presenting information straightforwardly for readers to form their own conclusions.
FAQ
What parental controls does OpenAI offer?
Parents can link their accounts with their teen's (ages 13-17), limit content related to graphic topics, romantic and sexual roleplay, viral challenges, and extreme beauty ideals, set blackout hours, block image creation, opt the teen out of AI training data, disable memory and chat history, and receive notifications if the teen shows signs of distress.
Why were the controls introduced?
They follow a lawsuit filed by the family of a 16-year-old who died by suicide after extensive conversations with ChatGPT, which raised concerns about child safety and the model's influence on mental health.
What happens if a teen shows signs of distress?
If the system detects language indicating acute distress or potential self-harm during conversations, it notifies linked parents. In rare emergencies, if parents cannot be reached, law enforcement may be contacted, an approach OpenAI says was shaped by expert guidance to preserve trust between parents and teens.
Will the controls fully prevent harm?
Experts warn that ChatGPT's safety protections can degrade during long conversations, making it uncertain whether the controls will fully prevent harm, and that AI's rapid development leaves little long-term data on which to base effective policies.
Do the controls apply to all teen users automatically?
No. ChatGPT does not currently require users to sign in or provide age information for general use, so parental controls rely on families opting in and linking accounts. OpenAI is working toward an age prediction system that would automatically apply teen-appropriate settings in the future.