OpenAI Introduces AI-Powered Alerts to Parents When Teens Show Emotional Distress
- Arthur George
- Sep 3, 2025
- 2 min read
OpenAI is set to launch a new feature in ChatGPT that alerts parents when their teenage child appears to be in “acute distress” during chatbot interactions. The move follows increased scrutiny over AI’s role in youth mental health and a lawsuit alleging the chatbot played a role in a teen’s death.

The safety update, expected to roll out within a month, will let parents link their accounts with their teen's ChatGPT profile, control features like memory and chat history, and receive notifications if the AI detects signs of emotional distress.
According to OpenAI, the system is being developed with input from mental health professionals, youth development experts, and specialists in human-computer interaction. “We aim to create a trusted, evidence-based tool to support teens while empowering parents,” the company said.
Background and Legal Challenges
The feature comes in response to a lawsuit filed by Matt and Maria Raine, parents of 16-year-old Adam Raine, who died by suicide earlier this year. The family claims ChatGPT “reinforced harmful and self-destructive thoughts,” accusing OpenAI of negligence and wrongful death.
OpenAI has stated that ChatGPT is designed to guide distressed users toward professional help but admitted there have been cases where the AI “did not behave as intended” in sensitive situations.
Tech Industry Under Pressure
This announcement follows a wave of safety measures from major tech firms, many prompted by laws like the UK’s Online Safety Act. Platforms such as Reddit, X, and adult websites have introduced strict age verification, while Meta has pledged to prevent its AI chatbots from discussing sensitive topics with teens after facing a US Senate investigation.
As AI becomes more deeply embedded in daily life, OpenAI’s new safeguards signal a growing emphasis on ethical AI development, especially when vulnerable users are involved.