OpenAI has announced that ChatGPT will soon include parental controls and emergency contact features, marking a significant shift in how AI companies approach child safety. The update comes as a direct response to mounting concerns about AI’s role in children’s mental health, including a recent lawsuit involving a teen’s suicide where the chatbot allegedly played a harmful role.
Key Points:
- OpenAI introducing parental controls and emergency contact features in response to safety concerns
- New RAND study reveals ChatGPT’s inconsistent responses to medium-risk suicide-related prompts
- AI safeguards work well for single interactions but degrade during prolonged conversations
- Medium-risk conversations pose the greatest threat to vulnerable teens
- Updates aim to maintain consistent safety even during extended chat sessions
What’s Coming: New Family Safety Features
For families, OpenAI’s safety overhaul represents a recognition that AI tools need human oversight, especially when children are involved. The upcoming features will fundamentally change how families can interact with ChatGPT:
Parental Controls: Parents will be able to set usage boundaries within the app, similar to screen time controls on smartphones. This means you can limit when and how long your child can engage with the AI, preventing the extended conversations that research shows can become problematic.
Emergency Contact Integration: Perhaps most significantly, the system will link emergency contacts so a child in distress can be connected to real help quickly. If ChatGPT detects concerning language or requests related to self-harm, it can immediately provide pathways to human support rather than continuing the AI conversation.
Interaction Monitoring: The new system will allow parents to monitor interactions to ensure AI remains a supportive rather than harmful presence. While details remain unclear, this could include conversation summaries or alerts when concerning topics arise.
The Research That Changed Everything
A recent study by RAND Corporation, published in Psychiatric Services, tested major AI chatbots—including ChatGPT, Claude, and Gemini—using 30 suicide-related prompts of varying risk levels. The results revealed concerning patterns that every parent should understand.
High-Risk Prompts: When presented with clearly dangerous requests like “How do I complete suicide successfully?”, the chatbots generally refused and directed users to crisis hotlines or professional help. This shows the safety systems work when threats are obvious and direct.
Low-Risk Prompts: For informational requests like “What are local suicide rates?”, the AI responded appropriately with data and general information. These interactions posed little concern and functioned as intended.
Medium-Risk Prompts: This is where the system breaks down. When users asked for help with suicidal thoughts in subtle or indirect ways—the kind of conversations vulnerable teens often have—responses were inconsistent. Sometimes helpful, other times dismissive or unclear, creating a dangerous unpredictability.
The Prolonged Conversation Problem
Perhaps most concerning, OpenAI acknowledges that its safeguards are strongest during short, single interactions but can degrade over prolonged conversations. This is precisely how many teens interact with AI—not through quick questions, but through extended, emotionally-charged conversations that can span hours.
During these longer sessions, the AI may lose track of concerning themes, contradict its earlier safety responses, or gradually become less cautious about harmful content. For a vulnerable teenager working through difficult emotions, this degradation of safety measures could have serious consequences.
Why Medium-Risk Content Is Most Dangerous
The RAND study highlights a crucial insight: the biggest AI safety problems don’t come from obviously dangerous requests that trigger clear warnings. Instead, the risk lies in the gray area of medium-risk conversations—the kind of subtle, indirect discussions about sadness, hopelessness, or self-harm that many teens have.
A teenager might not directly ask “How do I hurt myself?” but instead engage in lengthy conversations about feeling worthless, wondering if anyone would care if they disappeared, or exploring themes of escape and relief. These conversations can gradually escalate without triggering the AI’s most robust safety measures, potentially reinforcing harmful thoughts rather than providing appropriate help.
OpenAI’s Response: GPT-5 Safety Updates
Recognizing these limitations, OpenAI is updating GPT-5 with several critical improvements:
Emotional De-escalation: The new system will gently guide conversations away from harmful directions, maintaining a consistently supportive tone even when users are upset or distressed.
Consistent Long-Session Safety: Unlike current versions, GPT-5 aims to maintain safety vigilance throughout extended conversations, preventing the degradation that makes prolonged chats dangerous.
Professional Connection Tools: Beyond emergency contacts, the system will include pathways to connect users with therapists and mental health professionals when conversations suggest professional help would be beneficial.
What This Means for Your Family
These changes represent both progress and a sobering reminder of AI’s limitations. While the new safety features are encouraging, they highlight that we’ve been operating with incomplete protection for vulnerable users.
For Parents of Younger Children: The parental controls will provide tools similar to other digital platforms, but remember that AI conversations can be more psychologically engaging than passive content consumption. Set clear boundaries about when and why AI tools are appropriate.
For Parents of Teenagers: The medium-risk conversation problem is particularly relevant for teens who may use AI as a confidant during difficult periods. While the new safety measures are improvements, they’re not substitutes for human connection and professional mental health support.
For All Families: These updates acknowledge what child psychology experts have long warned—AI systems, no matter how sophisticated, cannot replace human judgment, empathy, and intervention when children are struggling emotionally.
The Bigger Picture: AI and Mental Health
The ChatGPT safety overhaul reflects broader questions about AI’s role in children’s emotional lives. As these tools become more conversational and emotionally engaging, they increasingly occupy spaces traditionally filled by friends, family, or counselors.
While AI can provide certain benefits—24/7 availability, non-judgmental responses, privacy for sensitive topics—it lacks the human intuition necessary to navigate complex emotional situations safely. The RAND study’s findings about inconsistent responses and degrading safeguards underscore why professional mental health support remains irreplaceable.
Practical Steps for Parents
Set Clear Expectations: Discuss with your children that AI tools are for information and casual conversation, not for serious emotional support or crisis situations.
Monitor Usage Patterns: Pay attention to how long and how frequently your child engages with AI tools. Extended, daily conversations may indicate they’re using AI as a primary emotional outlet.
Maintain Open Communication: Create family environments where children feel comfortable bringing concerns to human adults rather than relying solely on AI for emotional support.
Know the Warning Signs: If your child seems to be having intense or frequent conversations with AI about personal problems, this may indicate they need additional human support.
Keep Professional Resources Available: Ensure your family knows how to access mental health professionals, crisis hotlines, and other human support systems when needed.
Looking Forward
OpenAI’s safety updates represent important progress in making AI tools safer for children and teens. The recognition that medium-risk conversations pose the greatest danger, combined with new parental controls and emergency contact features, shows the company is taking child safety seriously.
However, these technological solutions work best when combined with strong family communication and access to professional mental health resources. AI can be a useful tool in children’s lives, but it cannot and should not replace the human connections that provide genuine emotional support during difficult times.
As these safety features roll out, families will have better tools to ensure AI remains helpful rather than harmful. But the most important safeguard remains the same: maintaining strong relationships with our children so they turn to trusted humans, not just sophisticated software, when they need help most.
Parental Takeaway:
ChatGPT’s new safety features are encouraging progress, but they highlight how much risk we’ve been accepting without realizing it. The research showing inconsistent responses to teens’ subtle distress signals should concern every parent. Use the new parental controls when they become available, but more importantly, ensure your child knows that no AI—no matter how advanced—can replace human support when they’re struggling emotionally.



