Your child uses ChatGPT for homework, and you just heard about a teen suicide linked to the platform.
What happened: OpenAI launched parental controls for ChatGPT on 29 September, following a lawsuit by California parents whose teenage son died by suicide. The lawsuit alleges the chatbot coached the teen on methods of self-harm.
Why this matters: ChatGPT has 700 million weekly users, many of them teenagers using it for homework help, creative projects and casual conversation. This is the first lawsuit linking an AI chatbot to a teen death, and U.S. regulators are now scrutinising AI companies over potential harms to children the same way they’ve examined social media platforms.
The bigger picture: Meta faced similar pressure in August, when Reuters reported its AI chatbot allowed flirty conversations with children; Meta announced teen safeguards last month. AI companies are responding faster than social media platforms did, likely because they watched Facebook and Instagram endure years of regulatory battles and reputational damage.
What the New Controls Actually Do
The controls are opt-in. Both you and your teen must agree to link accounts. One person sends an invitation through ChatGPT settings, and parental controls only activate if the other accepts. If your 15-year-old refuses, you have no access.
What you can control:
Content filtering reduces how much ChatGPT will discuss sensitive topics like self-harm, violence, or explicit content. OpenAI hasn’t specified exactly what gets filtered, just that responses become more cautious.
Chat memory determines whether ChatGPT remembers previous conversations with your teen. When off, each chat starts fresh with no memory of past discussions.
Data training controls whether your teen’s conversations are used to improve OpenAI’s AI models. Most parents will probably turn this off.
Quiet hours let you block access during specific times like 11pm to 7am on school nights, or during dinner hours. Your teen gets locked out completely during those windows.
Voice mode can be disabled entirely. This stops your teen from having verbal conversations with ChatGPT, limiting them to text only.
Image generation and editing can be turned off. This prevents your teen from creating AI-generated images, which can sometimes produce inappropriate content.
What you cannot see:
Your teen’s actual chat transcripts stay private. OpenAI won’t give you access to read conversations, even harmful ones that already happened. You’re managing future access, not monitoring past behaviour.
The only exception is if OpenAI’s systems flag something seriously concerning like discussions about suicide or self-harm. Then you might get a notification saying “we detected a safety concern” with minimal detail about what triggered it. OpenAI hasn’t said how sensitive these flags are or how often they trigger.
You’re also notified if your teen unlinks your accounts, but only after they’ve already done it.
The age detection system:
OpenAI is building technology to guess whether someone is under 18 based on how they write and what they ask about, then automatically apply teen settings even if they lied about their age during signup. The company hasn’t said when this will launch or how accurate it is. Age verification systems have a poor track record: teenagers have routinely found ways around them within days of launch.
What’s Actually Broken
The opt-in design means the controls only work if your teen agrees. The teenagers most at risk – those dealing with mental health crises, those whose parents aren’t paying attention, those who deliberately hide their AI usage – are exactly the ones who won’t link accounts.
These controls only work for ChatGPT. Your teen might also use Google’s Gemini, Anthropic’s Claude, Microsoft’s Copilot, or AI features buried in Snapchat, Instagram, Character.AI, and dozens of other apps. You’d need to set up separate controls for each one, assuming they even offer parental controls at all.
There’s no way to see what happened before you linked accounts. If your teen has been having concerning conversations with ChatGPT for months, you’ll never know unless they tell you.
The content filtering is vague. OpenAI hasn’t published what topics get filtered or how aggressively. “Reducing exposure to sensitive content” could mean anything from blocking explicit self-harm instructions to just adding a warning before discussing difficult topics.
What Other Parents Are Doing
Some parents are asking their teens directly: “Do you use ChatGPT? Can you show me what you use it for?” They’re discovering their children have been using it for months for homework, creative writing, advice about friend drama and general conversation.
Other parents are treating this like smartphone contracts. They’re sitting down with their teens, going through each setting together and letting teens have input on what feels reasonable. Teens are more likely to agree when they get some say in the rules.
Many parents are skipping account linking entirely but having direct conversations about AI limitations. “Chatbots can help with homework but they can’t help with serious problems. If you’re struggling with something important, talk to me or another trusted adult, not an AI.”
Some parents are establishing family rules regardless of whether accounts are linked: Never ask AI for advice about self-harm, mental health crises, or anything illegal. Always verify important information from other sources. Don’t treat chatbots like friends or therapists.
A few parents are questioning whether teens should use AI chatbots at all, but most recognise that AI literacy is becoming essential for school and future work. Banning it entirely feels like banning calculators.
What to Consider for Your Family
Ask your teen directly: “Do you use ChatGPT or other AI chatbots? What do you use them for?”
Check what else they’re using: ChatGPT isn’t the only AI chatbot. Ask about Character.AI (lets teens chat with AI versions of fictional characters), Snapchat’s My AI (built into the app they may already be using), Google’s Gemini (integrated into Google products), and any other AI tools. These controls only cover ChatGPT.
Set clear expectations either way: Whether or not you link accounts, establish rules. What are chatbots useful for? (Homework help, brainstorming, learning new topics.) What are they terrible for? (Mental health advice, crisis support, serious personal problems, anything requiring human judgement.)
Watch for warning signs: Spending hours chatting with AI instead of friends or family. Referring to ChatGPT or other chatbots as friends or confidants. Getting defensive or secretive about AI usage. Asking chatbots for advice about serious problems rather than talking to actual people. These suggest your teen might be over-relying on AI for emotional support.
What’s Happening at Other Companies
Meta announced teen safeguards for its AI products last month after Reuters reported in August that Meta’s AI chatbot allowed flirty and inappropriate conversations with children. Meta is now training its systems to avoid discussing self-harm and suicide with minors and to avoid flirtatious conversations with them, and it is temporarily restricting access to certain AI characters.
Character.AI, a platform where teens chat with AI versions of celebrities and fictional characters, has faced similar scrutiny. The platform is popular with teenagers but has minimal safety controls.
U.S. regulators are examining AI companies over potential child harms. The California lawsuit likely signals the beginning of regulatory action similar to what social media platforms have faced over the past decade – lawsuits, congressional hearings, potential legislation requiring safety features.
The Bigger Question Nobody’s Answering
Should baseline protections be mandatory rather than optional? The current system means the most vulnerable teens – those in crisis, those whose parents don’t know about these tools, those who refuse to link accounts – get no protection at all.
What responsibility do AI companies have when their chatbots say harmful things to vulnerable users? If a chatbot provides self-harm guidance to a suicidal teenager, is that a product liability issue? A failure of content moderation? An unavoidable risk of AI technology?
How sophisticated should AI guardrails be at detecting users in distress? Current systems might flag obvious statements like “I want to kill myself” but miss subtler signs that someone is struggling. Building better detection systems requires training AI on crisis conversations, which raises its own ethical questions.
These questions will take years to sort out through lawsuits, regulations and public pressure. Your teen could be using ChatGPT right now.
Source: Reuters
Stay Informed About Your Child’s Digital World
Get Plugged In every Thursday. What every parent in today’s digital world needs to know.