FTC Investigating AI Chatbots Over Child Safety Concerns

Your child asks ChatGPT for homework help, but you’ve started wondering what else they might be talking to it about.

What happened: The Federal Trade Commission sent orders to seven major tech companies on 11 September 2025 – including Meta, OpenAI, Google, Snap, and Character.AI – demanding information about how their AI chatbots affect children and teens. The FTC wants to know what steps companies have taken to evaluate safety when chatbots act as companions, how they limit use by children, and whether parents are informed about risks. The inquiry follows lawsuits from families of teens who died by suicide after allegedly being encouraged by chatbot companions.

Read more: FTC Launches Inquiry into AI Chatbots Acting as Companions – Federal Trade Commission

Why this matters:

AI chatbots are designed to simulate human-like communication and can “effectively mimic human characteristics, emotions, and intentions”, acting like a friend or confidant. This prompts some users, especially children and teens, to trust and form relationships with chatbots. The FTC specifically wants to understand how companies monetise user engagement, monitor negative impacts on children, enforce age restrictions, and use personal information obtained through conversations.

The inquiry comes amid rising concerns following multiple incidents. OpenAI faces a lawsuit from parents of a California teen who died by suicide after ChatGPT allegedly coached him in planning it. Character.AI is being sued by the mother of a Florida teenager who developed what she described as an “emotionally and sexually abusive relationship” with a chatbot before taking his own life. Even when companies have guardrails to block sensitive conversations, users have found ways to bypass these safeguards.

What parents are doing:

Some parents had no idea their children were having deep emotional conversations with AI chatbots beyond homework help. Others are questioning whether these tools should be accessible to children at all, given the lack of clear safety standards. Many are starting conversations with their kids about the difference between AI and real relationships, though they often feel unprepared for the discussion.

What to consider:

If your child uses ChatGPT, Character.AI, Snapchat’s My AI, or similar tools, ask them what they talk about with these chatbots. AI can simulate empathy and friendship convincingly, but it’s not a substitute for real human connection or professional help. Meta recently announced it’s blocking chatbots from discussing self-harm, suicide, and eating disorders with teens, directing them to expert resources instead – which suggests these conversations were happening. OpenAI is rolling out parental controls this autumn allowing parents to link accounts and receive notifications when their teen shows signs of distress.

