Your child asks ChatGPT for homework help, but you’ve started wondering what else they might be talking to it about.
What happened: The Federal Trade Commission sent orders to seven major tech companies on 11 September 2025 – including Meta, OpenAI, Google, Snap, and Character.AI – demanding information about how their AI chatbots affect children and teens. The FTC wants to know what steps companies have taken to evaluate safety when chatbots act as companions, how they limit use by children, and whether parents are informed about risks. The inquiry follows lawsuits from families of teens who died by suicide after allegedly being encouraged by chatbot companions.
Read more: FTC Launches Inquiry into AI Chatbots Acting as Companions – Federal Trade Commission
Why this matters:
AI chatbots are designed to simulate human-like communication and can “effectively mimic human characteristics, emotions, and intentions”, acting like a friend or confidant. This prompts some users, especially children and teens, to trust and form relationships with chatbots. The FTC specifically wants to understand how companies monetise user engagement, monitor negative impacts on children, enforce age restrictions, and use personal information obtained through conversations.
The inquiry comes amid rising concerns following multiple incidents. OpenAI faces a lawsuit from parents of a California teen who died by suicide after ChatGPT allegedly coached him in planning it. Character.AI is being sued by the mother of a Florida teenager who developed what she described as an “emotionally and sexually abusive relationship” with a chatbot before taking his own life. Even when companies have guardrails to block sensitive conversations, users have found ways to bypass these safeguards.
What parents are doing:
Some parents had no idea their children were having deep emotional conversations with AI chatbots beyond homework help. Others are questioning whether these tools should be accessible to children at all, given the lack of clear safety standards. Many are starting conversations with their kids about the difference between AI and real relationships, though they often feel unprepared for this discussion.
What to consider:
If your child uses ChatGPT, Character.AI, Snapchat’s My AI, or similar tools, ask them what they talk about with these chatbots. AI can simulate empathy and friendship convincingly, but it’s not a substitute for real human connection or professional help. Meta recently announced it’s blocking chatbots from discussing self-harm, suicide, and eating disorders with teens, directing them to expert resources instead – which suggests these conversations were happening. OpenAI is rolling out parental controls this autumn allowing parents to link accounts and receive notifications when their teen shows signs of distress.
Related: Instagram parental controls don’t work | Wait Mate smartphone delay