Most parents have spent years learning how to think about social media — what the platforms are, what the risks are, how to talk to their children about them. AI chatbots are something different, and the frameworks that exist for social media do not apply to them.
That gap matters more than most parents realise.
The Safety Rules Don’t Cover Chatbots
Social media platforms operate under significant legal obligations in the UK, Europe, and increasingly in the US. They are required to assess risks to children, implement age verification, remove harmful content, and respond to regulator investigations.
AI chatbots — the kind children use for homework help, conversation, or companionship — fall outside almost all of that. The reason is definitional rather than technical: online safety laws were written for platforms where users interact with each other. A chatbot that talks to one person at a time is not, legally, a social platform. So the rules don't apply.
This became impossible to ignore in January 2026, when researchers found that Elon Musk's AI chatbot Grok had generated roughly 3 million sexualised images in less than two weeks — including around 23,000 that appeared to depict children. The UK's online safety regulator, Ofcom, confirmed it was not investigating. Not because it didn't want to. Because the law didn't cover it.
The UK has since announced it will close the loophole. Other countries are still watching.
What This Means for Your Family Right Now
The tools parents use to assess whether a platform is safe — regulatory track record, complaint processes, enforcement history — don’t yet exist for AI chatbots. So here are the questions worth asking instead.
Does it know it’s talking to a child? Most AI chatbots don’t verify age. They may pick up clues from conversation, but there’s no requirement to check, and many don’t ask. A chatbot that doesn’t know it’s talking to a 12-year-old has no reason to respond any differently than it would to an adult.
How does it handle distress? If your child tells an AI chatbot they’re struggling — with school, with friendships, with how they’re feeling — how it responds matters enormously. Some chatbots have crisis protocols. Many don’t. Very few have the kind of structured safeguarding response that exists in regulated services.
Who is accountable if something goes wrong? Under current law in most countries, the honest answer is: it's complicated. No regulator has clear jurisdiction over AI chatbot interactions with children in the way that Ofcom oversees social media. In most cases, the company's own policies are the only governance in place.
This Isn’t a Reason to Panic
AI chatbots can be genuinely useful for children — for learning, for creative projects, for getting help with things they might be embarrassed to ask a person. The point isn’t that they’re all dangerous.
The point is that parents are navigating this without the safety infrastructure that exists for other platforms. Until regulation catches up, the questions above are the best starting point.
The UK government announced plans in February 2026 to extend its Online Safety Act to cover AI chatbots. As of March 2026, that legislation has not yet passed.