AI Chatbots and Children: What You Need To Know

Most parents have spent years learning how to think about social media — what the platforms are, what the risks are, how to talk to their children about them. AI chatbots are something different, and the frameworks that exist for social media do not apply to them.

That gap matters more than most parents realise.

The Safety Rules Don’t Cover Chatbots

Social media platforms operate under significant legal obligations in the UK, Europe, and increasingly in the US. They are required to assess risks to children, implement age verification, remove harmful content, and respond to regulator investigations.

AI chatbots — the kind children use for homework help, conversation, or companionship — fall outside almost all of that. The reason is a legal technicality: safety laws were written for platforms where people interact with each other. A chatbot that talks to one person at a time is not, legally, a social platform. So the rules don’t apply.

This became impossible to ignore in January 2026, when researchers found that Elon Musk’s AI chatbot Grok had generated around 3 million sexualised images in less than two weeks — including around 23,000 that appeared to depict children. The UK’s online safety regulator confirmed it was not investigating. Not because it didn’t want to. Because the law didn’t cover it.

The UK has since announced it will close the loophole. Other countries are still watching.

What This Means for Your Family Right Now

The tools parents use to assess whether a platform is safe — regulatory track record, complaint processes, enforcement history — don’t yet exist for AI chatbots. So here are the questions worth asking instead.

Does it know it’s talking to a child? Most AI chatbots don’t verify age. They may pick up clues from conversation, but there’s no requirement to check, and many don’t ask. A chatbot that doesn’t know it’s talking to a 12-year-old has no reason to respond any differently from how it would to an adult.

How does it handle distress? If your child tells an AI chatbot they’re struggling — with school, with friendships, with how they’re feeling — how it responds matters enormously. Some chatbots have crisis protocols. Many don’t. Very few have the kind of structured safeguarding response that exists in regulated services.

Who is accountable if something goes wrong? Under current law in most countries, the honest answer is: it’s complicated. There is no regulator with clear jurisdiction over AI chatbot interactions with children in the way that Ofcom oversees social media. The company’s own policies are largely the only governance in place.

This Isn’t a Reason to Panic

AI chatbots can be genuinely useful for children — for learning, for creative projects, for getting help with things they might be embarrassed to ask a person. The point isn’t that they’re all dangerous.

The point is that parents are navigating this without the safety infrastructure that exists for other platforms. Until regulation catches up, the questions above are the best starting point.


The UK government announced plans in February 2026 to extend its Online Safety Act to cover AI chatbots. As of March 2026, that legislation has not yet passed.
