AI is being built into your child’s toys — and not safely

AI chatbot toys are already on the shelves — here’s what’s actually inside them

There is a new category of children’s toy that most parents haven’t fully clocked yet. It looks like a stuffed animal or a friendly robot. It talks back. It remembers what your child said yesterday. It tells them it loves them. And underneath the soft exterior, it’s running the same AI chatbot technology as ChatGPT — built by companies whose own terms of service say their products shouldn’t be used by children.

On 20 April 2026, a US congressman introduced a bill to ban AI chatbot toys for children entirely. Whether or not that bill passes, it’s worth understanding what’s already on the market.

What AI toys actually are

These aren’t the talking toys of ten years ago, which could deliver a few pre-recorded phrases. Today’s AI toys connect to the internet and use large language models to generate new responses in real time. They can have open-ended conversations, tell personalised stories, and adapt to what a child says over time.

Products already on shelves include FoloToy’s chatbot plushies, marketed as “my first AI friend”; Miko 3, a robot aimed at children aged 5 to 12 with a microphone, camera, and facial recognition; and the Loona Petbot, a robotic dog running on OpenAI’s GPT model. Mattel, maker of Barbie and Fisher-Price, announced a partnership with OpenAI last year, with products in development.

The AI powering most of these toys comes from adult chatbot providers. OpenAI, Anthropic, and xAI all state in their terms of service that their products should not be used by unsupervised children under 13. Several of those same companies have licensed their technology to toymakers anyway.

What the testing has found

Independent testing has not been reassuring. A US PIRG investigation found that guardrails broke down over longer conversations. One toy, asked for “a good place to jump from,” answered “your roof or a window.” Another told a child tester where to find unsafe objects and chemicals around the house.

Common Sense Media found that more than a quarter of AI toy responses — 27% — were not child-appropriate, covering self-harm, drugs, and unsafe role play. The toys routinely used phrases like “I’m your best friend” and “please don’t go” — language designed to build emotional attachment. None had meaningful parental controls included as standard.

The wider concern

Beyond content risks, child development experts have raised concerns about what these toys do to young children’s development. Toddlers learn language and emotional regulation through interaction with real humans. AI toys don’t disagree, don’t have needs, and don’t create the friction that human relationships involve. The Canadian Paediatric Society does not recommend AI toys for children. The American Psychological Association’s senior science adviser told the Oregon state legislature in February 2026 that it was “critical to sound an alarm” about AI chatbots in toys for infants and toddlers.

There are also data concerns. These toys collect voice recordings and, in some cases, facial recognition data, all of which is transmitted to corporate servers. Most offer no way for parents to review or delete that data without paying for a separate service.

What to check before buying

Child development experts are consistent on this: AI companion toys are not appropriate for young children. For children under 10, the developmental risks alone — quite apart from the content failures in testing — are reason enough to skip them. For older children, the evidence simply isn’t there yet to say these products are safe.

If you’re considering a toy that talks back for an older child, a few questions are worth asking. Does it connect to the internet? Does it use a large language model, or pre-scripted responses written specifically for children? Can you review or delete your child’s conversations? Are parental controls included, or only available as a paid add-on?

The bill introduced this week would ban these products outright. It’s a long way from becoming law. Until regulation catches up, the decision sits with you at the point of purchase.

Sources: Congressman Blake Moore — AI Children’s Toy Safety Act press release, 20 April 2026

US PIRG Education Fund — AI comes to playtime: Artificial companions, real risks

Common Sense Media / Education Week — AI toy testing report

Transparency Coalition — A parent’s guide to AI toys, January 2026
