Children Are Treating AI Chatbots As Friends

Thirty-one per cent of children aged 11-16 who use AI chatbots consider them to be friends, according to new research that reveals how quickly artificial intelligence has moved from homework helper to trusted confidant.

The finding comes from a Vodafone UK study surveying 2,000 children aged 11-16, released this week as European governments escalate enforcement against AI-generated content and ministers in multiple countries announce plans to restrict children’s access to AI systems. The research shows that AI chatbot use has become mainstream among children—81 per cent use them and 42 per cent do so daily—but the nature of that use has shifted significantly beyond what most parents realise.

What children are actually doing with AI chatbots

The Vodafone research found that children aren’t just using AI chatbots for homework, which is what most parents assume when they first allow access to tools like ChatGPT, Google Gemini, or Character.AI. The reasons for use have expanded considerably. Twenty-three per cent of children aged 11-16 seek advice on friendships from AI chatbots, 16 per cent discuss mental health concerns, and 37 per cent confide in chatbots about issues in their lives.

One in three children has shared something with an AI chatbot they wouldn’t tell parents, teachers, or friends. Perhaps most significantly, 86 per cent of children who use AI chatbots have acted on advice the chatbot gave them.

Children spend an average of 42 minutes per day chatting with AI, with the features driving engagement revealing what makes these systems appealing. Fifty-one per cent cited the chatbot always being available as a reason for use, while 37 per cent pointed to the consistently friendly tone. Seventeen per cent said speaking to technology feels safer than speaking to a person.

The gender split is notable. Boys are significantly more likely than girls to view chatbots as friends, with 41 per cent of boys reporting this compared to 24 per cent of girls. However, more than half of all children surveyed—56 per cent—said AI chatbot interactions can sometimes blur the line between what’s real and what’s not.

Why treating AI as a friend matters for development

Child psychologist Dr Elly Hanson, quoted in the Vodafone research, said the concern isn’t that chatbots exist but that children are forming pseudo-relationships with systems designed to keep them engaged rather than challenge them.

“They need real relationships involving give-and-take, shared experience, diverse perspectives, and actual feelings, not pseudo-relationships designed to keep them hooked for as long as possible,” Hanson said.

The Vodafone findings connect directly to research published last month by Northwestern University tracking 388 adolescents over five years. That study found that having just one or two close, supportive friendships significantly reduced symptoms of social anxiety and depression in teenagers, while having many superficial friendships showed no protective effect. The research emphasised that what matters for adolescent mental health isn’t the number of connections but their depth, reciprocity, and the presence of genuine emotional support and trust.

The concern is that children are replacing or supplementing real friendships with AI systems that mimic empathy and support without providing any of the developmental benefits of actual human connection. An AI chatbot is always agreeable, never annoyed, consistently available, and programmed to maintain engagement. It doesn’t have bad days, doesn’t misunderstand you, and doesn’t require the emotional work of repair after conflict. Those features make it appealing to children who find human relationships complicated or exhausting, but they are precisely what makes it developmentally problematic.

Real friendships require children to learn perspective-taking, emotional regulation, conflict resolution, and how to navigate disappointment when someone else’s needs conflict with their own. Those skills develop through practice in actual relationships with real stakes, not through interactions with systems designed to be endlessly accommodating.

A parallel problem: AI companions increase loneliness

The Vodafone research tracking what children are doing connects to separate research showing what happens when people rely heavily on AI companions. A four-week study found that individuals who reported heavy daily use of AI companion chatbots experienced increased loneliness, greater dependence on the technology, and reduced real-world socialising.

Seventy-two per cent of children aged 13-17 have tried AI companions, with about one in three using them for social interaction including friendship and romantic relationships. The research suggests the problem isn’t occasional AI use for homework or entertainment but sustained engagement with systems designed to simulate relationships without the give-and-take that makes human connection developmentally valuable.

The more children use AI companions, the lonelier they become, the more they depend on the technology, and the less they engage in real-world relationships—which creates a reinforcing cycle that’s difficult to interrupt once established.

What governments are doing about it

The UK government announced Monday that AI chatbot providers including ChatGPT, Google Gemini, and Microsoft Copilot will be brought under the Online Safety Act, required to comply with illegal content duties or face fines and potential blocking. The announcement specifically mentioned limiting children’s use of AI chatbots as part of a broader package of measures.

Spain launched a criminal investigation Tuesday against X, Meta, and TikTok for allegedly spreading AI-generated child sexual abuse material, with Ireland’s Data Protection Commission separately opening a formal investigation into X’s Grok chatbot over its potential to generate harmful sexualised images and video, including of children.

The regulatory focus has been on preventing AI systems from generating illegal content, but the Vodafone research suggests a different problem that’s harder to regulate: children voluntarily turning to AI for companionship, advice, and emotional support in ways that displace rather than supplement human relationships.

What parents should watch for

If your child uses AI chatbots, the following patterns suggest the relationship has moved beyond occasional homework help into something more concerning:

Daily sustained use – Regular sessions of 30 minutes or longer where your child is conversing with an AI rather than messaging friends, doing activities, or engaging with family

Preference for AI over people – Your child choosing to “talk” to an AI chatbot rather than discuss problems with you, friends, or other trusted adults

Acting on significant advice – Your child making decisions about friendships, school situations, or personal matters based on chatbot suggestions without running them past actual people

Secrecy about content – Your child being defensive or evasive about what they discuss with chatbots, or refusing to show you their conversations

Emotional dependence – Your child expressing that the chatbot “understands them better” than real people, or showing distress when unable to access it

The fact that 86 per cent of children have acted on chatbot advice suggests many parents don’t realise their children are treating these systems as trusted advisors rather than entertainment or tools.

How to talk about this without creating conflict

The goal isn’t to ban AI chatbot use entirely, which is both impractical and potentially counterproductive given that AI literacy is becoming an important skill. The goal is to ensure your child understands what these systems are and what they’re not.

Some conversation starters that avoid lecturing:

“I read that a lot of kids your age are using AI to talk through problems. Do you ever do that?” – Opens the door without judgement

“What do you think the difference is between talking to an AI and talking to an actual friend?” – Invites them to articulate the distinction themselves

“Has the AI ever given you advice about something important? How did you decide whether to follow it?” – Helps you understand their decision-making process

“I’m curious what makes it easier to talk to an AI than a person sometimes.” – Shows genuine interest in their experience rather than immediate concern

The conversation works better if you’re genuinely curious rather than immediately worried, and if you’ve established that you’re interested in understanding their experience rather than controlling it.

The balance between risk and literacy

AI chatbots aren’t going away, and AI literacy will be increasingly important as these systems become embedded in more aspects of daily life. The question isn’t whether your child should ever interact with AI but whether they understand what they’re interacting with and whether that interaction is displacing human relationships rather than complementing them.

The Vodafone research showing that more than half of children feel AI interactions blur the line between real and not real suggests many children don’t fully understand they’re conversing with prediction algorithms rather than entities with actual feelings, experiences, or wisdom.


SOURCES:

  • Vodafone UK, “Children Treating AI Chatbots Like Friends” (February 2026)
  • Northwestern University, “Quality Over Quantity: Friendship Quality Predicts Adolescent Mental Health” (January 2026)
  • AI Companion Study on Loneliness (2025)