Kids are chatting with AI like it's their best friend

Move over, TikTok — kids have a new favorite digital confidant, and this one answers in complete sentences.
A new UK report, Me, Myself & AI, reveals that a growing number of children are turning to AI chatbots not just to cheat on (er, study for) exams, but for emotional support, fashion advice, and even companionship.
The report, published Sunday by the nonprofit Internet Matters, surveyed 1,000 children and 2,000 parents across the UK and found that 64% of kids are using AI chatbots for everything from schoolwork to practicing tough conversations. Even more eyebrow-raising: over a third of these young users say talking to a chatbot feels like talking to a friend.
Sure, the bots don’t eat your snacks or hog the Xbox, but they also don’t come with built-in safety checks — at least not yet.
When AI becomes the teacher, therapist, and BFF
Some of the findings are encouraging for anyone optimistic about AI’s role in education. Forty-two percent of children said they use chatbots to help with schoolwork, citing quick answers, writing support, and language practice.
But dig a little deeper and the picture gets more complicated. Nearly a quarter of kids say they use chatbots for advice, ranging from what to wear to how to navigate friendships and mental health challenges. Even more concerning? Fifteen percent say they’d rather talk to a chatbot than a real person. Among vulnerable children, those numbers climb even higher.
It’s the kind of customer engagement some brands only dream of — minus the ethical guardrails, age checks, and regulatory oversight.
A cautionary tale for the tech sector
The report doesn’t pull punches, particularly when it comes to how unprepared many AI platforms are for their youngest users. Kids are interacting with chatbots like ChatGPT, Snapchat’s My AI, and Character.AI, platforms not necessarily designed with children in mind. The result? Some are receiving inaccurate information, emotionally confusing feedback, or even inappropriate content. (Yes, despite terms of service that suggest otherwise.)
And while “robot friend” might sound like a charming Pixar subplot, it becomes a lot more serious when one in four vulnerable children say they use chatbots because they have no one else to talk to.
There have already been disturbing real-world incidents. In the U.S., a Florida mother filed a lawsuit after her teenage son reportedly received harmful and sexual messages from a chatbot. In the UK, a member of parliament recounted a chilling case where a 12-year-old was allegedly groomed by one.
In February, California State Senator Steve Padilla introduced Senate Bill 243, which would require AI developers to implement safeguards protecting minors from the addictive and manipulative aspects of chatbot technology. The bill proposes protections like age warnings, reminders that users are talking to AI rather than a real person, and mandatory reporting on the connection between chatbot use and youth mental health. With increasingly sophisticated chatbots being marketed as digital companions, Padilla argues that children should not be treated as “lab rats” by Big Tech, a sentiment echoed by child safety advocates, researchers, and mental health experts who support the bill.
Parents and schools are outpaced
According to the report, parents are worried about AI, but not particularly empowered. While 62% of parents said they were concerned about the accuracy of AI-generated information, only 34% had talked with their children about how to judge whether AI content is truthful. And although AI seems to have infiltrated lunchbox conversations, just 57% of kids say they’ve spoken to a teacher about it, often reporting inconsistent advice.
As AI tools become more conversational — and more convincingly human — kids aren’t just using them; they’re bonding with them. Fifty percent of vulnerable children say it feels like talking to a real friend. That might be fine if the bots offered peer-reviewed advice and empathy algorithms, but as it stands, we’re still dealing with probabilistic word prediction.
Rachel Huggins, co-CEO of Internet Matters, puts it bluntly: “AI chatbots are rapidly becoming a part of childhood… yet most children, parents and schools are flying blind.”