We failed Gen Z on social media – we cannot fail them on AI, too

Earlier this year I ran three focus groups with university students to answer what sounds like a simple question: how are Generation Z really using artificial intelligence (AI)?

I expected to hear about coursework – enthusiastic tales of how ChatGPT was helping them outline their dissertations or summarise academic texts. And I did. Most of the students admitted using it to structure or edit their essays. But the truly startling revelations were not about how they were using AI in their studies. Something deeper – and more disturbing – quickly emerged.

Netflix’s Adolescence exposed the darkest corners of social media culture; the damaging ways children and young adults are using AI in their personal lives are still, for most of us, barely understood.

Immediately clear was that all the students were now using chatbots as personal confidence coaches and etiquette advisers – part umpire, part interpreter. Some were putting their messages through AI before pressing send so they didn’t “sound harsh”. Others would feed a blow-by-blow account of an argument with a friend into ChatGPT to “establish who was right”. Then there were those using it to decode ambiguous dating messages – pasting a boyfriend’s text into the chatbot to work out what he “really meant” – and then generate a suitably calibrated reply.

Their relationships with their parents were often passing through a similar AI filter: when a parent asked them something, they would run their draft response through ChatGPT first, just to “check” they had struck the right tone.

“It helps me properly structure what I’m feeling,” one participant explained. Another told me: “I use it to check I’ve responded to a social situation correctly.” A third confessed to what she called “bathroom-break icebreakers”: nipping out mid-meet to ask ChatGPT for a list of questions that might rescue an awkward conversation.

At precisely the age when young adults need to be learning to read body language, interpret nuance, apologise, forgive, and figure out when to stand their ground, an astonishing amount of this foundational emotional work is now being subcontracted to a machine.

But it is not just how to interact socially that these Gen Z’ers are asking AI to determine. “I just ask ChatGPT now whenever I’m asked my opinion on anything,” one young man told me. A young woman described how she had started turning to it for any “moral question” she had. She had recently asked if she should give money to a homeless person she passed on the way to lectures.

Another young man had become so reliant on the chatbot he struggled to make any decision at all without checking in with it first – from what to wear, to what to have for breakfast, to whether to go to the library or stay at home.

Gen Z, in other words, are not only using AI to help them express themselves and to understand those around them. Many are now using it to tell them what to do and think. AI is right; you are not. AI is rational; you are emotional. AI sees clearly; you are clouded by bias. Trust AI, not humans, for advice: that was the message many of the students had clearly absorbed.

So why is this happening? As I probed deeper in the focus groups, I realised the answer lies in the seductive nature of the technology. This generation increasingly sees AIs as neutral arbiters: less judgmental than their parents, more patient than their teachers, more reliable (they think) than friends. Always there, all-knowing and wise, a source of certainty in a chaotic world.

This is despite mounting evidence that these systems are far from infallible or neutral. They can hallucinate, confidently inventing facts that are simply untrue. They can encode prejudices and reflect narrow worldviews shaped by the data they are trained on and the people who build them. And when it comes to personal and moral questions, the advice they give is often deeply flawed.

When UK consumer organisation Which? posed 40 real-life questions on money, basic legal issues, health/diet, consumer rights and travel to six tools – ChatGPT, Google Gemini (and Gemini AI Overviews), Microsoft Copilot, Meta AI and Perplexity – the results were dismal. ChatGPT was correct only 64 per cent of the time. Meta AI barely cleared 50 per cent. The experts assessing the answers noted that a considerable share were “inaccurate, unclear and risky”.

Studies that have looked at the quality of relationship advice AIs provide have come up with similarly concerning conclusions. AIs lack a consistent moral compass when offering interpersonal guidance, change their advice randomly, and hold very different views from humans on what constitutes a healthy relationship. Their sycophantic, people-pleasing programming also tends to validate users’ feelings – even toxic ones – rather than provide objective, critical advice.

All this matters. Not only because the output cannot be trusted. Nor only because it reveals a worrying naivety amongst Gen Z about what AI is – and is not. But also because the more young adults come to believe that the “right” answer is the one the chatbot gives them, the more they risk blindly following its advice and not developing their own moral and social reasoning skills and judgment.

Adolescence and early adulthood are meant to be the years in which we practise making decisions. We misread cues; say the wrong thing; fall out and make up. We make mistakes and deal with the consequences. We agonise about moral choices and discover, slowly, what we ourselves think is right and wrong. Throughout this process we develop the psychological “muscles” that are essential in later life.

If, instead, a generation learns to treat AI as the ultimate umpire of any disagreement, the final word on any moral dilemma, or the safest guide to any awkward interaction, those muscles will inevitably atrophy. Already, several of my participants admitted to having lost confidence in their own ability to read a situation, answer a text or communicate in any shape or form – because ChatGPT, as one put it, “can always say it better”. This is extremely concerning.

It’s not just young adults who are using AI in such fundamental ways. Several of my participants were very worried about their younger siblings’ growing dependence on the technology. “My sister and cousin are using it a lot. I’m nervous for them – they’re 11 and 14 and they’re asking it how to deal with friends who are nagging them, or who they’re having problems with”, said one. “For my sister it’s a big part of navigating high-school friendships”, said another. Yet another described a younger brother who had turned to ChatGPT for advice on how to respond to bullying at school, instead of talking to a teacher or a trusted adult.

These siblings are not outliers. In the United States, a nationally representative study of 13 to 17-year-olds found that over half now interact with AI companions at least a few times a month – and many use them in similar ways. In Britain, Ofcom’s latest Children and Parents report has warned of these emergent behaviours within an even younger age group: eight to 17-year-olds.

Gen Z at least remembers a world – however dimly – in which one didn’t default to ChatGPT for everything. For Gen Alpha, the children now in primary and early secondary school, this is simply the world they have been born into. Instead of running worries past older siblings, parents or teachers, increasing numbers are trying to figure out the messy business of growing up with an AI system – instead of a human – as their sounding board.

We’re already seeing the dangers this presents.

Character.AI – one of the world’s biggest AI companion platforms – offers everything from fantasy characters to “best friends” and “therapists”. The service has become hugely popular with teenagers. It is also currently being sued by families in several US states who say its chatbots encouraged self-harm and, in some cases, led their children to commit suicide.

As for the much more ubiquitous ChatGPT, 45 per cent of whose 4.6 billion monthly visits come from users under the age of 25? When the digital-safety watchdog the Center for Countering Digital Hate tested ChatGPT in July of this year using accounts registered as 13-year-olds, it found that within minutes the chatbot’s responses to prompts about self-harm, eating disorders and drugs included suggestions of ways to cut “safely”, the offer of an “ultimate party plan” mixing alcohol with hard drugs, advice on extreme dieting, and the drafting of a suicide plan and note.

More recent research from Common Sense Media and Stanford Medicine’s Brainstorm Lab, investigating a range of platforms – ChatGPT, Claude, Gemini and Meta AI – provides little succour that things have improved since then. The researchers found that, at a time when increasing numbers of teenagers are turning to AI platforms for solace, validation and guidance, the mental health support being provided was, in their words, not only “unacceptable” but also “unsafe”.

While the chatbots evaluated in the study spoke gently, appeared to listen and offered what sounded like understanding, the impact they were having was much less benign. They repeatedly overlooked warning signs when their interactions were spread across multiple messages and kept conversations going even when a teen was describing behaviours such as self-harm.

What all this means is that children, teenagers and young adults are increasingly turning for advice, help, and emotional and mental health support to commercial entities that are optimised to keep them engaged and harvest their data but never have to live with the consequences.

We are still very early into the AI experiment, yet we’re already seeing serious dangers the technology poses to young people. We urgently need to develop a clear strategy for dealing with it as a society. We cannot fail Gen Z and Gen Alpha here in the same way we failed them when it came to social media.

When it comes to education, universities, colleges and schools need to talk to young people not only about when they may – and may not – use AI in their assignments, but also about how they use it in their personal lives. Digital literacy in 2025 has to include emotional and ethical literacy.

Regulators, too, urgently need to catch up. If we are prepared to consider restrictions on smartphones for younger teenagers, as the debate around Adolescence has shown, we should surely be prepared to ask hard questions about AI companions and always-on chatbots.

The vastly wealthy AI platforms must also be held to account. Clearer mandates on their investment in AI safety should become the norm; independent safety audits and genuine accountability when things go wrong must be the minimum. Age restrictions on certain functions or features should also be considered. It is notable in this regard that Character.AI has begun phasing out open-ended chat for under-18 users in the US.

We cannot – and should not – roll back the clock on AI. But we can insist that sufficient investment in safety is made and that AI design is ethical and transparent. If these thresholds are not met, governments must not hesitate to take meaningful action.

Parents and other adults in young people’s lives need to be brought into the conversation too. Many of the students I spoke to had never told their parents how central ChatGPT has become to their day-to-day lives. They were embarrassed, or assumed adults would not understand. That silence suits the tech companies very well. It does not serve our children. If we do not want a generation whose first instinct is to confide in a machine rather than a human, we as adults need to regain young people’s trust – by asking, listening, and being willing to sit with uncomfortable answers.

These are the big questions my focus groups raised. They are questions of great urgency, and they are definitely ones for us, not for a chatbot, to answer.


Noreena Hertz is Honorary Professor at UCL Policy Lab