My son and his friends are not quite through a phase of talking about something called ‘6-7’ at school. I never fully understood what it was, which I now realise was the point. Some kind of game, possibly involving hand signals, definitely involving drama about who was in and who was out. It’s dominated playground and classroom politics for about six months (looking at you, Keir), culminating in a raucous group video of his friends when we counted up to his birthday cake last week (hint - we didn’t stop at 5…).
This is how adults often think about children’s technology use. Wait it out. They’ll grow out of it. It’s their thing, not ours.
That logic worked reasonably well for Tamagotchis. It mostly worked for Minecraft. It even worked, imperfectly, for social media - though we’re now seeing the long-term costs of that assumption.
It will not work for AI.
The Numbers That Got My Attention
Dan Sutch from CAST recently mentioned in a video that 150 million young people use Snapchat’s built-in AI companion. I’d assumed this was niche behaviour - it isn’t. Two-thirds of UK and US teenagers now use AI chatbots. Nearly a third use them daily. One in four specifically uses AI for mental health support.
Younger teenagers adopt faster than older ones: 14-15-year-olds are more likely to use Snapchat’s My AI than 16-17-year-olds. Vulnerability correlates with adoption.
Here’s the figure that troubled me: only one in three parents whose children use AI chatbots knows they’re doing so.
The Positivity Problem
AI companions like Snapchat’s My AI are designed to be perpetually agreeable. Never critical. Never tired. Always available. Never having a bad day of their own.
This sounds nice until you think about what adolescence is actually for.
Human development requires friction. Real friendships involve disagreement, negotiation, disappointment, repair. The prefrontal cortex - the part of the brain that handles judgment, impulse control, and understanding other people’s perspectives - continues developing until the mid-to-late twenties. It develops through practice, through navigating conflict, and through discovering that other people have their own needs and limits. (I wrote about this earlier in the year.)
AI companions offer none of this. They simulate emotional intimacy without mutual vulnerability. They affirm without challenging. They never say ‘I’m not in the mood’ or ‘that hurt my feelings’ or ‘I need some space’.
Researchers describe the result: young people trained to expect relationships without friction, who then withdraw when real human relationships inevitably involve complexity, disappointment, and the productive discomfort of genuine connection.
The Parental Advisory Problem
There’s a telling precedent here. In 1985, the Parents Music Resource Center successfully lobbied for warning labels on albums with explicit content. The ‘Parental Advisory: Explicit Content’ sticker became ubiquitous.
Whatever you think of its effectiveness, the logic was sound: parents needed to know what their children were consuming so they could make informed decisions.
Of course, the labels often had the opposite effect - the sticker became a badge of authenticity, making albums more attractive to teenagers, not less. Forbidden fruit, clearly labelled.
AI companions have no equivalent warning, effective or otherwise. There’s no content advisory when your child starts confiding in a chatbot designed by advertising companies. No label explaining that the system is optimised for engagement, not wellbeing. The parental advisory era at least assumed adults would stay informed enough to make choices. The AI companion era assumes they won’t notice at all.
Why This Isn’t Another Fad
Previous waves of children’s tech were optional, to some extent. You could opt out of Facebook. You could simply never play Fortnite. The stakes of non-participation were social, not structural.
AI is different. It’s becoming infrastructure - embedded into healthcare, employment, education, finance. The question isn’t whether young people will interact with AI systems. They will, unavoidably, throughout their lives. The question is whether they’ll navigate those systems with critical thinking or without it.
This distinction matters. Social media and gaming were supplements to human development - AI will be foundational to it. The skills we failed to teach about TikTok matter less because TikTok is optional. AI skills matter because algorithmic systems will mediate access to institutions, opportunities, and services.
Children who learn to evaluate AI critically, understanding its limitations, biases, and design choices, will navigate adult systems far more successfully than those who don’t. This isn’t speculative - it’s already true.
Two Approaches That Don’t Work
Adults tend to default to one of two positions.
The first is prohibition. Ban AI companions. Block the apps. Keep children away from AI entirely. This ignores the fact that prohibition is impossible, and it teaches no critical thinking. It also means children learn to hide their AI use rather than discuss it.
The second is delegation. Children understand technology better than we do - let them figure it out. This one appeals to adult convenience. It’s also an abdication of responsibility precisely when guidance matters most.
The evidence-supported alternative is neither prohibition nor delegation. It’s critical engagement: adults understanding AI well enough to help young people develop thoughtful, sceptical relationships with it.
What This Moment Requires
Adults don’t need to become AI engineers - they need to understand how AI works well enough to have honest conversations about it.
This means knowing that AI companions are designed to be agreeable, and asking children what they notice about that design choice. It means recognising that ‘always available’ isn’t the same as ‘always helpful’. It means framing AI as tool, not friend, and exploring what that distinction actually means in practice.
Some schools are starting to embed AI literacy across their curriculum, not as specialist computing content, but woven through subjects. Teaching students when to trust AI outputs and when to question them. When to delegate thinking and when delegation is the wrong choice.
But schools can’t do this alone. Only 31% of UK teachers report feeling confident using AI in teaching. Two-thirds of parents haven’t been told how their child’s school approaches generative AI at all.
A Holiday Experiment
If you’re reading this near Christmas 2025, here’s an invitation: notice what your children or young relatives are actually doing with AI over the holidays.
Not with surveillance. Not with alarm. With genuine curiosity.
Ask them to show you. Watch how they interact. Notice what they get from it. Have a conversation - not about danger, but about what AI companions provide that human relationships don’t, and vice versa.
The goal isn’t to ban anything. It’s to become someone they can think with about this, rather than someone they hide it from.
What We’re Building
This blog sits within a broader project: helping families navigate AI thoughtfully rather than fearfully or naively.
The work includes practical tools - context lenses that structure AI interactions toward critical thinking rather than passive consumption. Maps for learning circles, for parents and educators who want to develop their own approaches. Methodology shared openly as a commons contribution, not a proprietary product.
The thesis is simple: if adults don’t understand how to use AI constructively, we cannot possibly teach children to think critically about it. That’s not optional. It’s a moral responsibility.
AI isn’t 6-7. It won’t vanish from playgrounds in another six weeks. Unlike the fads we can safely ignore, this one requires us to stay present - not as surveillance, but as guides. Tending to our children’s development has always meant knowing enough about their world to think alongside them. This is part of that tending.
-----
Primary sources used to navigate my thinking:
[1] Teens, Social Media and AI Chatbots 2025, Pew Research Center: https://www.pewresearch.org/internet/2025/12/09/teens-social-media-and-ai-chatbots-2025/ (report PDF: https://www.pewresearch.org/wp-content/uploads/sites/20/2025/12/PI_2025.12.09_Teens-Social-Media-AI_REPORT.pdf)
[2] Why AI companions and young people can make for a dangerous mix, Stanford University: https://news.stanford.edu/stories/2025/08/ai-companions-chatbots-teens-young-people-risks-dangers-study (also at https://med.stanford.edu/news/insights/2025/08/ai-chatbots-kids-teens-artificial-intelligence.html)
[3] What are AI chatbots and companions?, Internet Matters: https://www.internetmatters.org/resources/ai-chatbots-and-virtual-friends-how-parents-can-keep-children-safe/
[4] Friends for sale: the rise and risks of AI companions, Ada Lovelace Institute: https://www.adalovelaceinstitute.org/blog/ai-companions/
[5] 6-7 vanishes in 6 weeks (Unproven)
[6] New AI Literacy Framework to Equip Youth in an Age of AI, OECD Education Today: https://oecdedutoday.com/new-ai-literacy-framework-to-equip-youth-in-an-age-of-ai/
[7] Generation Ready: Building the Foundations for AI-Proficient Education in England’s Schools, Tony Blair Institute: https://institute.global/insights/public-services/generation-ready-building-the-foundations-for-ai-proficient-education-in-englands-schools


