Build the internet young people are asking for — instead of simply banning them from it
Public policy debates about young people and the internet are dominated by loud voices calling for bans on access. Those voices, however, seldom belong to teenagers themselves.
That is why a Google-commissioned study, delivered by the youth specialists Livity, is useful. It puts the voices of more than 7,000 teenagers from across Europe at its core, asking what actually helps them learn, connect and stay well in online spaces.
The topline finding is simple but challenging: adolescents see technology as a force for good when it is designed with people in mind. They want tools that are accessible, empowering and human-first, not features that replace people or create harm.
Crucially, they want a say in how those tools are built, not to be cut off from online spaces. In the report’s words, teens are “not just users… but its future architects.” As a psychologist and online safety professional, that resonates with me: safety is not just flipping a switch; it is a design choice and a shared responsibility.
On AI, the report is clear: AI is already part of how young people learn, so guidance matters more than gatekeeping. Across the sample, the majority of teenagers who have used AI report using it at least weekly for schoolwork or creative tasks. They say AI makes learning more engaging (50%), explains difficult topics (47%) and provides instant feedback (47%). Yet over a quarter (28%) believe their schools have not approved any AI tools, and another 13% are not sure what is allowed. In other words, they are not asking adults about the basics of AI; they are asking us to help them use it well.
The fix is not blanket prohibition. It is clear, age-appropriate guardrails: spelling out what is OK, what is not, how to cite and how to verify. That is what teens are implicitly asking for, and what the report’s recommendations explicitly call for through curriculum-level AI and media literacy, age-appropriate experiences and harmonised standards that preserve access to information while protecting younger users.
To understand how teens learn and explore, look to video. Nearly 84% say they watch educational or how-to videos at least a couple of times a week, and over a third watch daily. Video helps them pick up new skills, understand news and current events, and encounter perspectives beyond their own.
Many teens judge personalised recommendations, often criticised (and sometimes rightly), as useful for finding what genuinely interests them, especially when combined with active search and content shared by friends. The point is not that feeds are without pitfalls, but that for the majority of teens, feeds are where they learn. They are their classrooms, and teens are asking for smarter, safer ones.
Young people are not naive about risk. They worry about misinformation and want help judging AI-generated content. They find value in recommendations but do not want to be nudged into rabbit holes. Above all, they want clarity and fairness: privacy settings that are easy to use, policies that match their developmental stage, and tools that leave no one behind.
From Save the Children’s perspective, the report’s recommendations align with a rights-based approach: protection and participation travel together. We should be sceptical of one-size-fits-all bans that sever access without addressing the design features that amplify harm.
The evidence here shows teens already use AI and video to learn and create; the real work is making those environments safer by default and raising the floor of digital literacy. There are three practical moves we can make this school year:
- Require clearer, default-on safety and privacy controls for teens across platforms, including straightforward reporting, nudges that interrupt when necessary, and labelling for AI-generated content.
- Bake AI and media literacy into timetables, so that young people learn how to prompt well, cross-check what they get back, and spot synthetic content.
- Back parents to be the first line of support. National programmes that demystify tools, scams and reporting, and that normalise asking for help, will meet teens where they already turn when things go wrong.
If we listen to young people, the goal is not less internet but a better one.