Series: Can Childhood Survive Social Media? →
Part 3 of 4 | Reading time: 5 minutes
So far:
Week 1: Government bans protect all children but require surveillance infrastructure
Week 2: Parental opt-out protects your child but leaves vulnerable children exposed
This week: What if we’re asking the wrong question? Instead of debating who gets access, what if we made platforms themselves compatible with child development?
The Third Approach
Rather than banning children or relying on parents, we could regulate social media platforms themselves. If platforms are causing harm, change the platforms: replace algorithmic amplification with chronological feeds, infinite scroll with session limits, and unrestricted messaging with age-gated DMs.
Target the design, not access.
What Countries and Platforms Are Trying
TikTok launched new features in January 2025: “Time Away” lets parents block access during school hours, teens get sleep reminders, under-16s receive muted notifications overnight, and Family Pairing enables parental oversight.
Roblox implemented mandatory age verification: Facial recognition is required for chat access, parents can block specific friends and games, and detailed time tracking shows which experiences children use. Roblox reported ~13,000 instances of child exploitation in 2023, which makes the urgency clear.
New York’s “Stop Online Predators Act” took effect January 2025: Platforms must use age verification, chat functions default to off for kids, profiles are set to private, and financial transactions require parental approval.
The UK chose a different path: Rather than rushing to ban, the Department for Science, Innovation and Technology commissioned eight universities (York, Cambridge, Bristol, Oxford, and others) to study what actually works. Technology Secretary Peter Kyle says the goal is to build a trusted evidence base for future action, focusing on establishing causal relationships rather than just correlation (more on this below).
The Appeal Is Obvious
Changing the platforms themselves addresses root causes. If algorithmic amplification harms children, ban algorithmic amplification for children. If infinite scroll is addictive, mandate breaks. If predators use DMs, restrict messaging.
It works regardless of parental engagement. Features apply to all child users, not just those with attentive parents, which protects vulnerable children whose parents won’t protect them.
It requires no surveillance infrastructure for access control. You’re regulating what platforms do, not who can use them.
What this promises: safer social media where children can connect and learn without the most harmful features; platforms that respond to regulatory pressure by innovating on safety; market competition that drives improvement as platforms that better protect children gain users.
But Here’s the Problem
Remember Bluesky CEO Jay Graber’s argument from Week 1? Meta employs thousands across trust and safety, legal compliance, and policy teams. Bluesky operates with dozens of staff.
When regulations require age verification, content filtering, parental controls, and compliance documentation, the costs work differently for giants and startups. Meta spreads these costs across billions in revenue. A startup with venture funding and no revenue yet cannot.
Platform regulation costs don’t scale with users. Whether you have 10,000 or 10 million users, you need legal teams, technical infrastructure, trust and safety staff, compliance documentation, age verification systems. Fixed costs hit small players hardest.
Why competition matters: Graber argues competition drove Meta to add chronological feeds, something users and regulators requested for years. Meta ignored them until users started migrating to rivals. If only giants can afford to operate, competitive pressure disappears. If parents can’t switch to alternatives, platforms have less incentive to respond to concerns.
The irony: We’re implementing safety regulations to protect children from platforms optimised for engagement, but if compliance costs make alternatives impossible, children remain stuck on the platforms causing the problems we’re trying to solve.
What Regulation Doesn’t Solve
The displacement problem persists. Even with perfect safety features (chronological feeds, no infinite scroll, restricted DMs, content filtering), social media still displaces reading, outdoor play, face-to-face friendship, boredom, focused attention, and sleep.
A 13-year-old spending two hours on “safe” TikTok isn’t spending two hours reading, playing outside, or developing unmonitored social skills. The harm isn’t just what they see, it’s what they’re not doing instead.
The mental health problem remains. Even with chronological feeds and content filtering, platforms still enable constant social comparison, still create pressure to perform for an audience, still expose children to unrealistic beauty standards and curated perfection. Research shows it’s not just what content children see, it’s the fundamental dynamic of performing your life for algorithmic validation.
That study of 10,000+ children showing that smartphone ownership at age 12 was linked to 31% higher depression rates by age 14? Those harms don’t disappear with chronological feeds.
The business model stays the same. Even “safer” platforms still need to collect data (how else to personalise the experience?), maximise engagement (how else to generate revenue?), scale to massive size (network effects require it), and keep users returning (otherwise the business fails).
You can regulate away the most egregious harms, but you can’t regulate away surveillance capitalism while keeping social media as it currently exists.
The permanence problem continues. Safer platforms still create permanent records of temporary phases. Your 14-year-old’s interests, friendships, and mistakes are all archived, potentially discoverable, following them into adulthood. Regulation can protect privacy better, but it can’t change the fact that platforms exist to collect and store information.
The Regulation Paradox
Every safety feature requires infrastructure. Age verification needs identity systems, content filtering needs monitoring, parental controls need data collection, compliance needs tracking.
We’re building surveillance to prevent surveillance. Platforms harm children by collecting data and optimising behaviour, and regulations protect children by requiring platforms to collect more data and build better tracking systems.
The infrastructure doesn’t disappear when your child turns 16; it becomes permanent and can be repurposed.
The UK’s Methodical Approach
While Australia, France, and Egypt rush to implement bans, the UK is moving deliberately. The research programme mentioned above asks how to study the relationship between smartphones, social media, and children’s wellbeing.
Not the impacts themselves, but how to research the impacts.
The project will review existing research and identify gaps, determine which methods actually work (given how fast technology changes), plan studies for the next 2-3 years, and focus on vulnerable groups (LGBTQ+ youth, children with special needs).
Technology Secretary Peter Kyle admits current research isn’t robust enough to support policy decisions. This is meta-research: studying how to study the problem. It shows the UK is moving carefully rather than rushing to ban, but it also means that while researchers refine their methods, children remain exposed in the meantime.
Is methodical caution responsible governance or dangerous delay?
The Tradeoff
What platform regulation does: reduces specific harms through design restrictions; protects children whose parents won’t act; addresses root causes rather than restricting access; applies to every child on the platform regardless of family engagement.
What it doesn’t do: change the fundamental business model; solve the displacement problem (safe social media still isn’t outdoor play); avoid imposing compliance costs that entrench giants while crushing startups; prevent surveillance infrastructure from becoming permanent; address the mental health harms of social comparison and performance anxiety.
Social media platform regulation can reduce specific harms, but it can’t change the fundamental business model, it can’t solve the displacement problem, and it might eliminate the competition that drives innovation.
If government bans require surveillance, parental opt-out only works for engaged parents, and platform regulation can’t change the fundamental design, what actually works?
Next week: Part 4 explores why no single approach solves everything and what your options are. Understanding what each approach actually does helps you decide what’s right for your family.
Read Part One: When Governments Ban Social Media for Children →
Read Part Two: Why Saying No to Social Media Only Protects Your Child →
Related stories:
- Bluesky CEO: Age verification laws entrench big tech dominance
- UK commissioning research on how to study smartphone impacts
- TikTok and Roblox roll out new parental control features
- Electronic Frontier Foundation: Congress wants to hand your parenting to Big Tech
- UK Government: Research to understand impact of smartphones on young people