Series: Can Childhood Survive Social Media? | Part 2 of 4 | Reading time: 5 minutes
Last week we examined government bans, which protect all children but require building surveillance infrastructure that persists beyond age 16 and could be repurposed for other uses. This week: what if the solution doesn’t require government intervention at all?
The Alternative
Don’t give your child social media access. No age verification needed, no surveillance infrastructure built, true privacy preserved. Simple, effective, and completely protective for your child.
Australia’s approach requires everyone to prove their age. Your child, you, every user: all surrender identity data to platforms designed to collect it. Parental opt-out requires nothing of the kind. No data surrendered, no verification needed, no permanent record created. Your child turns 16, 18, 21 and still has privacy unless they choose otherwise.
Government bans require platforms to build age verification systems that persist, can be repurposed, and provide tools for tracking everyone. Parental choice requires no new infrastructure. You just don’t give them access.
The protection is real, not “protected until 16 then released into surveillance systems” but actually protected for as long as you choose.
What Childhood Actually Needs
Before the internet, childhood had certain protections that social media eliminates by design.
Mistakes disappeared. You did something embarrassing at 13, and by 16 people had forgotten. You could reinvent yourself. You had privacy, not from parents but from corporations, algorithms, future employers, and your own 25-year-old self looking back at 13-year-old you. Social comparison was limited to your immediate peer group. You didn’t see curated highlight reels of thousands of teenagers’ “perfect” lives, and algorithms didn’t learn what made you feel inadequate and serve you more of it.
There was boredom. Actual boredom, not three seconds without stimulus but long, uncomfortable stretches where you had to figure out what to do with yourself. Conversations had no permanent record. Friendships evolved without A/B testing. Identity formation happened without algorithmic interference. You could move to a new school, a new city, and start over because nobody knew your history.
Social media, by design, eliminates all of this. Even the “good” platforms, even with parental controls, even “just for family connections.” The business model requires surveillance, which means you cannot have social media without surrendering what childhood needs.
Age restrictions don’t give your child privacy, they just delay when they surrender it. Platform safety features don’t protect development, they make surveillance slightly less harmful while maintaining it. “Safer” alternatives don’t solve the problem because they’re still designed to monetise attention and collect data. The only way to preserve what childhood needs is to not participate.
The Pushback You’ll Get
“But everyone else has it.”
Not actually true, though it’s what children say. EFF data shows 10% of parents in the US are saying no to social media for under-13s. The pressure is real, but you’re not alone in resisting it.
“They need it for social connection.”
Humans connected for millennia without handing corporations behavioural data. Phone calls, face-to-face friendship, and group activities all provide connection, and none of them requires surveillance.
“They’ll be left out.”
They’ll also have privacy, mistakes that disappear, identity formation without optimisation, the ability to reinvent themselves, and development without algorithmic interference. Weigh those against being included in group chats.
“It’s not realistic in 2026.”
Neither is privacy if social media participation is required. You’re choosing between two options: privacy AND social media (impossible) or privacy WITHOUT social media (difficult but achievable).
The Limitations
The problem: it only works for your child.
You protect your 13-year-old, but their classmate’s parents don’t. That child, equally deserving of protection, remains exposed. And if 10% of US parents are saying no, then 90% are giving their children social media access with parental permission. Most parents aren’t restricting access, they’re actively granting it.
Your child doesn’t have Instagram, but the five other kids on the bus do. Your child sees the content anyway, sees the interactions, experiences the social dynamics, feels the exclusion. You’ve protected them from direct access, but you haven’t protected them from living in a world where social media exists.
The children who most need protection—those whose parents are overwhelmed, under-resourced, or simply make different choices—are precisely the children whose parents won’t opt out.
The collective action problem: Social media has network effects. Platforms only work at scale, and the value comes from everyone else being there, which creates pressure when children say “everyone I know is on Instagram.” Individual families can resist, but the children whose parents won’t or can’t opt out remain completely unprotected, which is why governments intervene. Not because individual action is impossible, but because relying on it leaves too many children exposed.
Parental opt-out works perfectly for children with parents who are informed about digital harms, have time to research and implement boundaries, can withstand social pressure, and will maintain boundaries as children age. Potentially parents like you, who are reading this instead of scrolling on social media. For everyone else’s children: nothing.
What Support Exists
You don’t have to do this alone. Parent groups exist specifically to build collective support for delaying smartphones and social media, reducing the “everyone else has it” pressure by helping families coordinate.
In the US: Wait Until 8th encourages parents to pledge not to give their children smartphones until at least 8th grade (around age 13-14), creating community support so children aren’t isolated by the choice.
In the UK: Smartphone Free Childhood coordinates parent groups delaying smartphones until secondary school, with thousands of families joining local WhatsApp groups to make the decision collectively. Some UK schools coordinate parents at the school level, where entire year groups delay smartphones together.
In Australia: Waitmate connects parents making similar choices.
These groups don’t prescribe what’s right for every family, but they help parents find each other and make coordinated choices. Even if you’re not in these countries, seeing how parents are organising collectively might give you ideas for your own community.
What This Approach Does
Preserves your child’s privacy completely and protects them from documented mental health harms without building surveillance systems.
Practical options include:

- Basic phones: call and text only, no internet or apps. Availability varies by country, but standard feature phones from Nokia, Alcatel, and other manufacturers exist in most markets.
- Dumb phones designed for children: calls, texts, and sometimes GPS tracking for safety, but no social media or browser.
- Minimalist phones: basic features without social media or algorithmic content.
- Smartphones without social media apps: communication and emergency functions only.
The specific devices available vary significantly by country. What works in the US or UK may not be available in other countries. Research options in your specific market, focusing on the category of device (basic, minimalist, restricted smartphone) rather than specific brands.
Your child gets communication without permanent records, entertainment that isn’t algorithmically optimised, actual boredom, and privacy as the default.
What It Doesn’t Do
Change anything for children whose parents make different choices. You’re protecting your child from direct access, but you’re not protecting them from exposure via other children’s devices, you’re not protecting children whose parents won’t make the same choice, and you’re not solving the structural problem that makes social media incompatible with child development.
The Dilemma
If parental choice worked at scale, we wouldn’t need government intervention. But even with parent groups organising collective support, 90% of under-13s in the US have parental permission to be on platforms, which raises the question: maybe platform regulation is the answer? Make social media itself safer, regardless of who chooses to access it?
Next Week
Instead of debating who gets access, what if we made platforms themselves compatible with child development? Chronological feeds, no infinite scroll, restricted DMs, content filtering, parental oversight. Can we regulate platforms into being safe for children?