Australia’s Social Media Ban Results – What’s Actually Working?

When Australia’s government announced in January that platforms had closed 4.7 million accounts belonging to users under 16, Prime Minister Anthony Albanese called it proof the ban was working. Two months into the world’s first comprehensive social media age restriction, the numbers are impressive. Whether they mean what the government claims is another question.

What the numbers show

Since the ban took effect on 10 December 2025, social media companies have deactivated or restricted 4.7 million accounts across Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, Twitch and Kick. Meta alone removed over 550,000 accounts. Snapchat locked 415,000. The platforms face fines of up to AU$49.5 million if they don’t take “reasonable steps” to prevent under-16s from holding accounts.

There are roughly 2.5 million Australians aged 8 to 15, so the 4.7 million closed accounts work out to just under two per person in that age group. That makes sense: most young people maintain profiles across multiple platforms. What’s less clear is whether closing those accounts has made children safer or simply shifted where they spend time online.
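The back-of-envelope arithmetic is easy to check. A minimal sketch, using the two figures cited above (the variable names are our own labels, not official statistics):

```python
UNDER_16_ACCOUNTS_CLOSED = 4_700_000   # accounts deactivated or restricted since 10 Dec 2025
AUSTRALIANS_AGED_8_TO_15 = 2_500_000   # rough population estimate cited above

# Accounts closed per person in the affected age bracket
accounts_per_teen = UNDER_16_ACCOUNTS_CLOSED / AUSTRALIANS_AGED_8_TO_15
print(f"{accounts_per_teen:.2f} accounts per person aged 8-15")  # 1.88, i.e. roughly two
```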

What platforms are warning

The tech companies aren’t celebrating their compliance. They’re warning that the ban creates new problems without solving the one it targets.

Snap CEO Evan Spiegel published an op-ed in the Financial Times on 18 February arguing that compliance doesn’t guarantee Australian teens will be safer. “The new law regulates only select platforms while leaving thousands of other apps unregulated,” Spiegel wrote, “meaning it may push teens towards less safe alternatives. When teens lose access to their preferred messaging channel they aren’t going to stop communicating—they are going to find other ways to talk, through lesser-known apps that offer fewer safety protections.”

Meta made similar arguments after removing its 550,000 accounts, noting that teens use more than 40 apps per week, many of which fall outside the ban’s scope. Both companies advocate age verification at the app store level rather than in each individual app, arguing this would give uniform enforcement while limiting how many services a user’s personal information must be shared with.

There’s also a technical reality that the political announcements don’t address. Australia’s own government trial found that age estimation technology is “highly imperfect and often off by two to three years, particularly when it’s applied to younger users.” At scale, some under-16s will slip through while some over-16s get incorrectly locked out.
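To see why an error band of two to three years guarantees mistakes in both directions at a hard cutoff of 16, consider a minimal sketch. The ±2.5-year band is an illustrative midpoint of the trial’s “two to three years” figure, and the function names are our own; this is not any platform’s actual verification method:

```python
ERROR_YEARS = 2.5  # illustrative midpoint of the "two to three years" error cited in the trial
CUTOFF = 16        # the ban's age threshold

def could_slip_through(true_age):
    # Worst case: the estimator overshoots by the full error band,
    # so an under-16 reads as at or above the cutoff.
    return true_age + ERROR_YEARS >= CUTOFF

def could_be_locked_out(true_age):
    # Worst case: the estimator undershoots by the full error band,
    # so an over-16 reads as below the cutoff.
    return true_age - ERROR_YEARS < CUTOFF

print(could_slip_through(14))   # True: a 14-year-old can be estimated as 16.5
print(could_be_locked_out(17))  # True: a 17-year-old can be estimated as 14.5
```

Any estimator with that error profile, applied to millions of users clustered around the threshold, will produce both failure modes at scale; the only question is the ratio between them.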

The migration that didn’t materialise

In the days immediately after the ban took effect, downloads of alternative platforms surged. Lemon8, a photo and video sharing app owned by TikTok’s parent company ByteDance, shot to the top of Apple’s App Store. Downloads of Yope, a private photo-sharing platform, jumped 251%. Critics argued this proved the ban would simply push teens to less regulated platforms.

But the surge didn’t last. eSafety Commissioner Julie Inman Grant reported in January that her office saw “a spike in downloads of alternative apps when the ban was enacted but not a spike in usage.” By late January, Yope had fallen to #250 on the iOS download charts in Australia. Lemon8 showed no sustained increase in activity. The feared mass migration never materialised.

The government moved quickly anyway. The regulator asked ByteDance and Yope to self-assess whether they should fall under the age restrictions. Lemon8 has since confirmed it meets the criteria and now enforces a minimum age of 16. Communications Minister Anika Wells made clear the government will continue monitoring and expanding the list of restricted platforms as needed.

Legal challenges and workarounds

Two separate cases are proceeding in Australia’s High Court challenging the ban on constitutional grounds. The Digital Freedom Project argues the law violates implied freedom of political communication. Reddit is pursuing its own challenge, claiming that “a person under the age of 16 can be more easily protected from online harm if they have an account, being the very thing that is prohibited.”

The government’s response has been unequivocal: “We will not yield to intimidation. We will not be deterred by legal disputes.”

Meanwhile, reports from teenagers on the ground paint a mixed picture. Some under-16s can still access platforms using VPNs to mask their location, borrowing parents’ accounts, or lying about their age where verification isn’t robust. Others report feeling isolated from friends or seeing different algorithmic content when browsing without accounts.

The eSafety Commissioner maintains realistic expectations about enforcement. “We don’t expect safety laws to eliminate every single breach,” Inman Grant said. “If we did, speed limits would have failed because people speed.”

What the world is learning

Countries across Europe and Asia are watching Australia’s experiment closely. France, Malaysia, Indonesia, Germany, Italy, Greece and Spain are all considering similar restrictions. UK Prime Minister Keir Starmer called for an Australian-style ban in January. A Fox News poll found 64% of US voters favour comparable measures.

Australia’s implementation provides evidence these countries haven’t had. Large-scale enforcement is technically feasible. The surge to alternative apps appears to have been temporary rather than sustained. And the legal challenges will test whether democratic constitutions can accommodate age-based restrictions on digital platforms.

But there’s a gap between what the numbers show and what they mean. Research cited by Spiegel, published in JAMA Pediatrics, found that moderate social media use actually supports adolescent wellbeing, especially for Australian teens. The study concluded that “the optimal approach appears to be thoughtful engagement and moderation, not total prohibition.”

Whether removing accounts while leaving thousands of other apps and non-account browsing available constitutes thoughtful engagement is debatable.

Implementation reality

Australia has proven that age restrictions can be enforced at scale. The 4.7 million closed accounts demonstrate platforms will comply when facing substantial fines. The swift regulatory response to alternative apps like Lemon8 shows the government is prepared to expand enforcement as needed.

What remains uncertain is whether the ban achieves its stated goal of protecting children from online harms. The eSafety Commissioner acknowledged in January it’s too early to declare full compliance, and some underage accounts remain active. Whether closing accounts reduces exposure to harmful content, improves mental health outcomes, or changes offline behaviour won’t be known for years.

