When Australia banned under-16s from social media on 10 December 2025, the world watched to see what would happen. The headline story since has been whether the ban is working. But the more useful story is what Australia did next. In March 2026, the rules were amended to put the algorithm itself at the centre of the law: the country that wrote the world’s hardest age ban now keys the rule to design features instead.
What the amendment actually did
The March 2026 amendment to Australia’s Online Safety (Age-Restricted Social Media Platforms) Rules sounds technical, but the change is significant.
Under the new definition, a platform is age-restricted if it allows users to interact, allows users to post, and has “an account-based recommender feature (algorithms) and/or at least one of the following design features while users are logged in: feedback features (such as displaying the number of ‘likes’ or ‘upvotes’ a user has received), time-limited features (such as disappearing ‘stories’).”
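Read as a predicate, the amended definition has a clear shape: interaction plus posting, combined with a recommender and/or at least one of the listed design features. A minimal sketch in Python (the field names here are my own shorthand, not terms from the Rules):

```python
from dataclasses import dataclass

@dataclass
class Platform:
    allows_interaction: bool      # users can interact with each other
    allows_posting: bool          # users can post material
    has_recommender: bool         # account-based recommender feature
    has_feedback_counters: bool   # visible 'like'/'upvote' counts
    has_ephemeral_content: bool   # disappearing 'stories'

def is_age_restricted(p: Platform) -> bool:
    """Informal reading of the March 2026 definition:
    in scope if the platform supports interaction and posting,
    and has a recommender and/or a listed design feature."""
    design_trigger = (
        p.has_recommender
        or p.has_feedback_counters
        or p.has_ephemeral_content
    )
    return p.allows_interaction and p.allows_posting and design_trigger
```

On this reading, a platform with interaction and posting but none of the triggering features falls outside the ban, which is exactly the sharpening the amendment made.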
What makes a platform fall under the under-16 ban isn’t social media as a category but a set of design features: the recommender algorithm, the like counters, the disappearing stories.
Australia hasn’t loosened the ban so much as sharpened what triggers it: a platform with a recommender algorithm is in scope, and a platform without one isn’t. The world’s first age ban is now, in practice, a design-features ban.
Why this matters
Every country watching Australia is moving the same way. The UK consultation closing on 26 May asks specifically about algorithmic restrictions and addictive design features. The New Mexico trial that began on 4 May is asking a court to order Meta to remove infinite scroll and push notifications for children. The European Parliament voted in November 2025 to ban infinite scroll, autoplay and the commercial exploitation of minors. France passed similar provisions in January. Denmark, Norway, Spain and Slovenia are drafting legislation in the same shape.
All of them are converging on the same set of features. Recommender algorithms. Infinite scroll. Autoplay. Like counters. Push notifications. Banning a 14-year-old from Instagram is hard to enforce. Banning Instagram from showing a 14-year-old an algorithmic feed is something a regulator can actually check.
The story this tells
The shift Australia made in March points to where the debate is actually going. Children aren’t harmed by the existence of a social media account, they’re harmed by what’s inside it: the feed that never ends, the algorithm that learns what keeps them watching, the notification that pulls them back when they put the phone down, the like count that turns peer feedback into a slot machine. Take those out and what’s left is closer to the social media adults remember from a decade ago.
What this means for you right now
The next chapter of social media regulation is going to be about features, not age limits.
You can’t usually remove the algorithm, but you can remove the features that sit alongside it. Turn off all push notifications for social apps on your child’s phone (on most devices, under Settings → Notifications).
Use screen-time controls to set a hard daily limit on the apps that use infinite scroll. YouTube now lets parents set the Shorts feed to zero minutes on supervised teen accounts, and Instagram’s Teen Account protections include time alerts. The features the regulators are targeting are the same features you can quietly switch off tonight.
The regulators are moving in the same direction the evidence on design features has pointed for years. The shift is happening.
Sources:
eSafety Commissioner — Social media age restrictions guidance, amended March 2026
Department of Infrastructure — Social media minimum age definition
TechPolicy.Press — Early lessons from Australia’s teen social media ban, April 2026
UK Government — Growing Up in the Online World consultation