🔑 Key Points
- Industry codes tied to Australia’s Online Safety Act will require age checks from Dec 2025.
- Platforms must use facial age estimation, ID verification, or account history checks.
- Under-16s will face social media bans; search and chat apps must filter for minors.
- Critics worry about privacy risks, big-tech dominance, and the loss of online anonymity.
- Penalties for non-compliance could reach AU$49.5 million.
Australia has announced landmark changes mandating age verification for digital platforms, taking effect from December 2025. Under new codes of conduct linked to the Online Safety Act, social media, search engines, app stores, messaging services, and AI chatbots must use facial age estimation, ID checks, or account history to confirm user age. These measures aim to restrict children under 16 from accessing harmful content, including pornography, violence, self-harm, and gambling.
While proponents say this could significantly reduce youth access to dangerous online material, critics argue the approach centralises control in major tech companies, raises privacy concerns, and may erode online anonymity. They also question whether a system spanning global platforms is practically enforceable. With fines of up to AU$49.5 million, enforcement will be strict, but debate continues over whether alternative legislation might offer a more balanced solution. The policy reflects a global trend toward stricter digital age controls, and its rollout will test how children's rights and privacy fare in the digital age.
The Guardian -> Read the article here