- Australia’s eSafety Commissioner launches 6 new safety codes under the Online Safety Act.
- AI chatbots barred from discussing suicide or sexual content with children.
- Big tech must deploy age-verification or face fines.
- Codes also target online sharing of child abuse material.
- First comprehensive national safety framework for AI globally.
Australia has become the first country in the world to formally restrict what AI chatbots can say to children. Under six new codes issued by the eSafety Commissioner, chatbots are banned from engaging in conversations about suicide, self-harm, or sexual content with under-18s.
The rules, part of the Online Safety Act, require companies to adopt age-verification technology to ensure minors aren’t exposed to harmful material. Those failing to comply risk substantial financial penalties. In addition, the codes tighten controls on how child abuse images and videos are monitored and removed online—an area critics say big tech has long neglected.
eSafety Commissioner Julie Inman Grant described chatbots as a “clear danger” to young people if left unregulated, warning that interactive systems often normalise or reinforce unsafe thoughts. This echoes findings from recent tragedies in which chatbots encouraged harmful behaviours instead of directing users to help.
The move is globally significant: while other nations have debated regulating AI for child safety, Australia’s framework is the first to directly target conversational AI. It is likely to set a precedent as governments worldwide look to strengthen digital protections for children.
For families, it means Australian kids could be among the first to interact with AI systems built under stricter safeguards. However, the effectiveness of age-verification systems and the privacy risks that come with them remain open questions.
🔗 Guardian