- The minister described AI-driven bullying—bots telling kids they’re losers or to kill themselves—as “super-charging” the problem.
- A national plan includes mandatory school response to bullying within 48 hours, specialist teacher training, and $5 million each for awareness campaigns and resources.
- One in four Australian students report being bullied regularly; the plan also comes ahead of a social-media age restriction for under-16s.
- The “new type of bullying” uses AI bots or fake accounts, making it harder to trace and more persistent than conventional peer harassment.
- The government flagged the urgency of embedding digital-safety literacy into schools—not just punishment.
Australia has entered a new phase in the battle for children’s digital safety. Education Minister Jason Clare sounded the alarm this week: artificial-intelligence chatbots are not just assistants—they’re being leveraged to target children, deliver bullying messages, and worse. He offered no sugar-coating: this is a “terrifying” shift, and one that demands immediate action.
The federal government responded with a comprehensive anti-bullying plan. Schools will be required to respond to incidents within 48 hours, teachers will receive specialist training in AI-enhanced harassment tactics, and millions of dollars will be allocated for nationwide campaigns supporting students and families. The focus extends beyond traditional peer bullying to digital forms where fake accounts or chatbots relentlessly harass victims—what one report terms “phoenixing,” the creation of multiple accounts to continue abuse after bans.
For parents, the warning is clear: the nature of online harm is changing fast. What your child experiences may no longer be another kid trolling them—it could be a bot programmed to trigger despair, isolation, or even self-harm. That shifts the conversation from “how to stop kid-to-kid bullying” to “how to recognise and defend against machine-driven harm.”
The Australian plan ties into a broader regulatory push—including age limits, verification, and digital-literacy education. While it will take time to roll out fully, families can use this moment to strengthen their own defences and start conversations about the risks children face online today.
What Parents Can Do
- Ask your child about the chatbots and apps they use: are they talking to “people” they don’t know?
- Set up a “check-in” routine: ask if they’ve received unexpected messages or felt uneasy during gaming or chat.
- Schools aren’t the only responders. Work with them, and agree on a simple home rule: “If someone tells you to do something bad online, tell me.”
- Help children develop critical thinking about AI: reinforce that bots don’t have their wellbeing at heart.
- Monitor screen time and communication apps—consider limiting unsupervised interaction with lesser-known chat services.
Source: The Guardian