The UK has become the first country to require social media platforms to actively hunt down and remove self-harm content before users can see it.
What’s happening: On September 8, 2025, the UK government announced urgent changes to the Online Safety Act, making content that encourages or assists serious self-harm a “priority offence” for all users, not just children. This means platforms must use cutting-edge technology to proactively find and remove this content rather than reacting only after someone has already been exposed to it.
Why this matters to all parents: Whilst platforms already had to protect children from self-harm content, the UK recognised that adults battling mental health challenges are equally at risk. This expansion makes the UK the first country globally to require platforms to proactively remove self-harm content for users of all ages, setting a precedent other countries are likely to follow.
The bigger picture: This is the most comprehensive approach to self-harm content anywhere in the world. Other countries have focused mainly on protecting children, but the UK’s expansion to all users reflects a growing recognition that dangerous content affects people of all ages, particularly during mental health crises.
Here’s what the new rules actually require, how this affects families worldwide, and which other countries are considering similar approaches.
What Parents Need to Know
What the new rules require: Content encouraging or assisting serious self-harm will now be treated as a priority offence, triggering the strongest possible legal protections. Platforms must use advanced technology to actively search for and remove this content before it reaches users, rather than waiting for reports after exposure.
What this means for your family: Social media platforms operating in the UK must now implement the same proactive content detection systems for self-harm material that they already use for the most serious illegal content. This should significantly reduce the chance that anyone in your family encounters content that could trigger a mental health crisis.
How this differs from previous rules: Previously, platforms only had to protect children from self-harm content and could rely on users reporting problematic material. Now they must actively seek out and remove this content for all users before anyone sees it.
The enforcement mechanism: Platforms failing to comply face fines of up to 10% of their global annual revenue or £18 million, whichever is higher; for a platform with £5 billion in global revenue, for example, that cap would be £500 million. Ofcom, the UK’s communications regulator, will use these powers to ensure platforms implement the required proactive systems.
Why the UK made this change: The government recognised that adults struggling with their mental health are just as vulnerable to content that could trigger a crisis or worse. The change aims to protect people during their most vulnerable moments, when they might be seeking support online but could instead encounter harmful material.
What Regulators Are Saying
Technology Secretary Peter Kyle emphasised that the changes target content that could “destroy lives and tear families apart,” noting that platforms must now prevent this material from appearing in the first place rather than reacting after harm has occurred.
The Samaritans charity welcomed the changes, stating that whilst the internet can be a source of support for people struggling with mental health, “damaging suicide and self-harm content can cost people their lives.”
Ofcom, which will enforce these rules, has indicated it will use its full powers to hold platforms accountable, with the regulator stating it’s ready to take action against companies that fail to implement adequate protections.
How This Affects Your Family
If you’re in the UK: These protections are on their way. The regulations become legally enforceable 21 days after parliamentary approval, expected in autumn 2025, at which point platforms operating in the UK must have proactive systems in place to remove self-harm content.
If you’re elsewhere: Other countries are closely watching the UK’s comprehensive approach. The EU, Australia, and several US states are considering similar expansions of their online safety laws beyond just child protection.
Understanding global trends: This represents a shift from reactive content moderation (removing content after reports) to proactive detection and removal, particularly for content that poses immediate safety risks.
Warning signs to watch for: Changes in behaviour after social media use, increased distress or anxiety following online activity, or withdrawal from family and friends may indicate exposure to harmful content despite these new protections.