On 5 May 2026, Meta announced it is expanding an AI tool that detects suspected under-16s on its platforms, even when those users registered with an adult birthday. The technology launches on Facebook in the United States for the first time and on Instagram across 27 European Union countries and Brazil. UK and EU expansion to Facebook is scheduled for June.
If your child opened an Instagram or Facebook account by entering a fake date of birth, this change affects them directly, and the fix is to update the account to their correct date of birth.
What the technology actually does
Meta has spent the past two years building AI that scans entire profiles for clues that an account holder is younger than they claimed when they signed up. The signals it looks for are everyday teenage behaviour: posts about birthdays, mentions of school grades, captions referencing school events, comments wishing someone happy birthday, photos with cake. The AI looks across posts, captions, bios, comments, Reels, Live and Facebook Groups.
When the system flags an account as likely belonging to a teen, that account is automatically moved into Teen Account protections. That means the account becomes private by default, sensitive content filters tighten, messaging limits kick in, and screen-time prompts become harder to dismiss.
Meta first deployed this in the US, UK, Canada and Australia on Instagram. It says millions of accounts have already been moved into Teen Account protections through this method.
What changes for your child
If your child set up an Instagram or Facebook account using a fake adult birthday, three things may happen in the coming weeks.
The account may be automatically restricted. They will not be asked to confirm anything first; they will simply find that messages from strangers are blocked, that their content is now private, and that some of the features they were using are gone or limited.
If they want to override the restriction, they will need a parent’s involvement. Meta is asking suspected teens to confirm their age through a verified process — typically uploading ID or doing a video selfie analysed by a third-party tool called Yoti. There is no longer a “just enter a different birthday” option for accounts the AI has flagged.
Younger children — those Meta believes are under 13 — are removed from the platforms entirely. Instagram and Facebook both require a minimum age of 13. Meta says it is using the same AI tools to find and remove under-13s, not just to restrict teen accounts.
Why now
The honest answer is regulatory pressure. Meta has had this technology for years. It chose this moment to deploy it widely because the alternative is worse for the company.
Australia is currently investigating five major platforms — Facebook, Instagram, Snapchat, TikTok and YouTube — for failing to comply with its under-16 social media ban. Each platform faces fines of up to A$49.5 million per breach. The UK regulator Ofcom is due to publish its own assessment of platform compliance this month, having given the same platforms until 30 April to demonstrate meaningful action on age verification, grooming, algorithmic feeds and product testing on children. The European Commission ruled in February that TikTok had violated the Digital Services Act over addictive design.
Meta is also publicly pushing for app store-level age verification, where Apple and Google would verify ages once at device setup and pass that information to apps. In its 5 May announcement, the company cited polling that 88% of US parents support this approach. Translation: Meta wants the responsibility off itself and onto the operating system makers. There is a legitimate policy argument for this. There is also a clear corporate interest in shifting accountability.
What to do today
If your child has an Instagram or Facebook account, here are three practical things to do this week.
Check what age the account is registered as. Go to Settings, then Account, then check the date of birth. If it shows an adult age and your child is under 18, the account has been operating without the safety defaults that should have been in place. The new Meta system may catch it — but you can fix it now, with your child, rather than waiting for an automated flag they may try to argue their way around.
Have the conversation about what happens next. If Meta’s AI does flag the account, your child may come to you frustrated that features have disappeared. The conversation now is easier than the conversation later. Tell them what is changing on these platforms, why, and that the restrictions are not a punishment.
Decide whether you actually want the account in place at all. This is a moment where the platform itself is admitting, through its actions, that the existing safety protections were not being applied to a significant proportion of teen accounts. If you have been on the fence about whether your under-16 should be on Instagram or Facebook, this announcement is a signal worth taking seriously.
For families who have already decided their child is on these platforms, the new defaults are an improvement. For families still deciding, it is worth asking why a system this effective took until 2026 to roll out — and whether the right time to give your child an account is when the platform is genuinely ready for them, not just when their classmates are on it.
This story is part of a bigger pattern. Last week Roblox accepted an 18% drop in its share price for making child safety changes. This week Meta is shipping AI that finds children who lied about their age. The platforms are moving — not because they want to, but because they are being forced to. That distinction is not always obvious to parents.
Sources:
Meta — New AI-Powered Age Assurance Measures, 5 May 2026
Reuters — Meta Expands Teen Safeguards to 27 EU Countries and US Facebook, 5 May 2026
eSafety Commissioner — March 2026 Compliance Update on the Social Media Minimum Age law
Ofcom — Open letter to six platforms on children's safety, 12 March 2026