16 apps rated for child safety: the 2026 scorecard

I’ve spent a lot of time being asked the same question in different ways: is this app safe for my child? The honest answer is always: it depends on which risks you’re most worried about, and on what your child is actually doing on it. So I built something to help answer it more precisely. The Platform Safety Scorecard launched on 17 April 2026, and this post explains how it works and what we found.

Six categories, because that’s what parents actually ask about

Every platform is rated across six categories: content risk, contact risk, privacy, parental controls, data collection, and transparency. Scores run from 1 (highest risk) to 4 (lowest risk), averaged across all six categories to produce an overall rating.

These six came directly from the questions parents ask most. Content risk covers what a child is likely to encounter. Contact risk covers who can reach them. Privacy covers what the platform knows about your child. Parental controls covers what tools are actually available and whether they work in practice. Data collection covers what is gathered, retained, and shared. Transparency covers whether the platform is honest with users and regulators about how it operates.

A platform can score badly on one category and well on another. That’s the point — a single reputation doesn’t tell you where the actual risks sit.
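For readers who want to see the arithmetic, here is a minimal sketch of how an overall rating can be derived from the six category scores. The category keys and the example values are illustrative assumptions, not the scorecard's actual data or methodology:

```python
# Illustrative sketch: averaging six 1-4 category scores into an overall rating.
# 1 = highest risk, 4 = lowest risk. Example values below are hypothetical.

CATEGORIES = ["content_risk", "contact_risk", "privacy",
              "parental_controls", "data_collection", "transparency"]

def overall_score(ratings: dict) -> float:
    """Average the six category scores, rounded to one decimal place."""
    missing = set(CATEGORIES) - ratings.keys()
    if missing:
        raise ValueError(f"missing categories: {missing}")
    return round(sum(ratings[c] for c in CATEGORIES) / len(CATEGORIES), 1)

# Hypothetical platform rated 1 in three categories and 2 in the other three:
example = {
    "content_risk": 1, "contact_risk": 1, "data_collection": 1,
    "privacy": 2, "parental_controls": 2, "transparency": 2,
}
print(overall_score(example))  # 1.5
```

Averaging keeps the headline number simple, but as the examples below show, a single low category score can matter more than the average does.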

The platforms that need the most attention

X is the highest-risk mainstream platform on the scorecard, with an average score of 1.5: “severe risk.” Adult content is explicitly permitted. There are no in-app parental controls; the only oversight mechanism requires birth certificates and ID documents submitted via a form. Trust and Safety has been dismantled. If your teenager is on X, there is genuinely nothing meaningful you can do to make it safer from the outside.

Character.AI scores 1.7 — the highest-risk AI platform we assessed. It has been linked to serious harm including at least one teen suicide. Open-ended chat was removed for under-18s in November 2025, but no meaningful parental controls exist. Safety experts and Qustodio recommend against use for anyone under 18. The Drexel University research published earlier this month — where teenagers described addiction markers in their own words — makes this finding harder to ignore.

Snapchat scores 1 on contact risk; it is the platform most often cited in sextortion cases. Disappearing messages make meaningful oversight nearly impossible regardless of what other controls are in place. That single category score tells you something important about the platform’s structural design.

TikTok scores 1 on both privacy and data collection — the most aggressive data collection of any social platform, including biometrics and keystroke patterns, with data accessible to ByteDance.

Facebook scores 1 on data collection — the most extensive of any platform, tracking across Meta properties, third-party sites, and offline behaviour.

The platforms that perform better than their reputation

YouTube has the strongest parental controls of any platform scored, with a 4 in both parental controls and transparency. That surprises a lot of parents who think of YouTube as one of the riskier options. Supervised accounts, three content levels, and — since March 2026 — the ability to set the Shorts feed to zero are genuinely more developed than what most competitors offer. YouTube Kids remains the safest option for younger children.

Instagram made the biggest improvement of any platform following mandatory teen account settings and PG-13 content filtering introduced in April 2026. Teens cannot opt out of the filtering. It is not the same product it was 18 months ago, in ways that matter for families.

How to use the scorecard

The scorecard is at wiredparents.com/tracker. Select your child’s age and the apps they use, and you’ll get a breakdown by category showing exactly where the risks sit. The age filter matters — the relevant risks shift significantly between a 10-year-old and a 15-year-old.

A high overall score doesn’t mean an app is safe in every area. A lower score in one category — contact risk on Snapchat, data collection on TikTok — can matter more than the average suggests. The report shows you the detail, not just the headline number.

If there’s an app you think should be on the scorecard and isn’t, email [email protected].

Sources: Wired Parents Platform Safety Scorecard — full ratings and methodology
