Meta Ray-Ban Smart Glasses – Children’s Privacy Risks

Meta’s Ray-Ban Glasses Were Filming Children. Here Is What Happened.

Meta’s Ray-Ban smart glasses have been one of the most talked-about pieces of consumer technology of the past year. They look like ordinary sunglasses but contain a built-in camera, a microphone, and an AI assistant that can answer questions, take photos, and record video. More than 7 million pairs were sold in 2025.

Last week, a joint investigation by two Swedish newspapers — Svenska Dagbladet and Göteborgs-Posten — revealed what was happening to some of that footage.

What the Investigation Found

Contractors hired by Meta to review and improve the glasses’ AI systems, working at a facility in Nairobi, Kenya, were watching first-person video clips captured by wearers. The footage was supposed to be anonymised before review, with face-blurring technology obscuring identities. According to workers interviewed by the Swedish journalists, that technology regularly failed.

Contractors described seeing children getting changed, people in bathrooms, and private moments in people’s homes that were never intended to be recorded. Financial documents, bank cards, and other sensitive personal information were also visible in clips. Workers who raised concerns were reportedly dismissed.

Meta’s public position is that some interactions with its AI systems are reviewed by contractors to improve the service, and that it takes steps to filter content and protect privacy. The company said it is investigating the specific claims. One spokesperson described the glasses as designed so that “unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device.”

The workers’ accounts directly contradict that.

The Regulatory Response

The UK’s Information Commissioner’s Office confirmed it is contacting Meta following the investigation’s publication. A US class-action lawsuit was filed in federal court within days, alleging that Meta marketed the glasses using language like “designed for privacy, controlled by you” while failing to disclose that footage could be reviewed by third parties. The lawsuit describes the glasses as a “surveillance nightmare disguised as fashion.”

Regulators in California and Washington are also examining the glasses under state privacy statutes.

The Part That Is Still Coming

This is not only about what has already happened. Meta has internal plans — reported by the New York Times in February — to add AI facial recognition to the next version of the glasses. The feature, internally called “Name Tag,” would allow wearers to identify strangers in real time simply by looking at them.

That capability does not yet exist in the product. Meta has said it is aware of the safety and privacy risks and that its plans may change. The internal memo describing the project noted that the company intended to launch “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

For families with children, the implication is direct: a product already in millions of households captured footage of children without meaningful consent or oversight. The next version may be able to identify those children by name.

What This Means in Schools

Schools are already responding. Institutions across the US have updated their wearable technology policies to include smart glasses alongside phones and Apple Watches. Some have banned all wearable technology from school premises entirely.

The concern is less about the data-review issue itself (few parents wear Ray-Bans in a school corridor) and more about what the glasses normalise. A device that looks like ordinary eyewear, records continuously, and is carried into everyday social environments makes the boundaries around surveillance genuinely difficult to enforce, or even to identify.

What Parents Should Know

If your household has a pair of Meta Ray-Ban glasses, the safest position is to treat any footage captured as potentially reviewable by a third party, and to avoid using the AI features in environments where children are present.

If you are a parent whose child attends a school where staff or other parents might wear them, it is reasonable to ask whether your school has a policy covering smart glasses specifically. Many existing phone policies do not.

The broader question this story raises is one that will not go away with a single lawsuit or investigation: as AI-powered wearables become more common and less visually distinctive, the concept of consent in shared physical spaces — schools, playgrounds, children’s birthday parties — becomes significantly harder to uphold.

