Meta’s Ray-Ban Glasses Were Filming Children. Here Is What Happened.
Meta’s Ray-Ban smart glasses have been one of the most talked-about pieces of consumer technology of the past year. They look like ordinary sunglasses but carry a built-in camera, a microphone, and an AI assistant that can answer questions, take photos, and record video. More than 7 million pairs were sold in 2025.
Last week, a joint investigation by two Swedish newspapers — Svenska Dagbladet and Göteborgs-Posten — revealed what was happening to some of that footage.
What the Investigation Found
Contractors hired by Meta to review and improve the glasses’ AI systems, working at a facility in Nairobi, Kenya, were watching first-person video clips captured by wearers. The footage was supposed to be anonymised before review — Meta uses a face-blurring technology to obscure identities. According to workers interviewed by the Swedish journalists, that technology regularly failed.
The contractors described seeing children getting changed, people in bathrooms, and private moments in people’s homes that were never meant to be recorded. Financial documents, bank cards, and other sensitive personal information were also visible in some clips. Workers who raised concerns were reportedly dismissed.
Meta’s public position is that some interactions with its AI systems are reviewed by contractors to improve the service, and that it takes steps to filter content and protect privacy. The company said it is investigating the specific claims. A spokesperson said the glasses are designed so that “unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device.”
The workers’ accounts directly contradict that.
The Regulatory Response
The UK’s Information Commissioner’s Office confirmed it is contacting Meta following the investigation’s publication. A US class-action lawsuit was filed in federal court within days, alleging that Meta marketed the glasses using language like “designed for privacy, controlled by you” while failing to disclose that footage could be reviewed by third parties. The lawsuit describes the glasses as a “surveillance nightmare disguised as fashion.”
Regulators in California and Washington are also examining the glasses under state privacy statutes.
The Part That Is Still Coming
This is not only about what has already happened. Meta has internal plans — reported by the New York Times in February — to add AI facial recognition to the next version of the glasses. The feature, internally called “Name Tag,” would allow wearers to identify strangers in real time simply by looking at them.
That capability does not yet exist in the product. Meta has said it is aware of the safety and privacy risks and that its plans may change. The internal memo describing the project noted that the company intended to launch “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”
For families with children, the implication is direct: a product already in millions of households captured footage of children without meaningful consent or oversight. The next version may be able to identify those children by name.
What This Means in Schools
Schools are already responding. Institutions across the US have updated their wearable technology policies to include smart glasses alongside phones and Apple Watches. Some have banned all wearable technology from school premises entirely.
The concern is not primarily the data-review issue; few schools have parents wearing Ray-Bans in the corridor. It is about what the glasses normalise. A device that looks like ordinary eyewear, can record discreetly, and travels into everyday social environments makes the boundaries around surveillance genuinely difficult to enforce, or even to identify.
What Parents Should Know
If your household has a pair of Meta Ray-Ban glasses, the safest position is to treat any footage captured as potentially reviewable by a third party, and to avoid using the AI features in environments where children are present.
If you are a parent whose child attends a school where staff or other parents might wear them, it is reasonable to ask whether your school has a policy covering smart glasses specifically. Many existing phone policies do not.
The broader question this story raises is one that will not go away with a single lawsuit or investigation: as AI-powered wearables become more common and less visually distinctive, the concept of consent in shared physical spaces — schools, playgrounds, children’s birthday parties — becomes significantly harder to uphold.
Sources
- Svenska Dagbladet / Göteborgs-Posten investigation, published 27 February 2026
- TechCrunch: Meta sued over AI smart glasses’ privacy concerns: https://techcrunch.com/2026/03/05/meta-sued-over-ai-smartglasses-privacy-concerns-after-workers-reviewed-nudity-sex-and-other-footage/
- The Register: Meta smart glasses face UK privacy probe: https://www.theregister.com/2026/03/05/ico_meta_glasses/
- Transparency Coalition: Meta’s next move — AI facial recognition glasses: https://www.transparencycoalition.ai/news/metas-next-move-creepy-ai-facial-recognition-to-scan-kids-in-public
- Slate: Meta’s Smart Glasses Are Wreaking Havoc in Schools: https://slate.com/technology/2026/02/mark-zuckerberg-meta-ai-glasses-school.html
- CPO Magazine: Privacy Lawsuit Brought Against Meta Over Contractor Access to Videos: https://www.cpomagazine.com/data-privacy/privacy-lawsuit-brought-against-meta-over-contractor-access-to-videos-from-ai-smart-glasses/