What The Meta Trials Reveal About How Instagram Actually Works

Two separate trials playing out in American courtrooms this month are doing something previous lawsuits against social media companies haven’t managed: forcing Meta to show how decisions actually get made when child safety conflicts with business priorities. The internal documents and executive testimony emerging from Los Angeles and New Mexico reveal a company that repeatedly knew about risks to children and chose growth anyway.

This isn’t speculation. It’s what Meta’s own emails, memos, and research show.

Two trials, two different approaches

The Los Angeles case centres on a single plaintiff known as KGM, now 20 years old, who alleges that using Instagram from age nine caused addiction and harmed her mental health. The case is significant because Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri both testified in person, facing direct questioning about design choices and safety priorities.

The New Mexico case takes a different approach. Attorney General Raúl Torrez conducted an undercover investigation in which authorities posed as children online and documented what happened next. The state alleges Meta created a “marketplace” and “breeding ground” for predators who target children for sexual exploitation. The most damaging evidence comes from Meta’s own internal documents, newly unsealed, showing the company understood the scale of child safety problems on its platforms.

Both trials are revealing how platforms actually work behind the reassuring public statements about safety.

What Zuckerberg’s testimony revealed

When Zuckerberg took the stand on 18 February in Los Angeles, lawyers presented him with documents showing that 4 million children under the age of 13 were using Instagram in the United States alone. Instagram’s terms of service require users to be at least 13 years old.

Zuckerberg acknowledged that some users lie about their age during sign-up. Instagram didn’t begin requiring birthdates at all until late 2019. When pressed on the delay, Zuckerberg said there was “some concern around privacy” behind it, but that he believes the company “eventually landed on the right policy.”

The plaintiff in the LA case, KGM, started using Instagram at age nine. She wasn’t asked for her birthdate. She developed what her lawyers describe as an addiction to the platform, sometimes spending more than 16 hours in a single day on Instagram despite her mother’s attempts to limit use. She alleges this contributed to anxiety, body dysmorphia, and suicidal thoughts.

Lawyers also questioned Zuckerberg about Instagram’s beauty filters—digital effects that alter users’ appearance to make them look thinner, smoother, or more conventionally attractive. Internal emails showed that Margaret Stewart, Facebook’s vice president of product design and responsible innovation, expressed concerns about these filters. In one message, she said she didn’t believe lifting a temporary ban on plastic surgery filters was “the right call given the risks,” and mentioned dealing with a personal family situation that gave her “first-hand knowledge” of alleged harms.

Zuckerberg responded that many Meta employees disagree with company decisions, something he said the company encourages, but that there ultimately wasn’t enough causal evidence to support the claims of harm. Instagram went on to allow user-created beauty filters but chose not to promote them in the app.

The testimony revealed a pattern: safety concerns raised internally, weighed against other priorities, and often resolved in favour of features that drive engagement.

What internal documents show about child exploitation

The New Mexico trial has produced even more troubling revelations through internal Meta documents that were unsealed this week.

According to newly released legal filings, Meta employees discussed how the company’s decision to implement end-to-end encryption on Facebook Messenger would eliminate approximately 7.5 million of the child sexual abuse material reports that Meta currently discloses to authorities each year.

End-to-end encryption scrambles messages so that even Meta cannot read them, which privacy advocates praise as protecting users from surveillance. Law enforcement and child safety organisations have warned that it prevents companies from detecting illegal content in private messages.
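To make the mechanism concrete, here is a minimal sketch, in Python using the PyNaCl library, of why an end-to-end encrypted message is unreadable to the platform relaying it. This is a generic illustration of public-key encryption between two devices, not Meta’s actual implementation; the names and message are invented for the example.

from nacl.public import PrivateKey, Box

# Each user's device generates its own key pair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key before the message leaves her phone.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"see you after school")

# The platform only ever handles this ciphertext. Without Bob's private key it
# cannot recover the text, so server-side scanning of message content is impossible.
print("what the server sees:", ciphertext.hex()[:48], "...")

# Only Bob, holding his private key, can decrypt the message.
receiving_box = Box(bob_key, alice_key.public_key)
print("what Bob sees:", receiving_box.decrypt(ciphertext).decode())

Once message content itself is off-limits, detection has to rely on other signals, such as user reports or unencrypted account behaviour, which is exactly the trade-off the internal documents describe.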

Internal messages from Meta show employees were aware of this trade-off. In a message dated 14 December 2023—the same month Meta announced it would begin rolling out default end-to-end encryption for Messenger—an employee wrote: “There goes our CSER [Community Standards Enforcement Report] numbers next year.”

A 2019 internal note stated: “Without robust mitigations, E2EE on Messenger will mean we are significantly less able to prevent harm against children.” Another internal document from June 2019 said: “We will never find all of the potential harm we do today on Messenger when our security systems can see the messages themselves.”

New Mexico’s attorneys allege that “Meta knew that E2EE would make its platforms less safe by preventing it from detecting and reporting child sexual exploitation and the solicitation and distribution of child exploitation images sent in encrypted messages. Meta further knew that its safety mitigations would be inadequate to address the risks.”

A Meta safety researcher warned internally that approximately 500,000 cases of child exploitation were happening daily on the platform, according to documents cited in the New Mexico case.

Meta has said in response that it continues to develop safety tools and can review and address private encrypted messages if they are reported for child safety-related issues. The company has pushed back on New Mexico’s allegations, saying it is “focused on demonstrating our longstanding commitment to supporting young people.”

The encryption dilemma

The tension between privacy and safety isn’t hypothetical. It’s a genuine trade-off with real consequences.

Encryption prevents governments, hackers, and even the platform itself from reading private messages. This protects political dissidents, journalists, and ordinary people from surveillance. It also prevents companies from detecting illegal content being shared privately, including child sexual abuse material.

What the Meta documents reveal is that the company understood this trade-off and proceeded with encryption knowing it would significantly reduce its ability to detect and report child exploitation. The decision prioritised user privacy over child safety monitoring, with full knowledge of what would be lost.

The question isn’t whether encryption is good or bad. The question is whether Meta adequately communicated this trade-off to parents and the public, and whether the alternative safety measures it developed were sufficient to address the risks it knew it was creating.

What “we’re making it safer” actually means

Both trials expose the gap between Meta’s public statements about child safety and what actually happens inside the company when safety conflicts with other priorities.

Meta has repeatedly announced new safety features: Teen Accounts with built-in restrictions, parental supervision tools, content filters, and mechanisms to report harmful content. The company points to these as evidence of commitment to youth safety.

The internal documents and testimony show something more complicated. Safety features are developed, debated, sometimes implemented with restrictions that limit their effectiveness, and deployed alongside design choices that the company knows carry risks. The beauty filters were allowed but not promoted. Birthdate requirements were delayed due to privacy concerns. Encryption was implemented knowing it would eliminate millions of child abuse reports.

These aren’t binary choices between safety and harm. They’re judgement calls about acceptable risk, made by executives balancing competing priorities including growth, engagement, user experience, and safety.

What the trials reveal is how those judgements actually get made.

Why these trials matter

Previous lawsuits against social media companies have largely been dismissed under Section 230 of the Communications Decency Act, which protects platforms from liability for user-generated content. Both the Los Angeles and New Mexico cases are attempting to work around Section 230 by focusing on product design and company knowledge rather than individual posts.

The LA case argues that Meta’s design choices—infinite scrolling, algorithmic recommendations, features designed to maximise engagement—make the platform addictive. This isn’t about what users post; it’s about how the platform is built.
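To illustrate what “designed to maximise engagement” means in concrete terms, here is a deliberately simplified sketch in Python of the two patterns the complaint names: recommendations ranked by predicted engagement, and a cursor-based feed with no natural end. It is a generic illustration of the design pattern, not Meta’s actual code, and every name and score is invented for the example.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    predicted_engagement: float  # model's estimate of how likely the user is to react or keep watching

def rank_feed(candidates):
    # Recommendation step: order candidates purely by predicted engagement.
    return sorted(candidates, key=lambda p: p.predicted_engagement, reverse=True)

def next_page(ranked, cursor, page_size=10):
    # Infinite-scroll step: every request returns another slice plus a new cursor,
    # so the client can always ask for more and the feed never signals "you're done".
    page = ranked[cursor:cursor + page_size]
    return page, cursor + len(page)

# The client keeps calling next_page as the user scrolls.
feed = rank_feed([Post(i, s) for i, s in enumerate([0.2, 0.9, 0.5, 0.7])])
page, cursor = next_page(feed, cursor=0, page_size=2)
print([p.post_id for p in page], "next cursor:", cursor)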

The New Mexico case uses the state’s Unfair Trade Practices Act, a consumer protection statute, to argue that Meta misled parents about platform safety. The undercover investigation provides evidence of what parents actually encounter when their children use the platform versus what Meta’s public statements suggest.

If either case succeeds, it could open the door to similar lawsuits nationwide, not based on what content appears on platforms but on how platforms are designed and what companies knew about the risks.

What parents should understand

The trials aren’t producing simple answers. They’re revealing how complex and calculated the decisions behind these platforms actually are.

When Meta says it’s committed to teen safety, that’s likely true at some level. The company has teams working on safety features. It invests in content moderation. It responds to criticism by developing new protections.

What the trials show is that safety is one priority among many, and when it conflicts with growth, engagement, or other business objectives, safety doesn’t automatically win. Sometimes it does. Sometimes it doesn’t. The decisions involve trade-offs that the company understands clearly but doesn’t always communicate to users or parents.

If you’re making decisions about whether your child should use Instagram, the relevant question isn’t whether Meta cares about safety in the abstract. The question is what specific trade-offs the company has made, what it knew about risks, and whether those trade-offs align with what you’re comfortable with for your family.

These trials are providing answers to those questions in Meta’s own words.

