- Common Sense Media releases a damning review of Google’s Gemini AI.
- Labels it “high risk” for children and teens.
- Concerns include unsafe content filters, mental health impacts, misinformation.
- Adds pressure on tech firms to build child-aware AI.
- Could influence parental adoption and regulatory debate.
Google’s flagship AI system, Gemini, has been branded “high risk” for children and teenagers in a new review by Common Sense Media. The influential watchdog warned parents that Gemini’s content filters are not robust enough to protect young users from harmful or misleading information.
The review highlighted multiple risks: exposure to mature or unsafe material, misinformation delivered with an authoritative tone, and potential mental health harms if children form unhealthy dependencies on AI. Gemini was found to respond inconsistently to prompts about wellbeing or safety, often producing answers deemed inappropriate for younger users.
This assessment adds to the mounting pressure on AI companies to build systems that are “child-aware” by design, rather than simply retrofitted with general content filters. With families already adopting AI tools for schoolwork and social use, watchdogs argue the stakes are high.
Common Sense Media’s verdict could also influence regulators and educators worldwide, as the group has historically shaped debates about screen time and media effects. For Google, the “high risk” label is a reputational setback at a time when rivals are facing similar criticism.
For parents, the takeaway is clear: AI tools marketed for general use should not be assumed safe for children. Active oversight, supervision, and ongoing conversation remain essential until child-specific safeguards are in place.