Two significant pieces of US legislation moved closer to law this week, and taken together they signal a shift in where American child safety regulation is heading next: from social media feeds to AI chatbots.
What passed committee
On 5 March, the KIDS Act cleared the House Energy and Commerce Committee by 28 votes to 24. The bill consolidates several pieces of child safety legislation and includes two newer additions that reflect where the threat landscape has moved.
The first is the AWARE Act, which would require social media platforms to alert parents when their child searches for terms related to suicide or self-harm. The second is the SAFEBOTs Act, which specifically addresses AI chatbots and would require them to implement safety measures when interacting with minors.
Separately, COPPA 2.0, the long-awaited update to the 1998 Children’s Online Privacy Protection Act, passed the Senate Commerce Committee unanimously. COPPA 2.0 would raise the age of digital consent from 13 to 16, ban targeted advertising to minors, and give children the right to request deletion of their data.
Both bills still need full chamber votes and would then need to be reconciled between the House and Senate. Progress in past sessions has stalled at exactly that stage.
Why AI chatbots are now the focus
The inclusion of SAFEBOTs in the KIDS Act reflects a rapid escalation of concern about children’s interactions with AI companions. Roughly 78 chatbot safety bills are currently live across 27 US state legislatures. Oregon passed a chatbot safety bill this week; Washington’s is pending. California has proposed a four-year moratorium on AI chatbot toys designed for children.
The concern is specific: AI chatbots are designed to be engaging in ways that make them feel like friends, and several high-profile cases in 2024 and 2025 involved children becoming deeply emotionally dependent on chatbot companions, with some cases linked to self-harm. A lawsuit against Character.AI, filed by the family of a 14-year-old in Florida, is progressing through the courts and has driven significant congressional attention.
In the UK, Prime Minister Starmer announced this month that AI chatbots will be brought under the Online Safety Act, subjecting them to the same child safety obligations as social media platforms.
Why the AWARE Act’s parental alerts are contested
The parental alert provision has generated a separate debate. Meta announced a similar feature for Instagram earlier this month, and the response from mental health specialists has been cautious.
The concern is straightforward: if teenagers know that searching for help will trigger a parental notification, some will not search at all. The tension between parental oversight and a teenager’s ability to seek support privately is not easily resolved by legislation, and mental health organisations have consistently argued that notification-based systems can create as many risks as they address.
What this means in practice
For parents watching US legislation, the honest assessment is that COPPA 2.0 and the KIDS Act have both passed important milestones but face a difficult path to becoming law. Both have stalled before.
The state-level activity shows something more clearly: AI chatbot safety is becoming a regulatory priority in the same way social media safety did four or five years ago. If your child uses an AI companion app, it is worth knowing which one, what its safety features are, and whether the company behind it has been transparent about how it handles interactions with minors.
The technology moves faster than the legislation. That gap is where the risks live.
Sources
- House Energy and Commerce Committee vote, 5 March 2026: https://energycommerce.house.gov
- COPPA 2.0 Senate Commerce Committee passage: https://www.commerce.senate.gov
- Oregon chatbot safety bill, March 2026: https://oregon.gov
- UK Online Safety Act chatbot announcement: https://www.gov.uk