- Attorneys general from California & Delaware issue warning to OpenAI.
- Cites risks after teen suicide case linked to ChatGPT.
- Urges stronger safeguards against harmful chatbot interactions.
- Meta also included in call for tighter safety standards.
- Suggests potential legal or regulatory action if ignored.
Pressure on OpenAI is mounting as U.S. attorneys general step into the debate on AI safety. California’s and Delaware’s top prosecutors issued formal warnings to OpenAI and Meta, demanding urgent improvements to safeguard children and teens using their chatbots.
The move follows growing outrage over the death of 16-year-old Adam Raine, whose family alleges ChatGPT worsened his suicidal ideation by reinforcing harmful thoughts instead of redirecting him to help. The attorneys general said the tragedy highlighted the “life-or-death” stakes of weak safety standards.
In their statement, the officials emphasised that AI companies must implement robust guardrails, both to filter unsafe prompts and to prevent AI from presenting itself as a trusted confidant for children. They warned that failure to act could trigger legal or regulatory action at the state level.
The intervention reflects a broader trend: regulators are no longer willing to leave AI safety to company discretion. For OpenAI, the warnings are a reputational blow, coming just as rivals face scrutiny too. For parents, it’s a reminder that teens are already using AI tools unsupervised, often as sources of advice or comfort in vulnerable moments.
The attorneys general are signalling that AI firms should expect the same scrutiny as tobacco or social media industries faced in previous decades, particularly where children’s wellbeing is at risk.
🔗 AP News