OpenAI funded a child safety coalition — without telling the members
In mid-March, child safety organisations across the United States started receiving emails from a group called the Parents & Kids Safe AI Coalition, asking them to endorse a set of AI policy priorities. The principles sounded reasonable — age verification, parental controls, no advertising targeted at children. Many groups signed up. What the emails did not mention was that the coalition had been founded and was entirely funded by OpenAI, the company behind ChatGPT. When that became public in early April, at least two member organisations quit.
What the coalition was actually pushing for
The policy principles the coalition was asking nonprofits to endorse were not random. They closely mirrored the provisions of a California ballot initiative that OpenAI itself had co-sponsored, a measure that would establish rules for how AI companies interact with children, including mandatory age verification.
That matters because OpenAI already provides age verification services. The measure it was quietly lobbying for would, in effect, create a legal requirement for exactly the kind of product it sells. A University of Michigan professor who reviewed the coalition’s public materials told the San Francisco Standard that the coalition meets the classic definition of astroturfing: a campaign designed to look like a grassroots movement while being driven by a single corporate interest.
OpenAI pledged up to $10 million to support the coalition, according to a report in the Wall Street Journal. That funding was not disclosed in the outreach emails sent to child safety groups, and was not prominently listed on the coalition’s website.
Why child safety groups are angry
The organisations that joined the coalition did so because they believed they were lending their credibility to an independent campaign. Several said they had no idea OpenAI was involved until after its role became public. “It’s a very grimy feeling,” one nonprofit leader told the San Francisco Standard, adding that the emails were “pretty misleading.”
Josh Golin, executive director of FairPlay — a children’s advocacy group that declined to join after discovering OpenAI’s role — put it plainly: “I don’t want OpenAI to write their own rules for how they interact with children.” That concern goes beyond this specific coalition. The criticism is that OpenAI is using the language of child safety to shape regulation in ways that protect its commercial position, rather than genuinely strengthening protections for children.
Other major tech companies have used similar tactics. Meta and Google have both backed age verification legislation that shifts compliance burdens onto app stores and device manufacturers rather than the platforms themselves — reducing their own exposure while appearing to support child safety measures.
What this means for parents following the AI debate
None of this means that the policy principles the coalition was promoting are necessarily wrong. Age verification and parental controls are worth having a serious conversation about. But the OpenAI story is a useful reminder that when a tech company backs child safety legislation, it is worth asking what the legislation actually does — and who it benefits.
The most protective regulations tend to be the ones that hold platforms liable when children are harmed, give families the right to sue, and require platforms to change their design rather than just their paperwork. Legislation that focuses primarily on age verification at the point of download — while leaving the product itself unchanged — is a lower bar, and one that happens to suit companies already offering verification services.
As governments around the world move to regulate AI and children, the question of who is shaping those regulations, and why, will matter as much as the regulations themselves. The OpenAI coalition story is a useful place to start asking that question.
Sources:
San Francisco Standard — Kids groups say they didn’t know OpenAI was behind their child safety coalition, 1 April 2026
Gizmodo — Group pushing age verification for AI turns out to be backed by OpenAI, 1 April 2026
Futurism — OpenAI secretly founded a child safety coalition to push its own agenda, April 2026