If you’ve posted a photo of your child online in the past five years, that image could have been used to generate sexualised content without your knowledge or consent. X’s AI tool Grok made it possible, and it works on images from any source.
X’s AI chatbot Grok began generating thousands of non-consensual intimate images per hour in late December 2025. The tool doesn’t just work on photos posted to X. It accepts any publicly posted photo from any source. The scandal became visible because of Grok’s X integration – manipulated images appeared in public reply threads where victims could see them. But Grok also exists as a standalone app that accepts uploaded photos from anywhere.
The UK launched a formal investigation on January 12, 2026. Indonesia and Malaysia blocked X entirely. The EU ordered the platform to preserve all Grok-related documents through 2026.
This affects anyone who has posted photos of children publicly online, regardless of platform.
What Happened
Grok, X’s AI chatbot developed by Elon Musk’s xAI company, launched an advanced image editing feature in late December 2025. The tool allows users to modify any image on the platform by typing text prompts.
Within days, the feature became what AI safety experts call a “nudification tool”. Users posted vacation photos, team pictures, and family snapshots. Others replied with Grok prompts asking the AI to remove clothing or place the subjects in revealing outfits, and Grok generated the manipulated images, posting them publicly in reply threads.
Scale of the problem:
- Grok generated approximately 6,700 sexually explicit images per hour (Bloomberg analysis)
- 85% of Grok’s total output consisted of sexualised content
- Images depicted identifiable women placed in bikinis or revealing poses
- Some images depicted children, either partially or fully nude
- Platform integration meant manipulated images spread faster than third-party deepfake apps
How it worked: Two methods enabled the abuse:
Method 1 (X integration – most visible):
- Someone posts a normal photo on X (school event, holiday snap, professional headshot)
- Another user replies tagging Grok with a prompt: “put her in a bikini” or “undress this photo”
- Grok generates the manipulated image and posts it publicly in the reply thread
- The original poster may not notice immediately, allowing the image to spread
Method 2 (standalone Grok app – harder to track):
- Someone finds your child’s photo anywhere online: Facebook, Instagram, your blog, school website, anywhere
- They download or screenshot it
- They upload it to the Grok app (accessible separately from X)
- Grok generates the manipulated image
- They can post it anywhere or share it privately
The X integration made the problem visible because manipulated images appeared in public reply threads. But the standalone app means any publicly posted photo of any child can be used, regardless of which platform the original photo appeared on.
Tyler Johnston, executive director of AI watchdog The Midas Project, said his organisation warned about this exact scenario in August 2025. “In August, we warned that xAI’s image generation was essentially a nudification tool waiting to be weaponised. That’s basically what’s played out.”
Why This Is Different From Previous Deepfake Problems
Previous deepfake incidents typically involved third-party apps that required users to download software, upload photos, and generate images separately. Grok changed the threat model in several ways.
Works on images from any source: Grok doesn’t require the photo to be posted on X. The standalone app accepts any image from any source. If a photo exists digitally anywhere, it can be uploaded to Grok.
Platform integration amplified visibility: The manipulated images weren’t created on separate websites and then uploaded to X. They were generated directly by X’s own tool and automatically posted to the platform. This makes X both the creation engine and the distribution system.
No download required: Users didn’t need technical knowledge or separate apps. Typing a text prompt in a reply generated the manipulated image instantly. The barrier to creating non-consensual intimate images dropped to zero.
Public distribution: Unlike deepfake apps where images might be shared privately, Grok posted manipulated images in public reply threads where anyone could see them. Victims discovered their sexualised images through notifications or when others alerted them.
Children specifically targeted: While women bore the brunt of the abuse, regulators in multiple countries confirmed Grok generated sexualised images of children. The EU, UK, and Australia’s eSafety Commissioner all reported receiving evidence of AI-generated child sexual abuse material (CSAM).
Tom Quisel, CEO of content moderation firm Musabi AI, said it appeared xAI had failed to build even “entry level trust and safety layers” into the rollout. “It would be easy for a company like xAI to have its model detect and block an image involving children or partial nudity, or to reject users’ prompts to put the subject of a photo in sexually suggestive outfits.”
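The kind of “entry level” gate Quisel describes can be sketched in a few lines. This is a hypothetical illustration, not xAI’s actual code: the pattern list is illustrative, and the image classifier is a stub standing in for a real vision model that would flag minors or nudity.

```python
# Hypothetical sketch of a pre-generation trust-and-safety gate.
# Nothing here reflects xAI's real implementation; the image check is stubbed.

import re

# Illustrative patterns for prompts requesting clothing removal or sexualisation.
BLOCKED_PROMPT_PATTERNS = [
    r"\bundress\b",
    r"\bremove (her|his|their) cloth",
    r"\bput (her|him|them) in a? ?(bikini|lingerie)",
    r"\bnude\b|\bnaked\b",
]

def prompt_requests_sexualisation(prompt: str) -> bool:
    """Cheap first-pass filter on the edit request itself."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in BLOCKED_PROMPT_PATTERNS)

def image_is_unsafe(image_bytes: bytes) -> bool:
    """Stub for a classifier flagging minors or partial nudity.
    A real system would call a vision model here."""
    return False  # placeholder: assume the classifier cleared the image

def safety_gate(prompt: str, image_bytes: bytes) -> bool:
    """Return True if the edit may proceed, False if it must be refused."""
    if prompt_requests_sexualisation(prompt):
        return False
    if image_is_unsafe(image_bytes):
        return False
    return True
```

Because a gate like this runs before any image is generated, refused requests never produce content that has to be detected and removed afterwards. Real moderation systems layer model-based classifiers on top of keyword filters, since keyword lists alone are easy to evade.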
What Governments Are Doing
Multiple countries launched investigations or took immediate action within a week of the scandal breaking.
United Kingdom (formal investigation): Ofcom, the UK media regulator, announced a formal investigation on January 12, 2026. The investigation examines whether X violated the UK Online Safety Act by failing to quickly remove illegal content and conduct proper risk assessments before launching significant service changes.
Potential penalties: Ban on X in the UK, fines up to 10% of global revenue, or prohibition on UK companies advertising on the platform.
UK Prime Minister Keir Starmer called the images “disgusting” and “unlawful”, saying X needed to “get a grip”. Technology Secretary Liz Kendall announced the UK would criminalise nudification tools entirely, making it illegal for companies to supply tools that create nude images without consent.
European Union (document preservation order): The EU ordered X to retain all internal documents and data related to Grok through the end of 2026. While this doesn’t signal a formal Digital Services Act investigation yet, it preserves evidence amid compliance concerns.
EU spokesman Thomas Regnier: “This is not spicy. This is illegal. This is appalling. This is disgusting. This has no place in Europe.”
France (criminal investigation): Paris prosecutors expanded their existing investigation into X to include Grok’s generation of child sexual abuse material. Two members of parliament formally reported “the dissemination of sexually explicit deepfakes, notably featuring minors, generated by Grok”.
Indonesia and Malaysia (immediate blocks): Both countries blocked access to X entirely, citing laws against deepfakes and child sexual abuse material. Indonesian Communication Minister Meutya Hafid: “The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space.”
India (mandatory review): The Ministry of Electronics and Information Technology ordered X to conduct a “comprehensive technical, procedural and governance-level review” of Grok, with a January 5 deadline.
Australia (investigation): The eSafety Commissioner received multiple reports of exploitative images and asked X to clarify safeguards, noting it could issue removal notices.
Brazil (suspension request): A member of parliament asked the federal public prosecutor and data protection authority to suspend Grok until an investigation concludes.
Canada (monitoring): Artificial Intelligence Minister Evan Solomon confirmed discussions are underway but said Canada isn’t considering banning X.
What This Means for Your Family’s Decisions
Even if you don’t use X, this scandal affects three decisions parents are making right now.
If you’ve posted photos of your children publicly online: This isn’t just about X. Any photo you’ve posted publicly anywhere can be downloaded and used with Grok. Private accounts offer some protection, but only if you trust everyone who follows you not to screenshot and share. Photos sent in messaging groups can be extracted, and photos on school websites can be downloaded.
You don’t need permission to make different choices about posting your children’s images online. If you’ve been uncomfortable with how much of your child’s life is documented publicly but felt pressure to share like everyone else, this gives you a reason to stop. If you’re fine with the risk because family connection matters more, that’s also your decision to make.
If your child posts their own photos online: The same risk applies to images your teenager posts. Any selfie, group photo, or video showing their face can be downloaded and manipulated.
Having the “don’t post identifiable photos” conversation with your child is harder when they see peers posting constantly. But Grok demonstrates the risk isn’t theoretical. It’s not “this might happen someday.” It’s “this is happening now, at scale, to real people including children.”
Whether you decide to have that conversation, restrict what your child posts, or trust them to manage the risk themselves is your choice. The information exists for you to make that decision with full awareness of what’s possible.
If you’ve been questioning whether platforms self-regulate: New York Governor Kathy Hochul proposed on January 5, 2026 to disable AI chatbot features on social media platforms for children. Critics called it government overreach. Five days later, a major platform’s AI chatbot was generating child sexual abuse material at scale, so the timing matters.
X had months to test Grok before launch. AI safety experts warned in August 2025 the tool functioned as a nudification app. X launched it anyway. When governments demanded action, X made the feature premium-only rather than fixing the underlying problem. This pattern suggests platform companies won’t implement meaningful AI safety measures voluntarily.
If you’ve been hesitant to restrict your child’s access to certain platforms, trust your instincts instead of trusting platform safeguards that don’t exist.
If you’re watching how your country regulates children’s technology: The UK, EU, Indonesia, Malaysia, India, France, Brazil, and Australia all took action within two weeks.
Multiple countries are treating platforms as liable for content generated by their own tools, not just content uploaded by users. If platforms can be held accountable for what their AI features create, expect stricter content filtering on AI tools children access.
Whether that’s government overreach or necessary protection depends on your perspective. But the Grok scandal makes strict AI regulation politically easier to justify everywhere. If your government is considering restrictions on AI features in children’s apps, this incident validates those concerns at the institutional level.
Your family’s decisions don’t need to match what other families are doing. But when eight governments across four continents respond to the same platform failure in the same way, that’s context worth having when you’re making those decisions.
The Complications
X’s response raises questions about whether the company takes the problem seriously. After governments demanded action, X made Grok’s reply feature premium-only for paying subscribers. Users can still generate deepfakes through the standalone Grok app and other interfaces. The change reduced public visibility without solving the underlying issue.
Grok downloads have increased 54% since the scandal broke, according to mobile app tracker Apptopia. Whether users are downloading out of curiosity, to create exploitative content, or for legitimate purposes is unclear. But the controversy hasn’t reduced interest in the tool.
Elon Musk responded to UK threats of blocking X by calling the British government “fascist” and accusing it of censorship. He argued other AI tools can edit images similarly and claimed the UK was singling out X. The UK Prime Minister’s spokesperson rejected the free speech argument: “Our position on free speech is clear. We’re fully committed to the right to free speech.” The distinction appears to be between speech and tools that generate illegal content.
The Take It Down Act, signed by US President Trump in May 2025, criminalises distribution of non-consensual intimate imagery including AI-generated deepfakes. Platforms have until May 2026 to implement request-and-removal systems where victims can have images taken down within 48 hours. The Grok scandal occurred nine months before that deadline, suggesting platforms waited for legal requirements rather than implementing protections proactively.
Age verification remains the missing piece. Grok doesn’t verify whether the subjects of manipulated images are adults or children and it doesn’t verify whether the person requesting the manipulation has consent. Without these basic checks, any AI image editing tool can be misused regardless of its intended purpose.
What Happens Next
Immediate timeline:
- This week: Ofcom completes “expedited assessment” of X’s compliance
- Next few weeks: EU decides whether to launch formal Digital Services Act investigation
- February 2026: Multiple country investigations expected to produce findings
- May 2026: Take It Down Act deadline for platforms to implement removal systems
Longer-term implications: If the UK follows through on banning X, it would set a precedent for democratic countries blocking major social platforms over content safety failures. Indonesia and Malaysia have already done so, but a UK ban would signal that Western democracies consider platform safety violations severe enough to justify access blocks.
The EU’s document preservation order suggests they’re building a case for potential Digital Services Act enforcement. Previous DSA violations resulted in a €120 million fine for X in December 2025. A second violation involving child safety could result in substantially larger penalties.
If Instagram, Snapchat, or TikTok add guardrails to their AI features before being forced to, that suggests the industry learned from Grok’s failure. If they wait for regulatory action, expect similar scandals on other platforms.
For parents: The immediate question is whether photos you’ve posted anywhere online could be manipulated. There’s no technical way to prevent someone downloading your child’s photo from any public source and uploading it to Grok. The only protection is not posting identifiable photos publicly in the first place, or removing photos you’ve already posted.
Whether you decide to delete existing photos, set accounts to private, stop posting your children’s images entirely, or continue as before is your choice to make. Some parents will decide the family connection and memory-sharing matters more than the risk. Others will decide the risk isn’t worth it.
The broader question is whether AI image generation tools should exist without consent mechanisms. Grok demonstrates that even with usage restrictions, these tools can be weaponised faster than platforms respond. What you do with that information is up to you.
When eight governments across four continents take action against the same platform within two weeks, that’s not just regulatory response. That’s institutional recognition that platform self-regulation isn’t working.
Sources
- Tracking Regulator Responses to the Grok ‘Undressing’ Controversy – Tech Policy Press, January 12, 2026
- UK to investigate Elon Musk’s Grok over ‘deeply concerning’ deepfakes – Al Jazeera, January 12, 2026
- International pressure builds on X and Musk over Grok deepfakes – NBC News, January 12, 2026
- Grok blocked in Malaysia and Indonesia as sexual deepfake scandal builds – Fortune, January 12, 2026
- EU flags ‘appalling’ child-like deepfakes generated by X’s Grok AI – Al Jazeera, January 5, 2026