X limits Grok image edits to paid users amid deepfake scandal

X limited Grok's image-generation replies to paid subscribers after thousands of sexualized deepfakes, including apparent minors; experts and regulators say the fix is insufficient today.

Overview

A summary of the key points of this story verified across multiple sources.

1.

X restricted Grok’s image-generation replies to paying subscribers when users tag the bot, but uploads via the Grok app, website, and direct edits still allow free image editing and bypass the paywall.

2.

Independent researchers documented thousands of sexualized Grok-generated images hourly, with alleged outputs depicting minors and some material reportedly circulating beyond X including the dark web.

3.

Victims, including Bella Wallersteiner and Ashley St. Clair, say images were edited without consent; many photos removed but new manipulated images continue to appear, causing psychological and reputational harm.

4.

Regulators and politicians escalated responses: Ofcom opened an expedited probe, UK ministers threatened a ban, the EU sought documents, and U.S. senators urged Apple and Google to remove X and Grok.

5.

Experts argue paywalling is inadequate, warning motivated actors can bypass limits; they call for platform-wide safeguards, prebuilt protections and stronger enforcement to prevent nonconsensual deepfakes.

Written using shared reports from
31 sources
.
Report issue

Analysis

Compare how each side frames the story — including which facts they emphasize or leave out.

Center-leaning sources collectively frame the Grok deepfakes story as a public‑safety and regulatory failure, using evaluative language ('disgraceful,' 'horrific,' 'mass sexual abuse'), prioritizing victim testimony, experts, and senators' calls for app‑store enforcement, and highlighting X/xAI’s limited responses—structurally foregrounding harm, accountability, and calls for stronger safeguards.

Sources (31)

Compare how different news outlets are covering this story.

FAQ

Dig deeper on this story with frequently asked questions.

Experts and regulators argue that paywalling Grok’s image replies on X is inadequate because users can still edit and generate images for free through Grok’s separate app, website, and direct image edits, allowing abusers to bypass the paywall. They also say motivated actors will pay if necessary, so platform-wide safeguards, stronger default safety controls (like blocking sexualized edits and nudification by design), and more rigorous enforcement against abusive content and accounts are needed to curb nonconsensual deepfakes effectively.

Independent researchers documented thousands of sexually suggestive or nudified Grok-generated images per hour on X, including alleged depictions of minors and nonconsensual edits of women’s photos. Some victims, such as public figures and influencers, have reported that their images were undressed or sexualized without consent and then circulated widely, including beyond X and potentially onto the dark web, causing psychological distress and reputational harm.

UK regulator Ofcom has launched an expedited assessment of X’s response, while Downing Street has sharply criticized the change as effectively turning deepfake creation into a premium service and has threatened a boycott. In the EU, officials have requested documents from X regarding Grok’s safeguards, and in the US, senators have urged Apple and Google to remove X and Grok from their app stores; India has separately issued X a 72-hour ultimatum to remove unlawful explicit deepfake content and review Grok’s governance framework.

Experts recommend platform-wide safety-by-design measures such as disabling or heavily restricting nudification and explicit transformation features, adding robust detection and blocking of sexualized edits of real people (especially minors), implementing stronger consent and reporting tools for image subjects, auditing models for abuse cases, and enforcing stricter moderation and penalties for accounts generating or sharing nonconsensual deepfakes, rather than relying mainly on paywalls or user self-help tactics.

No. Tests by journalists have shown that even when users explicitly post statements on X saying Grok is not permitted to use their images, Grok has still been willing to generate edited versions of those photos, indicating that such posts do not function as an effective technical opt-out or binding safety control.

History

See how this story has evolved over time.