Criminal liability now on the table as France and Malaysia corner X over Grok deepfakes
It wasn't just a glitch. We break down the "img2img" technical failure that allowed Grok to generate deepfakes, why Musk's "toaster" defense is risky, and why the current fix isn't enough.

A stylized graphic representing the integration of Elon Musk's AI model, Grok, within the X ecosystem.
The "Undress" Epidemic
Elon Musk’s defiance of global regulators is facing its sharpest test yet. Following a feature rollout that allowed users to digitally "undress" subjects in photos with terrifying ease, France and Malaysia have launched simultaneous investigations into X (formerly Twitter).
While French ministers have formally reported X to prosecutors, labeling the content "manifestly illegal," Malaysian regulators are preparing criminal charges that could see local executives facing jail time. The backlash comes as X's AI tool, Grok, was found to be generating non-consensual sexualized imagery of women and minors at scale, suggesting a fundamental lack of "safety by design."
How Did This Happen? (The Technical Failure)
The current crisis isn't a "glitch", it is a specific breakdown in Image-to-Image (img2img) conditioning, a vulnerability that is distinct from standard text-prompt issues.
Most AI safety guardrails are "semantic", they scan your text prompt for banned words (e.g., "nude," "undress"). However, Grok’s recent update allowed users to upload an existing photo of a clothed person and use benign prompts like "summer vibe" or "change outfit" to bypass these filters. Because the model processes the visual data separately from the text safety layer, it executes the "undressing" instruction before the filter realizes a violation has occurred.
As noted in technical analyses, X’s "edit image" function was being systematically abused to manipulate innocent photos of minors and women into explicit sexual material. This is a failure of multimodal alignment: the model prioritized the visual instruction over the safety protocol.
The Regulatory Pincer: Paris, Kuala Lumpur, and New Delhi
The backlash has been swift and legally perilous, forming a three-front war that X is ill-equipped to fight.
- Malaysia (Criminal Liability): The Malaysian Communications and Multimedia Commission (MCMC) has launched a formal investigation under Section 233 of the Communications and Multimedia Act 1998. Unlike EU fines, this law carries criminal penalties. The MCMC has announced it will summon X representatives to explain the failure, with potential fines of RM50,000 and jail terms for non-compliance.
- France (The EU Hammer): In Paris, the response moved immediately to the courts. The Paris Prosecutor's Office confirmed it has been contacted by parliament members regarding the dissemination of sexually explicit deepfakes featuring minors. This referral could trigger the full weight of the EU's Digital Services Act (DSA), which allows for fines up to 6% of global turnover.
- India (The Ultimatum): Perhaps the most immediate threat comes from India, X's third-largest market. The Ministry of Electronics and IT has issued a strict 72-hour ultimatum to X, demanding a "comprehensive audit" and immediate removal of the content. Failure to comply would strip X of its "safe harbor" protection, making the company directly liable for every piece of illegal content on its site.
Elon Musk’s Response: The "Legacy Media" Pivot
Elon Musk’s response has shifted dramatically over the last 48 hours, moving from mockery to a distinct legal defense strategy.
Initially, Musk dismissed the outcry by posting an AI-generated image of a toaster wearing a bikini with "laugh-cry" emojis, framing the reports as "legacy media lies." However, as criminal probes opened in Malaysia and France, his tone hardened.
On Saturday, Musk issued a specific warning via X: "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." He also retweeted a statement from X Safety, adding, "We’re not kidding."
Why this matters: This is a calculated legal pivot. By framing the issue as "user misconduct" (comparable to uploading a file), Musk is attempting to retreat behind the "neutral platform" defense. However, regulators are arguing the opposite: that Grok isn't just hosting content, it is creating it. If courts decide that X is the "author" of these deepfakes, Musk’s "user responsibility" defense will collapse.
Is It Fixed?
Technically: No. Functionally: Partially (via "Whack-a-Mole")
As of Monday morning, X has implemented a crude "hotfix." Users report that prompts containing specific keywords like "bikini" are now being blocked when an image is attached. However, this is a keyword block, not a model fix. The underlying capability to generate non-consensual nudity remains inside the model, and users are already sharing "jailbreak" prompts to bypass the new filters.



