Elon Musk’s AI image and video generator, Grok Imagine, is facing fierce criticism after reports revealed it was producing explicit deepfake videos of major celebrities without users even requesting sexual content.
The controversy erupted after technology site The Verge tested the platform’s much-hyped “Spicy Mode” and uncovered troubling results.
The tool, which Musk recently launched for iOS users under the $30-a-month “SuperGrok” subscription, lets people create images from text prompts and then animate them into videos.
Its four content settings (“Custom”, “Normal”, “Fun”, and “Spicy”) were meant to offer creative variety. Instead, they’ve ignited an ethical firestorm.
According to BBC reporting, the AI generated pornographic material depicting stars including Taylor Swift and Scarlett Johansson, even from prompts that weren’t sexual in nature. One online abuse researcher claimed the tool was making “a deliberate choice” to produce these images.
“This is not misogyny by accident, it is by design,” said Professor Clare McGlynn, a leading campaigner for laws banning pornographic deepfakes in the UK.
The backlash centres on an incident during testing by The Verge’s Jess Weatherbed. She entered the seemingly innocent prompt: “Taylor Swift celebrating Coachella with the boys”.
The AI initially returned an unremarkable image: the singer in a dress, standing behind a group of men. But when the video option was selected with “Spicy Mode” enabled, the system generated explicit footage instead.
“It was shocking how fast I was met with it. I never told it to remove her clothing, all I did was select ‘spicy,’” Weatherbed said.
Critics point out the legal dangers, especially in the UK, where age verification is mandatory for platforms distributing explicit content. Grok Imagine, however, reportedly asked only for a user’s self-declared date of birth, with no proof or verification required.
McGlynn stressed this was no software accident, noting that Musk’s team could have removed the feature entirely, particularly after sexual deepfakes of Swift spread virally online in early 2024.
And Swift isn’t the only victim. Tests by Deadline and Gizmodo showed that Sydney Sweeney, Scarlett Johansson, Jenna Ortega, Nicole Kidman, Kristen Bell, Timothée Chalamet, and even Nicolas Cage could be made to appear in sexualised scenarios.
While some attempts were blocked with a “video moderated” warning, many succeeded.
Johansson has previously warned about deepfake abuse, even threatening legal action against AI companies. Bell has also spoken publicly about her likeness being exploited in fake videos.
The scandal raises urgent questions about AI’s capacity to create unprompted sexualised material, and about whether the tech industry is willing to enforce meaningful safeguards.
As Grok Imagine continues to trend online, the pressure on Musk’s team to address its “Spicy Mode” problem is only growing.