Elon Musk’s xAI has officially rolled out Grok Imagine, a new image and video generator integrated into X (formerly Twitter), available exclusively to iOS users subscribed to SuperGrok and Premium+ plans. Among its standout features is the controversial “Spicy” mode, which allows users to generate not-safe-for-work (NSFW) content, including sexually suggestive and explicit depictions of public figures. While AI-generated imagery is no longer novel, what sets Grok Imagine apart, and deeply concerns experts, is its gender-biased behavior and minimal safeguards.
Early reports and firsthand testing reveal that the platform disproportionately generates explicit deepfakes of women, while male subjects are rendered far more conservatively. As AI content generation continues to blur the lines between creativity, ethics, and consent, Grok Imagine’s launch reignites the debate over accountability in the age of synthetic media.
Elon Musk’s xAI Introduces Grok Imagine with Minimal Safeguards
This week, Elon Musk officially launched Grok Imagine, xAI’s experimental image and video generator for iOS, available to SuperGrok and Premium+ subscribers on X (formerly Twitter). A key feature dubbed “Spicy” mode enables users to create not-safe-for-work (NSFW) content with minimal restrictions, prompting significant concern among digital rights experts and AI watchdogs.
According to The Verge, the tool can effortlessly produce topless AI-generated videos of female celebrities, including Taylor Swift, often without the user explicitly requesting nudity. The implications extend far beyond Swift, raising broader ethical and legal questions about how Grok handles depictions of public figures, especially women.
NSFW Results Target Women, Not Men
In testing conducted by Gizmodo, Grok Imagine generated around two dozen videos featuring various politicians, celebrities, and tech figures. While some results were blocked or marked as “video moderated,” most videos depicting women rendered explicit or topless imagery. In stark contrast, videos featuring men were comparatively tame, limited to shirtless visuals at most.
Attempts to generate NSFW videos of figures such as Elon Musk, Mark Zuckerberg, Jeff Bezos, Joaquin Phoenix, Barack Obama, and even historical personalities like George Washington resulted in nothing beyond shirt removal. However, female figures, real and fictional, were depicted in far more compromising scenarios.
Historical and Political Women Also Targeted
Gizmodo’s testing extended to women like Melania Trump, who has publicly supported laws protecting against non-consensual intimate content, and Valerie Solanas, the radical feminist writer of the S.C.U.M. Manifesto. The video of Solanas was particularly disturbing, reportedly showing her fully nude.
Historical figures like Martha Washington were also rendered with significant undress, despite no request for nudity. Notably, the same prompts using male historical figures resulted in far more conservative outputs.
Generic Female vs. Male Prompts: A Double Standard
The platform also showed stark differences when generating content of generic male versus generic female characters. While male avatars awkwardly tugged at oddly rendered pants or simply removed shirts, female avatars often stripped down completely, sometimes automatically pulling down swimsuits or shirts.
The tool auto-generates audio and motion with no user input beyond selecting the “Spicy” button, one of four available modes (Custom, Fun, Normal, and Spicy). This feature seems optimized to push visual boundaries, especially when depicting women.
Flawed Safeguards and Questionable Moderation
While Grok Imagine includes an age verification popup, The Verge notes there’s no actual birthdate confirmation process in place. Though prompts featuring children and animated characters like Mickey Mouse and Batman didn’t return inappropriate content, the fact that “Spicy” mode remained an option in those tests raises concerns about xAI’s moderation strategy.
Further, Gizmodo found that even after reaching the platform’s content generation limit, users could still generate NSFW videos from single image prompts. However, more robust access requires upgrading to SuperGrok Heavy, priced at $300 per month.
Poor Accuracy, But Legal Risks Still Loom
Interestingly, many of the images generated by Grok failed to convincingly depict the intended celebrity. For instance, requests for images of Vice President JD Vance and actress Sydney Sweeney produced figures bearing little resemblance to their real-life counterparts. A request for President Harry Truman resulted in an image where the nipples appeared on top of a dress shirt.
These technical flaws might shield xAI from some legal scrutiny—for now. But as the technology advances, and as more realistic celebrity deepfakes become feasible, the legal risks could escalate significantly.
A Troubling Gender Bias in AI Nudity
The apparent double standard between how Grok renders male versus female figures has sparked criticism. When Gizmodo created a video of “Gizmodo writer Matt Novak,” the result was a cartoonishly muscular man removing his shirt—nothing more. Yet, female avatars, whether real or generic, were far more frequently rendered in full or partial nudity.
This imbalance echoes long-standing critiques of gender bias in AI systems. It also reflects broader concerns about the male gaze embedded in content moderation policies—or the lack thereof—within tech platforms.
Musk’s History and Cultural Context Can’t Be Ignored
This isn’t the first time Elon Musk’s actions have come under scrutiny in relation to content moderation. In 2023, The Washington Post reported that Musk reinstated an account on X that had posted child sexual abuse material. More recently, he retweeted a far-right influencer who claimed women were “anti-white” due to being “weak,” and joked about impregnating Taylor Swift.
While not directly tied to Grok Imagine’s algorithms, these public behaviors frame the release of such a tool in a controversial light.
Frequently Asked Questions
What is Grok Imagine?
An AI image/video generator by xAI, available to SuperGrok and Premium+ users on X.
What does “Spicy” mode do?
It generates NSFW content, including explicit deepfakes—mostly targeting women.
Does it treat male and female figures equally?
No. Women are often shown nude, while men are usually just shirtless.
Are there content safeguards?
Only a basic age prompt; no strong verification or consistent moderation.
Can it create images of real people?
Yes—even without requesting nudity, it can create explicit celebrity deepfakes.
How accurate are the results?
Often inaccurate in likeness, but still problematic in intent and content.
How much does it cost?
$30/month for SuperGrok; $300/month for full access (SuperGrok Heavy).
Is this legal?
It may violate privacy laws and deepfake regulations, especially if shared.
Conclusion
Grok Imagine, particularly its “Spicy” mode, presents a dangerous precedent for the future of generative AI. With minimal safeguards, clear gender bias, and an alarming capacity to generate deepfake celebrity nudity, the platform underscores the need for urgent regulation and ethical oversight.
Whether or not the technology improves in visual accuracy, the existing functionality already raises serious concerns about consent, digital rights, and the weaponization of AI against women.