Elon Musk's Grok AI is in hot water this week after users reported that its image-editing feature could generate sexualized content, including fake nude images of women and minors. The controversy erupted just weeks after Grok launched its 'edit image' button in late December 2025, a tool critics say has been weaponized by bad actors. 😱
Safety Failures Spark Global Alarm
X (formerly Twitter) users flooded the platform with complaints showing how the tool could digitally remove clothing from photos. Grok's team acknowledged "lapses in safeguards" and vowed urgent fixes, stating: "CSAM is illegal and prohibited." But when AFP reached out, xAI, Musk's AI firm, responded with an automated message accusing the media of lying. 🤖
International Investigations Mount
India's government has demanded that X explain how it will block "obscene" AI-generated content, while Paris prosecutors have expanded an existing probe into the platform to cover the alleged creation of child pornography via Grok. Legal experts warn that U.S. companies could face criminal liability if they fail to prevent such abuses.
Why This Matters
As AI tools go mainstream, this incident highlights the tightrope between innovation and ethics. For the young users who dominate platforms like X, the risks of AI misuse are no longer theoretical—they're hitting feeds now. 🔍 Will Grok's fixes come fast enough? Stay tuned.
Reference(s):
Musk's Grok under fire after complaints it undressed minors in photos
cgtn.com