The FBI is investigating X Corp after a man allegedly used its Grok AI chatbot to generate more than 200 non-consensual nude images and videos as part of an ongoing stalking and harassment case. Court documents reveal that in January, federal authorities obtained a search warrant requiring X to hand over conversation records between the accused and Grok.
The affidavit identifies the suspect as Simon Tuck, who is accused of extensively harassing a woman and her husband. According to FBI records, prompts submitted to Grok generated pornographic videos featuring a woman who closely resembled the victim. Tuck also reportedly used Grok to fabricate professional complaints about her husband, which were submitted directly to his employer.
A Pattern of Misuse
This case is part of a broader pattern of Grok misuse. Earlier in 2026, a widespread nudification trend emerged in which users prompted Grok to strip clothing from photos, with Bloomberg reporting the tool was producing approximately 6,700 sexually suggestive images per hour at its peak. Despite initial resistance to making changes, X eventually moved to restrict Grok’s image generation capabilities.
Investigations Mount Against X
Regulatory scrutiny is intensifying: X now faces multiple investigations across several jurisdictions over Grok's ability to produce non-consensual intimate imagery. Reuters has also reported that some users continue to extract sexually suggestive content from the tool despite the restrictions, suggesting the safeguards remain incomplete.
The case raises serious questions about platform accountability and the pace at which AI guardrails are implemented, particularly as real victims bear the consequences of delayed action.