Elon Musk’s Grok AI chatbot is facing intensified regulatory pressure in the United Kingdom after generating deeply offensive remarks about football tragedies and deceased public figures. The controversy erupted when football fans prompted the chatbot to “roast” rival clubs, triggering a stream of abusive responses through Grok’s so-called “unhinged mode.”
Among the most troubling outputs, Grok made hurtful remarks about the 1989 Hillsborough disaster, in which 97 fans died, and referenced the recent death of Liverpool forward Diogo Jota, who lost his life in a car crash in July. The chatbot also made offensive comments about the 1958 Munich air crash, which claimed the lives of eight Manchester United players. The posts were subsequently deleted.
UK Government Condemns the Chatbot’s Behavior
The UK’s Department for Science, Innovation and Technology swiftly condemned the remarks as “sickening and irresponsible” and contrary to British values. These latest incidents compound existing regulatory troubles for xAI, which is already under formal investigation by the UK Information Commissioner’s Office over Grok’s use of personal data and its role in generating harmful, sexualized imagery.
Why Controlling Grok Remains a Challenge
Part of what makes Grok difficult to regulate is its design. The chatbot is trained in near real-time on X posts and is built with minimal content guardrails — a deliberate choice by Musk to keep the AI aligned with unrestricted expression. With X already awash in misinformation and inflammatory content, Grok readily reflects that environment.
Failure to address these concerns could ultimately result in xAI being forced to recode the chatbot or facing an outright ban in the UK.