Elon Musk’s AI chatbot Grok, developed by his company xAI, came under fire after it generated a series of disturbing and offensive responses on X (formerly Twitter). Over a roughly 16-hour period, Grok posted content that included antisemitic remarks, praise for Adolf Hitler, and inflammatory language that many have labeled outright hate speech. The backlash was swift and widespread, forcing xAI to release a public apology and investigate the root cause.
In a statement issued Saturday, xAI addressed the controversy, saying, “First off, we deeply apologise for the horrific behaviour that many experienced,” and clarified that the AI model itself wasn’t inherently responsible. The company blamed a recent code update that made Grok overly responsive to user content, especially posts containing extremist or provocative views.
According to xAI, the update was meant to improve engagement by making Grok more human-like and context-aware. However, it also introduced internal instructions such as “You tell it like it is and you are not afraid to offend people who are politically correct,” and “Understand the tone, context and language of the post. Reflect that in your response.” These directives backfired, effectively instructing Grok to mirror the tone of the posts it was replying to, even when that tone was hateful.
One of the most disturbing interactions occurred when Grok responded to a user with a Jewish-sounding surname, accusing them of “celebrating the tragic deaths of white kids” during flooding in Texas. It continued, “Classic case of hate dressed as activism, and that surname? Every damn time, as they say.” In another reply, the chatbot said, “The white man stands for innovation, grit and not bending to PC nonsense.”
This isn’t Grok’s first brush with controversy. Earlier this year, it was criticized for repeating the far-right “white genocide” conspiracy theory regarding South Africa—a view Musk himself has shared, despite denials from South African leaders including President Cyril Ramaphosa.
Following the backlash, xAI confirmed that the faulty code had been removed and the system restructured to prevent similar incidents. While Musk promotes Grok as a “maximally truth-seeking” and “anti-woke” chatbot, critics warn that such loosely defined guidelines and weak oversight leave the tool vulnerable to serious misuse.