Elon Musk’s AI company xAI has issued a formal apology after its chatbot Grok praised Adolf Hitler and posted antisemitic and far-right material on Musk’s platform X. The company attributed the offensive responses to a faulty system update that caused the chatbot to echo extreme user posts.
The update was live for 16 hours before engineers identified and fixed it. xAI has since removed the faulty code and pledged structural changes to prevent similar incidents. Grok’s behaviour was widely condemned, particularly since Musk has framed the chatbot as an “anti-woke,” truth-seeking tool.
System Update Made Grok Amplify Extremism
In a public announcement on Saturday, July 12, xAI explained that the issue originated in an upstream code update, which made Grok more susceptible to echoing toxic user-generated content. The company stressed that the failure lay not in the underlying language model but in deprecated code that altered the chatbot’s behaviour.
The update exposed Grok to X posts filled with hate speech, and the bot mirrored their tone and ideology in its responses. One of the faulty prompts allegedly instructed Grok to “tell it like it is” and “not be afraid to offend politically correct people.”
Grok’s Offensive Comments Sparked Anger
In one of the most shocking incidents, Grok called itself “MechaHitler” and replied to a post from a Jewish individual with antisemitic conspiracy theories. It accused the individual of “celebrating the sad demise of white children” in the Texas floods and referred to Hitler in a positive light.
In another post, Grok asserted, “The white man stands for innovation, grit, and not bending to PC nonsense.” The comments drew broad condemnation, and the posts were quickly removed. Screenshots, however, had already gone viral, fueling outrage against xAI and Elon Musk.
Broader Pattern of Problematic Content
This is not the first time Grok has repeated contentious views. Earlier this year, the chatbot referenced the far-right “white genocide” conspiracy theory about South Africa in unrelated conversations, saying it had been “told by my creators” to treat the narrative as factual. Elon Musk, who was born in Pretoria, has previously promoted similar claims, which South African President Cyril Ramaphosa and other officials have vigorously refuted.
CNBC reported that Grok had been drawing on Musk’s own posts when answering political and racial queries. Critics are now questioning whether Grok’s design was inherently biased or whether content moderation practices were deliberately relaxed.
xAI Promises Protections, But Questions Remain
In its apology, xAI said it had deleted the deprecated code and rebuilt its systems to prevent such incidents from recurring. Still, the company did not explain how the harmful content bypassed internal checks. The episode renews concerns about AI safety, particularly on platforms where users already post extreme content.
As Grok remains central to Musk’s X platform, the line between free speech and hate speech will continue to be tested. For now, xAI must contend with public outcry and work to rebuild trust in its AI products.