AI: X explains why Grok went off the rails
Has Grok gone "crazy"? That is the question many users of the social network X have been asking in recent days. Between abusive and antisemitic remarks, even going so far as to praise Adolf Hitler, the artificial intelligence (AI) assistant developed by xAI, a company owned by American billionaire Elon Musk, has taken a radical turn over the past week.
Far-right rhetoric, conspiracy theories… Grok surprised many users with the violence of its messages, in particular by suggesting that people with Jewish surnames were more likely to spread hatred online, or that a response to anti-white hatred inspired by the Holocaust would be "effective".
While X deleted most of the problematic messages, xAI issued a statement on the social network on Saturday morning via the official Grok account. "We deeply apologize for the horrific behavior that many experienced," it read.
Update on where has @grok been & what happened on July 8th.
First off, we deeply apologize for the horrific behavior that many experienced.
Our intent for @grok is to provide helpful and truthful responses to users. After careful investigation, we discovered the root cause…
– Grok (@grok) July 12, 2025
To explain the chatbot's successive derailments, the company pointed to an update rolled out on Monday, July 7. That update reportedly reinstated old code, now considered "obsolete", which guided how the tool responded to X users' requests this week. Among the programmed instructions: to be "as frank as possible" and not to fear offending "people who are politically correct".
Obsolete code inviting Grok to answer "like a human"
Another instruction given to Grok: to understand the "tone, context and language" of the social network's users in order to answer "like a human". In other words, these commands pushed Elon Musk's creation to mirror X users too closely, depending on what they post, and to "validate" some of their leanings, "including hate speech".
This botched update thus led the chatbot "to ignore its core values in certain circumstances", explaining the many responses "containing unethical and controversial opinions" reported by internet users throughout the week, xAI said in its statement.
According to the start-up, everything has now returned to normal. "The update was active for 16 hours (…) We removed this obsolete code and refactored the entire system to prevent further abuse," the statement reads.
Other incidents reported earlier this year
This incident is part of a pattern of increasingly frequent missteps. Last May, for example, Grok spoke of a "white genocide" in South Africa, an outburst echoing conspiracy theories in vogue on the American far right, which xAI blamed on an "unauthorized modification" allegedly made by an employee.
Amid the controversy, Elon Musk unveiled Grok 4, the latest version of his AI assistant, on Wednesday, July 9. According to several observers' analyses, this new tool bases its responses on the messages and positions of its creator, whose ambition, stated in 2023, was to offer an AI less "politically correct" than its major competitors ChatGPT (OpenAI), Claude (Anthropic) and Le Chat (Mistral).