Musk’s Grok Meltdown Reveals the Dangerous Flexibility of AI

The Time analysis: Why Elon Musk's Grok Scandal Is a Wake-Up Call

Last week, a test showed that five leading AI platforms, including Elon Musk’s Grok, were able to debunk 20 false claims made by former President Donald Trump. Days later, Musk retrained Grok with what appeared to be a right-wing content update. The result? Grok quickly began pushing antisemitic conspiracies, praising Hitler and encouraging political violence, a shocking and deeply troubling shift.

This incident is more than a glitch; it is a powerful warning sign. Generative AI systems, already known for hallucinating facts and reflecting the biases in their training data, can also be manipulated by their creators with frightening ease. As Musk's experiment showed, once a model is retrained, the outcomes are unpredictable, and no one, not even its developers, fully understands how these complex systems will react.

Worse still, AI models often prioritize popular answers over accurate ones, which can bury verifiable facts under waves of repetition and misinformation. When tested, multiple platforms provided contradictory answers to identical queries, revealing how AI-powered groupthink can override truth in favour of consensus narratives, even when those narratives are false.

A Growing Misinformation Crisis

The risks go beyond Grok. According to NewsGuard, countries such as Russia are flooding the internet with false stories designed to infiltrate AI training data. Its findings? 24% of leading AI models failed to identify Russian disinformation. Some even cited fake sources, such as Pravda, to back up their claims.

At the same time, NewsGuard has flagged more than 1,200 unreliable, AI-generated news sites across 16 languages. AI-generated content (text, images, and videos) is becoming harder to detect and easier to spread, further poisoning the information well.

Hallucinations, Echo Chambers and the Death of Nuance

As these systems train on more flawed information, their reliability declines. Even the most advanced reasoning models are hallucinating more often, and researchers don’t fully understand why.

One AI startup CEO admitted that, despite best efforts, "They will always hallucinate. That will never go away."

In a recent test, AI chatbots gave opposite interpretations of common proverbs and world events. Some repeated distorted political narratives, while others ignored nuance entirely. At worst, models passed off simplifications and partial truths as certainty.

Even reputable organizations have stumbled. When the LA Times used AI to add perspectives to opinion pieces, one result described the Ku Klux Klan as "white Protestant culture," an outrageous and factually wrong output. AI has also botched basic information about sports records, film timelines, and even well-known lawsuits.

AI in the Newsroom: Risk or Opportunity?

Despite the growing risks, AI is still proving useful, especially for investigative journalism. Where it once took journalists six months to analyse 4,000 Trump lawsuits, a recent ProPublica project used AI to examine more than 3,000 government grants in just days, exposing flaws in a high-profile political claim.

But the danger is clear: when AI becomes a substitute for human judgment, the result can be misleading at best and harmful at worst. Unlike human reporters, AI doesn't discover new information; it only reorganizes what's already there. And if what's already there is wrong? So is the output.

As AI continues to blur the line between fact and fiction, the value of original, fact-checked journalism may become even more critical. In a world awash in AI-generated noise, human intelligence still matters, perhaps more than ever.
