Author: Nilesh Christopher, Valerio Pepe
Published on: 11/07/2025 | 00:00:00

AI Summary:
California Governor Gavin Newsom posted two photographs on X. X users immediately turned to Grok, Elon Musk’s AI chatbot, to fact-check the images, tagging @grok in replies to the tweet in question.

Chatbots, including ChatGPT and Google’s Gemini, are large language models (LLMs) that learn to predict the next word in a sequence by analysing enormous troves of data from the internet. Their outputs reflect the patterns and biases in the data they are trained on, which makes them prone to factual errors and misleading information called “hallucinations”. For Grok, these inherent challenges are further complicated by Musk’s instructions that the chatbot not shy away from political incorrectness.

Our analysis of the 434 replies that tagged Grok in Newsom’s post found that the majority of requests, nearly 68 percent, asked Grok either to confirm whether the images Newsom posted were authentic or to provide context about the National Guard deployment. Notably, a few users lashed out because Grok had made a correction and would not endorse their flawed belief.

Grok was called on 2.3 million times in just one week to answer posts on X. Data accessed by Al Jazeera through X’s API shows how deeply this behaviour has taken root. X is keeping people locked in a misinformation echo chamber, in which they ask a tool known for hallucinating to fact-check for them. Grok incorrectly blamed a trans pilot for a helicopter crash in Washington, DC; claimed the assassination attempt on Trump was partially staged; echoed anti-Semitic stereotypes about Hollywood; and misidentified an Indian journalist.

Grok vs Community Notes

For years, social media users benefited from context on information they encountered online through interventions such as labelling state media or fact-checking warnings. After buying X in 2022, Musk ended those initiatives and loosened speech restrictions.
X piloted the “AI Note Writer”, enabling developers to create AI bots that write community notes alongside human contributors on misleading posts. Researchers say this human-AI system works better than what human contributors can manage alone, and X is trying to bridge the gap by supercharging the pace at which contextual notes are created.

Grok gave inaccurate results on the death toll of the Holocaust, which it attributed to a programming error. In June, Grok cited data from government sources and Reuters. Musk has also chided Grok for not sharing his distrust of mainstream news outlets. X deleted the inflammatory posts later that day, and xAI removed from its code base the guideline telling Grok not to adhere to political correctness. Researchers expressed surprise at the reintroduction of the directive for Grok 4 to be “politically incorrect”, despite that code having been removed from its predecessor, Grok 3.

Original: 2424 words
Summary: 446 words
Percent reduction: 81.60%

I’m a bot and I’m open source

  • over_clox@lemmy.world
    5 days ago

    Oh my the irony, using AI to summarize and criticize another AI…

    Let the AI wars begin! Just leave us fleshbags out of it.