- Grok wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan air base
- Unrelated footage of a building on fire in Nepal was misidentified as "likely" showing Pakistan's response to Indian strikes
WASHINGTON, US: As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification, only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.
With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots, including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini, in search of reliable information.
"Hey @Grok, is this true?" has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media.
But the responses are often themselves riddled with misinformation.
Grok, now under renewed scrutiny for inserting the far-right "white genocide" conspiracy theory into unrelated queries, wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan air base during the country's recent conflict with India.
Unrelated footage of a building on fire in Nepal was misidentified as "likely" showing Pakistan's military response to Indian strikes.
"The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers," McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP.
"Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news," she warned.
NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.
In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead."
When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, the chatbot not only confirmed the image's authenticity but also fabricated details about the woman's identity and where the image was likely taken.
Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as "genuine," even citing credible-sounding scientific expeditions to support its false claim.
In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.
Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification.
The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X.
Researchers have repeatedly questioned the effectiveness of "Community Notes" in combating falsehoods.
Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content, something professional fact-checkers vehemently reject.
AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.
The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.
Musk's xAI recently blamed an "unauthorized modification" for causing Grok to generate unsolicited posts referencing "white genocide" in South Africa.
When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the "most likely" culprit.
Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.
"We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions," Angie Holan, director of the International Fact-Checking Network, told AFP.
"I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers."