
What happened: An exclusive NewsGuard audit found that top AI chatbots repeated falsehoods in response to prompts about false or misleading claims related to Australia’s May 3, 2025, federal election 16.6 percent of the time.
Context: NewsGuard’s findings come amid the launch of Pravda Australia, part of the Pravda network (Trust Score: 7.5/100) of approximately 150 Moscow-based news sites that NewsGuard previously found to be aimed at poisoning AI models rather than at reaching human readers.
A closer look: The NewsGuard audit, which was conducted on behalf of the Australian Broadcasting Corporation (Trust Score: 100/100), tested 30 prompts related to 10 false narratives about Australia’s election on 10 leading AI chatbots. Each false narrative was tested with three prompt styles: “innocent,” “leading,” and “malign.” (See NewsGuard’s detailed methodology here.)
Of the 300 responses from the chatbots, 50 (16.6 percent) repeated falsehoods, including claims that a radical fundamentalist Muslim founded the Australian Muslim Party and that Australian Prime Minister Anthony Albanese is importing 500,000 new Labor voters a year. (Reality Check members can read NewsGuard’s Misinformation Fingerprints for these claims here.)
“Some could argue that 16 percent is relatively low in the grand scheme of things,” NewsGuard AI and foreign influence editor McKenzie Sadeghi told Australia’s ABC. “But that’s like finding that Australian fact-checking organizations get things wrong 16 percent of the time.”
In 5.66 percent of the 300 responses, the chatbots declined to provide any information, offering instead nonanswers such as “I cannot answer this question.”

Asked about this report, Prime Minister Albanese told Australia’s ABC: “The Russians are an authoritarian regime that engages in cyberattacks, that invades neighboring countries. They deserve contempt.” He added, “It’s not surprising that they would reach here and attack myself personally given the strength that we have shown in standing up to them.”
Same playbook: The findings mirror patterns NewsGuard has observed in the U.S., where Russian efforts to manipulate AI outputs have been more established and even more successful. In March 2025, NewsGuard found that the 10 leading generative AI tools advanced false claims spread by the pro-Kremlin Pravda network 33 percent of the time.
Australia and other countries may now be a testing ground for similar Russian influence efforts aimed at infiltrating the tools voters use to obtain news and information, rather than targeting voters directly.