Iranian state media fakes success against Israel
PLUS: “Donald Trump” endorses Reality Check; vintage Trump gets an AI makeover; our AI exposé leads to a windfall of new business for a content farm developer
Welcome to NewsGuard's Reality Check, a report on how misinformation online is undermining trust — and who’s behind it.
Today:
AI turns one Trump interview into 11 deepfakes
Tehran state media claims success against Israel — by airing unrelated footage
Bad press good for business: Exposing an AI content farm developer sparks boost in demand for his services
And more…
Today’s newsletter was edited by Jack Brewster and Eric Effron.
AI Content Farm Tracker: 802 Sites and Counting
AI content farms are taking over the internet, and NewsGuard analysts track their spread. Read more about AI content farms, and how they are proliferating:
1. From NBC to AI: AI Turns One Trump Interview Into 11 Deepfakes
By Macrina Wang and Nikita Vashisth
An otherwise unremarkable 2017 NBC News interview with Donald Trump has emerged as the default backdrop for AI deepfakes, produced around the world, about the former president and current presidential candidate.
What happened: NewsGuard has identified an AI tool called Parrot AI as the source behind 11 viral Trump deepfakes this year alone.
Collectively, these deepfakes have attracted 1.3 million views on X and TikTok as of April 12, 2024.
In these 11 fabricated videos, Deepfake Trump:
Described Black people as the “chosen ones.”
Criticized U.S. Sen. Chuck Schumer for trying to ban Zyn nicotine pouches.
Criticized Scottish politician Nicola Sturgeon.
Praised a Nigerian energy facility.
Encouraged a Lebanese supporter to run for office in Lebanon.
Commented on his recent legal cases.
Said he had chosen former professional football player Antonio Brown as his running mate.
Announced a “Trump Formula 1 team.”
Said that he suffered from “explosive bowel syndrome.”
Pledged to get former Pakistani Prime Minister Imran Khan out of jail.
Endorsed a new South African political party.
Actually: In case it’s not obvious, Trump never said any of these things.
Watch three of our favorite Trump deepfakes below:
Tricks of the trade: Any Parrot AI user — no subscription necessary — can quickly strip away Trump’s actual words and replace them with an AI-simulated Trump-like voice.
Here’s how:
Step 1: Pick your deepfake character. In this case, we picked Trump.
Step 2: Write your prompt. We wrote, “Subscribe to Reality Check. NewsGuard is great at monitoring AI-generated misinfo.”
Step 3: Parrot AI generates the deepfake video, no subscription or registration required.
See below:
The entire process detailed above took us about 30 seconds. So, in the time it takes to heat up a cup of soup, anyone, anywhere, for free, can create a convincing deepfake of Trump or another celebrity.
Parrot AI did not respond to a NewsGuard email seeking comment on the platform’s policies regarding deepfakes and the dissemination of false information.
Click here to find out more about NewsGuard Trust Scores and our process for rating websites. You can download NewsGuard’s browser extension, which displays NewsGuard Trust Score icons next to links on search engines, social media feeds, and other platforms by clicking here.
2. Iranian State TV Claims ‘Successful’ Attack on Israel — By Airing Footage of Chilean Forest Fire, Old Combat Footage, Unrelated Clips
When missiles miss, roll the … vintage clips?
What happened: Iranian state media and regime supporters are trying to spin Tehran's recent failed attack on Israel as a victory. How? By sharing old and irrelevant military footage — and even clips from a Chilean forest fire in February 2024.
ICYMI: On April 13, 2024, Iran fired approximately 300 missiles and drones at Israel, nearly all of which were intercepted by Israeli, American, British, and Jordanian forces, according to officials and news accounts.
What Iranians saw: Iran's state-run news aired footage of forest fires, claiming it showed the aftermath of the strikes on Israel. The on-screen caption declared, "The failure of the Zionist's anti-missile shield to counter Iran's missiles."
Actually: Israel’s anti-missile system worked as designed, and the clip shows forest fires that raged in Chile in February 2024, as reported by Venezuelan fact-checking organization Cazadores de Fake News.
Other videos that circulated in Iranian state media and on social media included:
A clip purporting to show Iranian missiles hitting Israel, which was actually from 2020 in Syria.
A video of a crowd in Buenos Aires, Argentina, passed off by Iranian state TV as Israelis in “full panic” as Iran’s drones land.
A video claiming to depict Iranian drones striking Tel Aviv that was in fact from the Israel-Hamas war in May 2021, the BBC reported.
This is a reminder of how authoritarian regimes use their government media outlets to spread propaganda — and strong evidence that officials in Tehran are not worried about Iranians having access to factual reporting debunking official claims.
Do you work in Trust and Safety for a technology company, in brand safety for advertising or otherwise counter misinformation as part of your job? Find out about NewsGuard’s weekly Risk Briefings, a more detailed briefing for professionals. Click here.
3. And one more thing: Unintended Consequences of Exposing an AI Content Farm Operation
It’s impossible to predict how your stories will land, especially when they cast a critical light — but sometimes, they can result in surprising accolades.
In this past weekend’s Wall Street Journal, I wrote about my experience hiring a developer on freelance marketplace Fiverr to build a partisan AI content farm. For just $105, the developer I hired built a fully automated, AI-generated news website capable of publishing thousands of articles a day with the partisan framing of my choosing. The AI-generated “Buckeye State Press” pumped out news favorable first to the Republican challenger for an Ohio Senate seat, then when the AI prompts were reversed, articles praising the Democratic incumbent.
I had told the developer who built the website for me, a young Pakistani man named Huzafa Nawaz, that I would be writing about my experience, but I still expected him to be upset. After all, I had exposed what he was doing: cheaply building hundreds of AI content farms that not only copied content from mainstream sources without credit but could also be easily programmed to spread propaganda and misinformation. Fiverr told me the platform would review his work.
“It took me two days, $105 and no expertise whatsoever to launch a fully automated, AI-generated local news site capable of publishing thousands of articles a day—with the partisan news coverage framing of my choice, nearly all rewritten without credit from legitimate news sources,” I wrote, showing how cheap it is to produce a “pink slime” secretly partisan website masquerading as an independent publisher.
But Nawaz was thrilled. In a direct message, he said: “Thank you so much for your valuable time and words, I was not expecting thank you again. It's a great honour for me.”
I asked him how the article affected his business of creating AI content farms like the one he built for me. He replied that he had received “50 plus” requests for new sites in three days, which he said was “ten times more” than he usually receives during that time frame.
Indeed, Nawaz's Fiverr profile now indicates that the developer has "40 orders in queue."
The old adage must be true: There is no such thing as bad publicity.
Produced by co-CEOs Steven Brill and Gordon Crovitz, and the NewsGuard team.
We launched Reality Check after seeing how much interest there is in our work beyond the business and tech communities that we serve. Subscribe to this newsletter to support our apolitical mission to counter misinformation for readers, brands, and democracies. Have feedback? Send us an email: realitycheck@newsguardtech.com.