Reality Check Commentary: I Unintentionally Boosted an AI “Content Farm” Business. I Don’t Regret It.
A journalist’s role is to report information. Sometimes there are unintended consequences, NewsGuard Enterprise Editor Jack Brewster writes.
Welcome to Reality Check, your inside look at how misinformation online is undermining trust — and who’s behind it.
By Jack Brewster, NewsGuard Enterprise Editor
When I set out to expose how easy it is to build an AI-powered propaganda machine online, I never expected to become a promoter of the very problem I was trying to highlight. That’s what happened, and yet, I don’t regret it. Let me explain.
Last spring, my NewsGuard colleagues and I were curious about how easy it had become to use online tools to create biased news sites powered by AI misinformation. In April, The Wall Street Journal published an article I wrote detailing how, for just $105, NewsGuard hired a developer on the freelancer platform Fiverr to build my own AI “content farm” — a term for a site that automatically churns out low-quality articles at large scale. The point was to reveal how quickly and inexpensively anyone can now launch a site capable of producing thousands of articles a day, designed to push one political side or the other and powered by AI-generated misinformation.
The article underscored a growing concern about the democratization of AI, in which anyone with a laptop, an internet connection, and a little cash can manufacture and disseminate misinformation on a massive scale.
But three months after the article was published, I found myself confronting an unintended consequence: My reporting had fueled the phenomenon I aimed to expose. The developer of my AI site, a Pakistani man named Huzafa Nawaz, messaged me on Fiverr and said: “Hi Jack, thank you again for your article on WSJ. It helped me to meet my yearly target in three months. Thank you again.”
A closer look at Nawaz’s Fiverr profile confirms his claim. When the article was published, Nawaz had 293 reviews; a few months later, he had 368 — meaning he had built at least 75 new AI-generated news sites since my Wall Street Journal article appeared in April, roughly one every two days.
Some of his clients credited The Wall Street Journal article for their decision to hire Nawaz. In June 2024, a U.S.-based user, @buruusu, wrote in a review, “I learned about Mr. Huzafa Nawaz from an article that was written in the Wall Street Journal.” Similarly, in August 2024, @dglandarch wrote, “I saw the Wall Street Journal article, and this was exactly what I needed.”
The experience prompted some soul-searching on my part, forcing me to examine how journalism that exposes an unsavory practice can inadvertently boost that same practice. But I have concluded that this is a price worth paying.
It may seem counterintuitive to give publicity to someone creating tools that spread misinformation, especially considering that at NewsGuard, our core mission is to help expose and counter misinformation. But explaining how AI is being used to manipulate the news ecosystem is necessary to help readers recognize the dangers and get the tools they need to know who’s feeding them the news. Sometimes, as with boosting the business of a grateful Huzafa Nawaz, this means living with the unintended consequences.
Jack Brewster is the Enterprise Editor of NewsGuard. He previously worked at Forbes as a politics reporter.
We launched Reality Check after seeing how much interest there is in our work beyond the business and tech communities that we serve. Subscribe to this newsletter to support our apolitical mission to counter misinformation for readers, brands, and democracies. Have feedback? Send us an email: realitycheck@newsguardtech.com.