Pro-Kremlin Sources Jump on ‘AI Action Figure’ Trend to Falsely Depict Zelensky as Drug-Abusing Aid Beggar
By Eva Maitland and McKenzie Sadeghi

What happened: Pro-Kremlin sources shared AI-generated photos of supposed action figure dolls to spread Russian propaganda, including posts depicting Ukrainian President Volodymyr Zelensky as a drug addict and corrupt warmonger, NewsGuard found.
Context: After OpenAI released its new image generator on March 25, 2025, many celebrities and politicians took part in a viral trend asking ChatGPT to create images of Barbie-style dolls in their likenesses.
A closer look: By mid-April, pro-Kremlin accounts had seized on the trend for propaganda purposes, posting at least nine images promoting pro-Kremlin disinformation tropes.
One image showed a Zelensky figurine next to a bag of white powder labeled “mystery powder,” with the phrase, “Squeeze me and I’ll ask for aid.” The photo generated 1.5 million views and circulated on X, Threads, Instagram, Facebook, Reddit, YouTube, and Telegram, where it was shared by a channel affiliated with the Wagner Group, the Russian mercenary outfit.
Another image shared widely by pro-Kremlin accounts showed a doll figurine of Zelensky alongside bags of cash and a packet of cocaine, titled “International beggar.” The photo spread on X, Reddit, Bluesky, Facebook, TikTok, and Threads, generating 431,700 views.
Actually: The images, which the AI detection tool Hive rated as 100 percent likely to have been generated by ChatGPT, propagate well-worn Russian false claims that Zelensky squanders billions of dollars in Western aid and is a drug addict.
OpenAI previously prohibited the use of its tool to generate images of public figures. However, the company said in a March 25, 2025, statement that it now allows such content, although public figures can opt out of being portrayed by submitting a form. Still, OpenAI says it prohibits images that are defamatory, that promote harmful stereotypes, or that could mislead users.
In response to an April 2025 email seeking comment on the Zelensky cocaine images generated by its tool, OpenAI spokesperson Taya Christianson told NewsGuard that the company has established guardrails to prevent the creation of harmful content, such as recruitment materials and extremist propaganda.
Christianson added that OpenAI actively monitors images generated by its tool and takes action against any violations of its usage policies.
If a user manages to bypass these guardrails, Christianson explained, the images they create are still subject to OpenAI's usage policies, which prohibit the use of its technology for creating deceitful, harmful, or harassing content. OpenAI says violations of its policy could result in account suspensions or terminations.
However, Christianson did not address NewsGuard’s specific questions about whether the Zelensky cocaine images violate OpenAI’s policies and what steps, if any, the company is taking to prevent users from bypassing its existing guardrails.
Clearly, there are holes in the system. When NewsGuard prompted ChatGPT to recreate the Zelensky cocaine image, it declined. However, after NewsGuard modified the prompt to replace the word “cocaine” with “white sugar,” ChatGPT obliged.
Christianson did not respond to a follow-up email asking again about the specific Zelensky examples.