A recent OpenAI report titled “Disrupting Malicious Uses of Our Models” reveals that Kremlin-affiliated groups are using American AI tools such as ChatGPT for disinformation campaigns, despite Russia’s claims of technological self-reliance.
Operation “Fish Food” centers on the Rybar network, which used ChatGPT to produce articles and social media content in multiple languages, aimed at both Russian and foreign audiences. The operation followed familiar Russian disinformation playbooks, including fabricated claims that Germany was interfering in Moldova and narratives laying the groundwork for electoral interference in African nations.
Operation “No Bell” involved accounts, since banned, that generated content on sub-Saharan African geopolitics, portraying Russia favorably while attacking Western leaders. The content frequently appeared under fake bylines and was styled to resemble legitimate journalism, underscoring the Kremlin’s effort to destabilize the region and preserve its influence there.
Overall, the report underscores the extent to which Russia employs AI technology in global influence operations, part of a broader hybrid-warfare strategy pursued alongside its ongoing war in Ukraine.

