Bot farms have become central to information warfare, using networks of automated accounts to manipulate public opinion, influence elections, and erode trust in institutions. According to Thales, automated bot traffic accounted for 51% of all web activity in 2024, surpassing human activity for the first time. This rise of bots feeds a “liar’s dividend,” in which genuine content is dismissed as fake because fakes are so prevalent. Bots can also manufacture the illusion of consensus by pushing certain viewpoints into trending topics, amplifying extreme opinions.
State-sponsored bot farms, often linked to Russia, China, and Iran, use racks of smartphones or software emulation to simulate human activity on platforms such as X, Facebook, and Instagram. Ahead of the 2024 UK general election, for example, 45 bot-like accounts on X generated divisive content that reached billions of views.
Russia’s disinformation campaigns have been particularly notable, targeting elections worldwide. In 2024, Microsoft uncovered a Russian campaign that falsely accused Kamala Harris of a hit-and-run, echoing tactics from past election cycles. Research also indicates that Russian bots are active in occupied regions of Ukraine, blending pro-Russian propaganda into local discourse.
Despite platforms’ attempts to curb malicious content, bots continue to evade detection and spread disinformation. Enforcement of existing rules is often weak, and platforms such as X face potential penalties for failing to rein in automated manipulation. Experts argue that effective responses require collaboration between policymakers and tech companies, stronger digital literacy, and caution from users about the information they consume.
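To make “detection” concrete, the sketch below shows the kind of per-account signals such systems weigh. It is a toy illustration only: the `Account` fields, thresholds, and weights are hypothetical assumptions, not any platform’s actual criteria, and production classifiers use machine-learned models over far richer signals.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Account:
    created_at: datetime      # account creation time (timezone-aware, UTC)
    posts_per_day: float      # average posting rate
    followers: int
    following: int
    default_profile: bool     # stock avatar/bio never customized

def bot_likelihood_score(acct: Account) -> float:
    """Crude additive heuristic in [0, 1]; higher = more bot-like."""
    score = 0.0
    age_days = (datetime.now(timezone.utc) - acct.created_at).days
    if age_days < 30:                                  # very young account
        score += 0.3
    if acct.posts_per_day > 50:                        # inhuman posting cadence
        score += 0.3
    if acct.followers < 10 and acct.following > 500:   # follow-spam pattern
        score += 0.2
    if acct.default_profile:                           # no profile customization
        score += 0.2
    return min(score, 1.0)
```

A week-old account posting a hundred times a day with a stock profile would score near the top of this scale; the cat-and-mouse problem is that farms deliberately age accounts and vary behavior to stay under such thresholds.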
On the defensive side, AI is being deployed against disinformation, helping to flag false content and detect coordinated inauthentic behavior. Both the EU and the US are pursuing legislative responses: the EU’s Digital Services Act and AI Act aim to mitigate manipulation risks and require transparency for AI-generated content. Globally, organizations such as NATO and the G7 acknowledge the threat posed by bot farms and are building resilience against disinformation campaigns.
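Coordinated inauthentic behavior often leaves a blunt statistical footprint: many accounts posting near-identical text within minutes of one another. The sketch below captures that idea in miniature, assuming a feed of `(account_id, timestamp, text)` tuples; the function name and thresholds are hypothetical, and real detection systems combine many more signals (timing entropy, shared infrastructure, follower graphs).

```python
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Collapse case and whitespace so lightly edited copies match."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def find_coordinated_clusters(posts, min_accounts=5, window_minutes=10):
    """
    posts: iterable of (account_id, timestamp, text) tuples, with
    timezone-consistent datetime timestamps.
    Returns (text, account_ids) pairs where at least `min_accounts`
    distinct accounts posted near-identical text inside a short window,
    a classic copy-paste amplification signature.
    """
    by_text = defaultdict(list)
    for account_id, ts, text in posts:
        by_text[normalize(text)].append((ts, account_id))

    clusters = []
    for text, items in by_text.items():
        items.sort(key=lambda pair: pair[0])     # order posts by timestamp
        times = [t for t, _ in items]
        accounts = [a for _, a in items]
        left = 0
        for right in range(len(items)):          # slide a time window rightward
            while (times[right] - times[left]).total_seconds() > window_minutes * 60:
                left += 1
            burst = set(accounts[left:right + 1])
            if len(burst) >= min_accounts:
                clusters.append((text, sorted(burst)))
                break                            # flag each text at most once
    return clusters
```

Grouping by normalized text before checking timing keeps the pass linear in the number of posts per message, which matters at platform scale; it also illustrates why farms now paraphrase posts with generative AI, precisely to break this kind of exact-match clustering.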

