Following the recent U.S.-Israel military strikes against Iran, misinformation surged online, including around the devastating strike on the Shajareh Tayyebeh school that killed as many as 168 people. Users shared manipulated media, such as AI-edited videos and misleading images, falsely portraying Iranian military successes. These posts amassed hundreds of millions of views, prompting X (formerly Twitter) to revise its policies: users who share unlabeled AI-generated content about armed conflict can now be suspended from the Creator Revenue Sharing program.
A Wired investigation found that many of the misleading posts came from premium accounts, including some linked to Iranian state-funded media. Examples included fake missile launch footage and false claims of attacks on U.S. warships. The disinformation spread rapidly through a combination of bots and engagement farming, exploiting gaps in authentic reporting.
Reports by the misinformation watchdog NewsGuard found that users frequently share exaggerated claims tied to geopolitical conflicts. The speed and scale of social media, compounded by generative AI, have deepened the misinformation crisis, creating an environment in which users are more prone to believe distorted claims amid the chaos of war. As digital platforms become primary news sources, the reliability of information deteriorates, underscoring the need for interventions to counter these growing threats to public safety and democratic integrity.

