AI-generated videos depicting American soldiers captured by Iran, ruined Israeli cities, and burning US embassies are spreading widely on Elon Musk's platform, X, despite attempts to combat wartime disinformation. This surge of lifelike deepfakes exceeds anything seen in previous conflicts, making it difficult for users to distinguish fabricated content from reality, according to researchers.
In response, X announced a policy to suspend creators from its revenue-sharing program for 90 days if they share AI-generated war visuals without proper disclosure, with subsequent violations leading to permanent suspension. This marks a significant shift for a platform criticized for facilitating disinformation since Musk’s acquisition in 2022.
While the new policy has received praise from officials, disinformation researchers like Joe Bodnar remain skeptical, noting that AI-generated content is still prevalent. Many top accounts continue to share misleading images, which often attract more views than X's own announcements about the crackdown.
Despite the policy, researchers point out that many of the accounts spreading misleading AI content are not enrolled in the revenue program, so the threat of suspension from it carries no weight. X's track record of ineffective disinformation enforcement, combined with the financial incentives for sensational content, further complicates the fight against misinformation.
Experts suggest that while X's policy is a reasonable measure against viral disinformation during the war, its effectiveness will depend on consistent enforcement, the difficulty of reliably detecting AI-generated media and its metadata, and the limited reach of Community Notes.

