AI-generated videos depicting fabricated scenarios related to the Middle East war are proliferating on Elon Musk’s platform X, causing confusion among users who struggle to distinguish them from reality. This surge of lifelike deepfakes includes striking visuals like captured American soldiers and destroyed cities, despite X’s recent policy aimed at curbing wartime disinformation. X announced that creators posting AI-generated videos without disclosure would face a 90-day suspension from its revenue-sharing program, which could lead to permanent bans for repeat offenders.
While the policy drew praise from some officials, disinformation researchers remain doubtful, noting that misleading AI content continues to flood social media feeds. Despite X’s efforts, many creators appear undeterred by the potential consequences, as demonstrated by one account whose undisclosed AI-generated video drew massive engagement.
AI-generated visuals, often intertwined with authentic images, pose significant challenges for fact-checkers, a problem compounded by X’s own AI system, which has itself produced inaccurate content verifications. Researchers caution that X’s monetization model creates an incentive to spread fake news and misinformation, particularly among premium accounts, some of which have been linked to Iranian propaganda.
Though X’s new demonetization policy is seen as a step forward, it faces implementation challenges, particularly because many users posting such content are not part of the revenue-sharing program and thus fall outside the penalty’s reach. Experts also question the effectiveness of X’s Community Notes system for fact-checking, noting that the vast majority of submitted notes are never published. Overall, while the policy aims to reduce the incentives for disseminating misleading information, its success remains uncertain given these loopholes and the low engagement in community fact-checking.