Advances in artificial intelligence and synthetic media are escalating the global challenge of misinformation and disinformation, which the Global Risks Report 2026 highlights as a significant short-term risk. Experts warn that technologies like generative AI and deepfakes enable malicious actors to rapidly manipulate public opinion, often exploiting emotional triggers such as fear and anger to accelerate the spread of misleading content.
Artificial intelligence can analyze behavioral data and psychological patterns to deliver targeted messages, tailoring disinformation campaigns to resonate with specific audiences and deepen social divisions. The evolution of synthetic media, particularly deepfake technology, makes it increasingly difficult to distinguish manipulated content from authentic material. Recent elections have seen a surge in AI-generated videos and false political messages on social media, raising concerns over public trust in information sources.
To combat disinformation, experts recommend a mix of technological solutions, public education, and robust governance frameworks. They consider it crucial to verify information, promote open debate, and hold those behind harmful campaigns accountable. Long-term resilience also hinges on education: some countries have introduced media literacy programs in schools to help students recognize manipulation tactics.
Regulatory measures such as the EU AI Act are evolving to improve transparency around synthetic media and clarify labelling requirements for AI-generated content. With numerous elections and heightened geopolitical tensions expected in 2026, this period will be critical in assessing how governments, institutions, and technology platforms respond to the threat of AI-driven disinformation.

