The article investigates a coordinated campaign that used low-credibility websites and automated social media accounts to spread false allegations against a businessman in 2024. Researchers documented more than 70 articles repeating identical claims without credible sourcing, rapidly republished across platforms and lent apparent legitimacy by search engine indexing. Most of the content was AI-generated, bearing identifiable markers of automated production, and it was amplified by social media accounts exhibiting bot-like behavior.
The campaign illustrates the growing use of AI in disinformation, which enables false narratives to be produced and distributed at speed. Whereas earlier disinformation efforts relied on human writers, modern AI tools can generate near-limitless variations of a story almost instantly. The campaign’s tactics included “newswashing”: fabricated stories briefly appeared on reputable outlets, lending the fabrications undeserved credibility.
The research underscores the danger of AI-generated content, which blurs the line between authenticity and deception and complicates the identification and mitigation of falsehoods. It calls for stronger digital governance, accountability measures for social media platforms, and greater public awareness and media literacy to counter such campaigns. Without urgent action, AI-assisted disinformation could severely undermine trust in online information.

