In 2025, artificial intelligence (AI) reshaped work and social interaction while exposing persistent racism and the limits of fact-checking amid rampant disinformation. Algorithms allow harmful narratives to spread faster than efforts to counter them. A major disruption came from OpenAI’s Sora, whose lifelike AI-generated videos shaped public discourse, especially during a prolonged U.S. government shutdown that stoked anxiety over Supplemental Nutrition Assistance Program (SNAP) benefits.
Amid those anxieties, AI-generated clips depicting Black women venting about SNAP went viral, trading on the damaging stereotype of the Black “welfare queen.” The portrayals revived long-standing racist tropes and sparked discussions of misogynoir, the compounded bias directed at Black women. Even when the clips were labeled as AI-generated, many viewers still treated them as reflections of reality.
A parallel incident involved a 2022 scheme to defraud COVID-era aid programs, tied to Somali-Americans in Minnesota, which President Trump seized on to stoke anti-immigrant sentiment. His remarks prompted a wave of viral AI-generated videos invoking anti-Black tropes and portraying Black men as thieves scheming against taxpayers. These narratives, sidestepping the reality that Minnesota’s Somali residents are overwhelmingly legal citizens, reinforced harmful racial stereotypes.
Both episodes point to a broader problem: AI amplifies pre-existing racial prejudice, showing how deeply ingrained beliefs shape public perception and discourse. The persistence of these narratives underscores how racism remains intertwined with systemic inequality, further complicating political discourse in an age dominated by disinformation and digital blackface.

