In the three days since its launch, OpenAI’s app Sora has generated realistic videos of events that never occurred, such as ballot fraud and fabricated protests. Users can create footage from simple text prompts, incorporating their own likenesses and even those of deceased celebrities. Experts warn that tools like Sora and Google’s Veo 3 could fuel disinformation and abuse, threatening democracy and society by making convincing fake content far easier to produce. Despite OpenAI’s safety measures, such as refusing to generate certain violent or politically sensitive content, the app lacks robust user verification, leaving room for misuse.
Sora is designed to generate hyper-realistic video and audio, raising concerns about deception and misuse. Even with watermarks labeling content as AI-generated, the spread of convincing fakes can erode trust in all video evidence, letting bad actors dismiss genuine footage as fabricated, a phenomenon known as the liar’s dividend. The consequences could be serious, including the spread of propaganda, false criminal accusations, and the fueling of conspiracy theories.
Sora will produce videos involving children and historical figures, though not current political leaders such as Donald Trump. Still, in attempts to create politically themed content, it inadvertently generated a video featuring the voice of former President Barack Obama. Experts are alarmed by the quality of Sora’s output, which may further erode viewers’ ability to tell real footage from fake in an already murky digital landscape.