As newsrooms increasingly integrate artificial intelligence (AI) into their operations, a hybrid model combining human oversight with AI automation is emerging to address the growing challenge of disinformation. While AI enhances efficiency and creativity, it also raises ethical and authenticity concerns that cannot be ignored.
The human touch remains vital to producing trustworthy content, particularly in verifying information to combat misinformation. Because technology accelerates the spread of disinformation, a multi-faceted response is necessary, and improved AI literacy and training for media professionals are critical to using AI responsibly.
Susan D’Agostino discusses the “misinformation feedback loop,” emphasizing the need for dual-front strategies to curb the supply of AI-generated falsehoods and transform societal factors that drive their consumption.
Ramaa Sharma highlights strategies to mitigate AI bias in journalism, including forming diverse teams and being transparent about AI outputs. Addressing these biases is crucial to maintaining public trust.
Some newsrooms, such as those discussed by Rowan Philip, are developing internal AI chatbots that draw on vetted journalism to answer reader queries, fostering reliability and building trust.
AI is transforming newsroom workflows, creating both opportunities and challenges. To maintain journalism’s democratic role, European newsrooms and policymakers must ensure transparency, fairness, and oversight in AI implementations.