In the Loop Summary
TIME’s newsletter discusses the growing problem of AI chatbots repeating false information, particularly narratives seeded by Russian disinformation networks. New research from NewsGuard Technologies finds that chatbots now repeat misinformation more often, rising from 18% to over one-third of the time in the past year. The study tested 10 leading AI models and found a tendency to “parrot” specific false claims circulating online. However, the sample size was limited, and the author notes that in their own experience, AI “hallucinations” seem to have declined.
The core problem lies in how AI models source information. They draw on a mix of reliable news outlets and social media, which lets malicious actors seed misleading content to influence chatbot behavior. How AI search actually works remains opaque, partly because of copyright concerns and the secrecy surrounding companies’ information sourcing.
Further, a California AI regulation bill, SB 53, is moving forward; it would require companies to maintain transparency and risk management frameworks. Meanwhile, advances in AI are raising security concerns: researchers have built an autonomous agent that, when introduced through a compromised USB connection, can identify vulnerable data for theft.
Lastly, a report highlights internal dissent at Meta, alleging that the company suppressed research on child safety risks associated with its virtual reality products.