In a cramped London flat, a content creator has thrived by manipulating TikTok’s algorithm to spread anti-immigrant disinformation, reaching millions with fabricated stories that incite fear and hatred toward migrants in the UK. This operation, revealed by investigative journalists, showcases how easily an individual can shape public sentiment on immigration—one of Britain’s most contentious issues.
The creator, whose identity is partially concealed, produces multiple videos a week featuring entirely fabricated claims about immigrants, such as assertions that they receive generous government benefits or commit crimes. The videos are crafted to appear credible, with urgent voiceovers and fabricated news articles. As reported by London Centric, the creator has gained hundreds of thousands of followers and views, making them a leading source of anti-immigrant content on the platform.
What distinguishes this effort from run-of-the-mill xenophobic posting is its calculated, large-scale approach. The creator crafts fake news stories that exploit existing anxieties, packaged for TikTok's short-form format. TikTok's algorithm amplifies emotionally charged content, creating a feedback loop in which disinformation thrives and distorts public perception of immigration.
TikTok's algorithmic design allows unknown creators to achieve massive reach, effectively democratizing disinformation. The platform has faced criticism for inconsistent enforcement of its community standards on hate speech and misinformation. Despite TikTok's claims that it removes harmful content, many of the creator's posts remained live for long periods, allowing the disinformation to spread widely.
These misinformation campaigns carry real-world consequences, evidenced by a rise in anti-immigrant sentiment and associated violence in the UK. The summer 2024 riots were partly fueled by online misinformation, illustrating how distorted narratives can catalyze hostility toward migrants.
The operation also has a financial dimension, since TikTok offers monetization opportunities for viral content. The economics of engagement, in which outrage is profitable, encourage the production of ever more misleading material. As long as platforms reward such content, the cycle of disinformation is likely to continue.
To address these problems, the UK is implementing the Online Safety Act, which aims to regulate harmful online content. But disinformation spreads far faster than regulators can act, and enforcement routinely lags behind the content it is meant to curb.
This situation exemplifies a broader challenge facing democracies: the convergence of algorithmic promotion, low barriers to content production, financial incentives for outrage, and insufficient regulation creates an environment where disinformation flourishes, underscoring the urgent need for stronger media literacy and robust regulatory frameworks.
For now, the creator’s false narratives continue, and TikTok profits from the engagement generated by this disinformation operation. Without structural changes, the cycle will likely persist.

