The article highlights the persistent challenges of content moderation technology, which remains largely proprietary and inaccessible despite growing calls for multi-stakeholder collaboration. Such partnerships, as envisioned in the 2023 ASEAN Guideline, aim to engage business, media, civil society, academia, and government in combating false and harmful online content such as misinformation and propaganda.
While institutions such as Malaysia’s Communications and Multimedia Content Forum and the Philippines’ Inter-Agency Council Against Child Pornography have established platforms for collaboration, the foundational technologies powering content moderation remain under the control of tech companies. The article categorizes moderation technologies by how well they handle clearly harmful content versus borderline content, revealing an imbalance: clearly harmful content is much easier to address than borderline content.
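To make that imbalance concrete, the sketch below is not taken from the article; all names, keywords, and thresholds are hypothetical. It contrasts the two families of tooling: matching uploads against a database of hashes of known illegal material, which yields near-certain automated decisions, versus scoring borderline content with a probabilistic classifier, which can only escalate cases to human review above some tuned threshold.

```python
import hashlib
from enum import Enum

class Decision(Enum):
    REMOVE = "remove"              # clearly illegal: automated removal
    HUMAN_REVIEW = "human_review"  # borderline: needs context and people
    ALLOW = "allow"

# Hypothetical database of hashes of known illegal material; real systems
# use perceptual hashes (e.g. PhotoDNA) rather than plain SHA-256.
KNOWN_ILLEGAL_HASHES: set[str] = set()

def classify_borderline(text: str) -> float:
    """Stand-in for an ML classifier returning a harm score in [0, 1]."""
    # A real model would be trained on labelled data; this stub just
    # counts keywords so the sketch stays self-contained and runnable.
    keywords = ("propaganda", "miracle cure")
    hits = sum(1 for k in keywords if k in text.lower())
    return min(1.0, 0.5 * hits)

def moderate(content: bytes, text: str, review_threshold: float = 0.7) -> Decision:
    # Tier 1: clearly illegal content. A hash lookup is cheap, precise,
    # and easy to automate, which is why this tier works comparatively well.
    if hashlib.sha256(content).hexdigest() in KNOWN_ILLEGAL_HASHES:
        return Decision.REMOVE

    # Tier 2: borderline content. Only a probabilistic score is available,
    # so the system can do no better than escalate to human review.
    if classify_borderline(text) >= review_threshold:
        return Decision.HUMAN_REVIEW
    return Decision.ALLOW

print(moderate(b"some upload", "state propaganda about a miracle cure"))
```

The asymmetry the article describes is visible in the structure itself: the first tier is a set lookup, while the second depends on a model, a threshold, and reviewers.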
A key observation is the need for regulatory frameworks to distinguish clearly between illegal and borderline content, and the article suggests a “partnership by design” approach that embeds collaborative principles into the technical architecture of moderation systems. Such an approach could help bridge the gap between policy intent and execution.
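The article does not spell out what “partnership by design” would look like in practice. One purely hypothetical reading, sketched below in Python, is that stakeholder roles and escalation paths become first-class configuration in the moderation pipeline rather than informal agreements; every class, field, and value here is an assumption made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Stakeholder:
    name: str          # e.g. a regulator, civil-society reviewer, or platform team
    sector: str        # "government" | "civil_society" | "industry" | "academia"
    can_flag: bool     # may submit trusted-flagger reports
    can_audit: bool    # may inspect aggregate moderation decisions

@dataclass
class EscalationRule:
    category: str            # e.g. "illegal" vs "borderline"
    reviewed_by: list[str]   # stakeholders that must see cases in this category
    sla_hours: int           # response time the partnership commits to

@dataclass
class PartnershipPolicy:
    stakeholders: list[Stakeholder] = field(default_factory=list)
    rules: list[EscalationRule] = field(default_factory=list)

    def reviewers_for(self, category: str) -> list[str]:
        """Return every partner who must be looped in for this category."""
        return [name for r in self.rules if r.category == category
                for name in r.reviewed_by]

# Hypothetical configuration: clearly illegal content goes to the platform
# and the relevant agency; borderline content adds a civil-society reviewer.
policy = PartnershipPolicy(
    stakeholders=[
        Stakeholder("platform_trust_safety", "industry", True, True),
        Stakeholder("national_regulator", "government", True, True),
        Stakeholder("fact_check_ngo", "civil_society", True, False),
    ],
    rules=[
        EscalationRule("illegal", ["platform_trust_safety", "national_regulator"], 24),
        EscalationRule("borderline", ["platform_trust_safety", "fact_check_ngo"], 72),
    ],
)

print(policy.reviewers_for("borderline"))
```

Running it prints `['platform_trust_safety', 'fact_check_ngo']`, i.e. the partners this hypothetical configuration obliges the system to involve for borderline cases, which is one way collaborative principles could be encoded rather than merely agreed.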
The challenges extend to the power dynamics between governments and tech platforms: governments often have limited influence over technologies developed by Western companies. The article argues that collaborative frameworks must be translated into robust regulatory measures and technical practices, emphasizing that agreements alone are insufficient for effective content moderation.