
2025-02-03

Social media changes and content moderation: implications for civil society organisations

In a rapidly evolving digital landscape, social media platforms are becoming both battlegrounds and tools for civil society organisations. As oversight of harmful content diminishes and state-imposed restrictions grow, the ability to advocate and protect rights is increasingly at stake.

 

Surveys have found that, worldwide, over 90% of internet users use at least one social media platform owned by a handful of major corporations - Meta (Facebook, Instagram, WhatsApp), Alphabet (YouTube), X (formerly Twitter), LinkedIn, Snapchat and ByteDance (TikTok). This concentration of control has profound implications for digital rights and social justice-oriented work, as these companies continue to shape how people across the world access information and communicate with each other.

As mainstream social media platforms follow the trend of scaling back fact-checking programs and loosening content moderation rules, a complex dual dynamic is emerging: on one hand, an increase in unchecked harmful content; on the other, growing state-imposed restrictions on digital expression and rights, including internet shutdowns and censorship.

 

Let’s begin with content moderation changes: what are the challenges for civil society?

 

Recent shifts in social media policies and practices have far-reaching consequences for civil society organisations and defenders. Platforms like X (formerly Twitter) and Meta (owner of Facebook, Instagram, and WhatsApp) have adopted policies that are altering the landscape of content moderation and information dissemination.  

 

Meta’s recent decision to dismantle its third-party fact-checking program and reduce oversight of political discourse has sparked widespread debate. The company has justified the move as a shift toward "free expression," arguing that moderation policies have led to excessive content removal and user distrust. Instead, it is adopting a "Community Notes" model similar to X’s crowdsourced corrections system. 

 

However, research suggests that such models have limitations. Studies on X’s Community Notes system indicate that corrections often appear too late to counteract the viral spread of misinformation. Furthermore, misinformation and harmful content frequently remain accessible, as automated content moderation systems are deprioritised.  

 

“Moderation will now prioritise illegal content, significantly downsizing the scope of moderation efforts across the board and leaving room for ‘lawful but awful’ hate, harassment, and abuse to circulate unchecked,” says the Human Rights Campaign.

 

As noted by Cipesa, a Forus partner in the CADE project, Meta’s decision to get rid of its third-party fact-checkers has particular repercussions at regional and national levels: “Meta’s decision is particularly concerning for Africa which is unique in terms of linguistic and cultural diversity, limited digital and media information literacy, coupled with the growing challenges of hate speech and election-related disinformation, lack of context-specific content moderation policies, and inadequate investment in local fact-checking initiatives. Africa’s content moderation context and needs are also quite different from those of Europe or North America due to the predominant use of local languages that are often overlooked by automated fact-checking algorithms and content filters.”

 

As a result of these developments, several NGOs, universities and activists have decided to quit mainstream social media platforms - X in particular - and are now looking for alternatives. As part of the “great X-odus”, users are seeking platforms with stronger content moderation features against toxicity and hate. Others decide to stay, to avoid the “filter bubble” effect.

 

Spotlight on digital repression: forms, impact, and what to look out for

 

The Forus communications team has firsthand experience with the challenges posed by inconsistent and opaque social media content moderation policies. Posts highlighting anti-NGO laws, gender justice campaigns such as March With Us, and the killing of protesters were flagged and removed for allegedly violating community guidelines, despite being grounded in legitimate activism and advocacy. Not only was the content taken down, but communications colleagues were banned from the platforms entirely and had to go through identity checks to regain access.

 

Surveillance, disinformation, and targeted censorship - with repercussions for activism, advocacy, and control of narratives - are now common. Below we sketch a few recurring forms of digital repression:

  1. Arrests or harassment of online activists and dissenters, as governments target individuals critical of state policies. 
  2. Disinformation campaigns, where false narratives are spread to undermine the credibility of civil society organisations, social movements and activists. 
  3. Internet shutdowns, used to suppress protests and block access to communication tools. 
  4. Surveillance, enabling governments to monitor the digital activity of activists and organisations. 
  5. Content bans and shadowbanning, where platforms remove or suppress advocacy messages under vague or arbitrary policies, with very limited opportunities for transparency or recourse. 
  6. Internet censorship, including the blocking of websites and online services. 

 

Forus advocacy and actions: opportunities in digital governance and rights 

 

To address these challenges, Forus is engaging in workshops on internet censorship for its members, as part of its digital governance and rights initiatives with Article 19 and OONI. Additionally, with our members, we are exploring ways to respond to this crisis, as well as alternative social media platforms, through workshops on crisis communications.

 

As part of the EU System for an Enabling Environment project (EU SEE), which officially launched on January 29 in South Africa, Forus is also examining the digital rights dimension, addressing opportunities as well as data on concerns about censorship, surveillance, and the misuse of regulatory frameworks. Our partner Democracy Reporting International has also shared an interesting report on the current social media landscape and its impact on civil society organisations and social justice activists - especially in the Majority World.

 

Forus is also part of the CADE project, co-funded by the European Union, which focuses on access to internet governance forums as well as on advancing conversations around digital rights. With a new mapping report and set of recommendations on barriers to civil society participation - and solutions - coming out in the next few months, we aim to support Forus members and civil society partners in influencing internet governance spaces, especially ahead of and during the upcoming Internet Governance Forum (IGF) in Norway this June.

 

Through our #Let’sTalkDigital campaign, we amplify the voices of civil society on critical digital issues, from digital civic space and content moderation to internet access and much more. Don’t miss our Digital Futures podcast, where we delve into the challenges and opportunities of the digital age. We also invite you to stay tuned for new documentaries and podcasts exploring these themes, as well as our upcoming session with Forus member Fingo, which highlights perspectives on artificial intelligence from the Global South and discusses how these views can shape the global digital landscape.

 

What can we do as civil society platforms? 

 

Below are some actionable suggestions: 

 

  • Reduce dependence on corporate social media platforms: Explore decentralised or open-source alternatives and the transition to them, to create independent communication channels. Conduct workshops to train colleagues and members on using mainstream and alternative social media tools effectively for advocacy, communications and outreach. Check out our recent workshop on crisis communications, which features Democracy Reporting International’s analysis of the current content moderation shifts. 
  • Find new ways to combat disinformation: Map or partner with fact-checking organisations, media outlets, and other allies to create localised and culturally relevant resources for identifying and countering disinformation - especially when it targets civil society organisations. Develop digital literacy programs that support users in recognising and reporting false narratives. Advocate for platform accountability and algorithmic transparency, pushing companies to invest in local-language moderation and independent verification mechanisms. 
  • Strengthen advocacy for digital rights & inclusive governance: Engage in national, regional, and global internet governance spaces (e.g., the Internet Governance Forum, UN digital governance processes, EU regulations) to push for civil society inclusion. Advocate for stronger global frameworks on platform transparency, content moderation accountability, and protection against digital repression. Push for ethical AI regulations that ensure non-discriminatory algorithms and equitable access to online spaces for groups that have historically been marginalised.  
  • Strengthen internal digital security within your civil society organisations and networks: Explore cybersecurity measures, such as end-to-end encrypted communication tools and regular security audits. Train colleagues on protecting sensitive data, identifying phishing attacks, and responding to digital threats or internet shutdowns. Prepare crisis communications plans for maintaining communication during internet shutdowns or platform bans, or when the online reputation of your organisation is at risk. Diversify outreach channels by incorporating email newsletters, SMS alerts, and community radio where feasible. Create multilingual shared repositories of resources, tools, and strategies. 
  • Monitor and document digital rights violations: Building on the monitoring implemented under the EU SEE initiative, encourage civil society organisations and allies to collect and report evidence of internet censorship, rights violations and shutdowns. Open censorship measurement data can help here, as in the sketch below. 
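
For organisations starting to document network interference, OONI (mentioned above as a Forus partner on internet censorship workshops) publishes open censorship measurements through a public API. The following is a minimal sketch, not an official integration: it assumes OONI’s measurements endpoint at https://api.ooni.io/api/v1/measurements and its probe_cc, test_name, since, until and anomaly query parameters, so check the current API documentation before relying on it.

```python
# Minimal sketch: pull recent anomalous web-connectivity measurements
# (possible signs of blocking) for one country from OONI's public API.
# Endpoint and parameters are assumptions based on OONI's documented API
# and may change; verify against https://api.ooni.io before use.
import requests

OONI_API = "https://api.ooni.io/api/v1/measurements"

def fetch_anomalies(country_code: str, since: str, until: str, limit: int = 50):
    """Return measurements flagged as anomalous for the given country and period."""
    params = {
        "probe_cc": country_code,        # two-letter country code, e.g. "ET"
        "test_name": "web_connectivity", # tests whether specific websites load
        "since": since,                  # ISO dates, e.g. "2025-01-01"
        "until": until,
        "anomaly": "true",               # only measurements that look like interference
        "limit": limit,
    }
    response = requests.get(OONI_API, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("results", [])

if __name__ == "__main__":
    for m in fetch_anomalies("ET", "2025-01-01", "2025-02-01"):
        # Each result includes the tested URL and a link to the full measurement.
        print(m.get("measurement_start_time"), m.get("input"), m.get("measurement_url"))
```

Note that anomalous measurements are signals of possible blocking, not confirmed censorship; pair any automated collection like this with local verification before publishing or reporting findings.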