© Forus
2025-08-29
AI and civil society: rethinking communication in a digital age
Artificial intelligence (AI) and emerging technologies are reshaping the way civil society organisations communicate, advocate and mobilise. But what does this mean for civil society – especially those working in fragile contexts where trust, rights and participation are consistently at risk?
“AI is not just technical, it’s political. The question is not whether civil society will use it, but how. And we must ensure that our ‘how’ always puts people and rights first.”
AI is not coming – it is already here
From translation to content creation, AI tools are already embedded in daily communications.
“For small CSOs with limited budgets, AI can be a real game changer,” said Mika Valitalo, Advisor for Multistakeholder Partnerships and Digital Development at Fingo, at a recent webinar organised by Forus. “It can help create multilingual content quickly or track trends across platforms in a way that used to require expensive tools.”
But Mika stressed that efficiency is not enough. “Technology should never replace human creativity or empathy. Our role is to ensure that AI supports, rather than distorts, our voice.”
Ethical risks: “We cannot ignore the dignity of those we serve”
Fernanda Martins, of Fundacion Multitudes (Chile), warned that AI also brings ethical dangers:
“These tools are trained on biased data. If we're not careful, we risk amplifying discrimination rather than inclusion.”
Fernanda also highlighted surveillance: “The same systems that can help us reach communities can also be used to monitor and silence them. Civil society cannot afford to be naive about this.”
Transparency is critical. “We should be clear about when we use AI and why. Trust is fragile, and it depends on honesty,” Mika added.
Opportunities: “AI should be a support system, not a substitute”
“AI can make our work more accessible,” Mika explained. “For example, text-to-speech and instant translation open doors for groups that have historically been excluded.”
Fernanda added: “AI can help us scale our advocacy. Imagine analysing thousands of documents to identify policy gaps – something no small CSO team could do alone.”
Alain Serge Mifoundou from REPONGAC, the regional coalition in Central Africa, and Malick Ndome from CONGAD, Senegal, shared the idea that the future of AI must be intergenerational – bridging generations rather than dividing them – and community-driven. AI should not serve only tech hubs or elites; it should increasingly be designed to strengthen local capacity and support grassroots communities’ efforts.
Examples include AI tools that help track and prevent gender-based violence, or systems that support climate-affected communities by mapping weather shifts and aiding adaptation.
Bibbi Abruzzini, Forus Communications Lead, presented the Civil Society Manifesto for Ethical AI, developed by over 50 Forus members and partners and aimed at changing the processes and narratives around AI and machine learning.
Clarisse Sih, Forus Digital Communications Coordinator, developed a participative AI Mapping: a resource designed to put AI directly in the hands of civil society communicators. The mapping presents practical AI models, ranging from tools that support greater inclusion for people with disabilities to systems that help organisations monitor media narratives, better understand public discourse and respond accordingly.
Mika shared the Equalizers of digital power project, a course for civil society organisations and activists working on sustainable futures, global education and development cooperation, which examines digital power from different perspectives. Its goals are to understand the principles and power dynamics of digitalisation, explore opportunities to use digitalisation and digital power, and envision and begin creating a more just digital world.
A rights-based approach
So how can civil society use AI responsibly? The workshop highlighted eight key strategies:
- Anchor technology in human rights: CSO principles must lead the tools, not the other way around.
- Be transparent: be open about when AI is used in reports, images or campaigns, and use disclaimers.
- Invest in digital literacy: users who understand the risks can use AI more safely.
- Build alliances: share experiences across CSOs to develop safeguards.
- Protect data: avoid entering sensitive data into public AI tools.
- Fact-check: AI content can be inaccurate and biased; always verify facts and sources.
- Keep the human voice: AI should support, not replace, your voice. Include AI use in your communications strategy and hold open discussions with your members or secretariat staff. What are they using AI for? How is AI supporting your content production?
- Watch for bias and exclusion: AI often reproduces stereotypes and excludes minority voices.
- Learn more about the Civil Society Manifesto for Ethical AI
- More on the Equalizers of digital power project