© Forus

2025-12-16

Digital Space, Artificial Intelligence, and the Struggle for Civic Life

Across the globe, artificial intelligence is rapidly reshaping economies, governance, culture, and civic participation. Yet while the ethics of AI are debated intensely in boardrooms and policy circles in Europe and North America, communities across the Global South often serve as testing grounds for these technologies rather than as co-architects of their digital futures. At the heart of this imbalance lies a fundamental question: who governs public life when the digital rules are still being written?


Digital technologies now mediate nearly every aspect of modern life—from healthcare delivery and financial inclusion to elections, journalism, education, and cultural memory. Yet civic space and digital rights remain among the most unevenly protected freedoms worldwide. As the United Nations has noted, the global digital transformation has been deeply uneven, with persistent gaps in access, governance, and accountability shaping who benefits from digitalisation and who is left behind.


In many countries, there is still no comprehensive legal framework governing how AI systems collect data, how that data is used to shape public discourse, or how algorithmic systems influence democratic participation. This absence of regulation has tangible consequences. Disinformation spreads rapidly, biometric data is harvested without meaningful consent, and surveillance expands without public oversight—often normalised through narratives of efficiency rather than democratic debate.


Campaigns such as Forus’ #LetsTalkDigital initiative illustrate how civil society is actively creating inclusive spaces for public dialogue on digital governance, power, and participation—particularly in regions where these conversations are frequently excluded from formal policymaking processes.


Nowhere are these tensions more visible than in the Global South. AI systems are increasingly deployed in contexts marked by linguistic diversity, informal economies, and uneven state capacity. The challenge is not only that many datasets fail to reflect local languages, histories, and social realities, but also that data from these regions is often extracted without meaningful consent, transparency, or fair compensation.


This dual dynamic—under-representation on the one hand, and over-extraction on the other—reinforces global inequalities. Communities in the Global South frequently supply the raw data that powers machine-learning systems, while the economic value generated from that data accrues largely to technology companies headquartered in the Global North. As UNDP has observed in the context of Latin America and the Caribbean, digital transformation can deepen inequality when connectivity and innovation advance faster than inclusive governance and public safeguards.


These structural imbalances manifest in concrete ways. Automated language systems routinely misclassify or exclude African and Indigenous languages, limiting access to digital services and reinforcing linguistic marginalisation. Biometric identification systems—often used for welfare access, border management, or voter registration—have disproportionately misidentified women and darker-skinned individuals due to biased training data, leading to exclusion from essential services. Similarly, AI-driven credit scoring, labour platforms, and social protection systems frequently fail to recognise informal work, effectively rendering millions of people economically invisible.


In conversation with Mika Välitalo of FINGO – Finnish NGO Platform, one theme emerged clearly: these patterns reveal how technological systems can appear neutral while reproducing deeply unequal power relations.


On the ground, civil society organisations (CSOs) play a critical role in training communities, journalists, and local officials to understand—and critically question—algorithmic decision-making. AI is not a “neutral efficiency”; it reflects political and social choices about whose knowledge counts and whose lives are legible to systems of power. In this sense, civil society acts as a democratic stress test for innovation, resisting hype-driven adoption and demanding evidence, transparency, and accountability.


Importantly, this does not require CSOs to position themselves as anti-technology. Rather, they can pilot community-centred data collection models and work with developers to design systems that reflect local languages, norms, and lived realities. In an era often described as an “AI tsunami,” however, this kind of grounded, participatory work remains under-resourced and difficult to scale.


Overall, civil society occupies a uniquely powerful position: close enough to communities to identify harm early, yet often independent enough to challenge governments and corporations alike. CSOs make invisible harms visible—by collecting testimonies from those repeatedly misclassified, denied services, or silenced by automated systems. Without this work, many AI-related harms remain statistically “acceptable” while being socially and economically devastating.


Language exclusion remains a stark example. Most AI systems are trained on fewer than 100 languages out of more than 7,000 spoken globally. In Africa, none of the continent’s languages appear among the most widely used languages online, reinforcing a digital hierarchy in which entire cultures remain marginal to the systems shaping public life.


Why Civil Society Must Shape AI Governance


It is within this gap—between technological power and public protection—that civil society has emerged as a vital democratic force. The Civil Society Manifesto for Ethical AI, developed by Forus, reflects a growing transnational movement calling for AI governance rooted not only in innovation, but in human rights, dignity, justice, and environmental sustainability.


Developed through in-person roundtables, regional workshops, community storytelling sessions, and expert consultations across multiple regions, the Manifesto centres the voices of grassroots organisers, human rights defenders, journalists, and digital rights advocates. Its purpose is both civic and practical: to challenge the notion that AI governance belongs solely to governments and corporations, and to assert that communities themselves must help shape their technological futures.


The Manifesto poses urgent questions: How can AI systems be made transparent and traceable? Who is accountable when harm occurs? How do we ensure emerging technologies do not deepen inequality or accelerate environmental damage? And critically, how do we prevent digital innovation from becoming a new architecture of exclusion?


Africa’s AI Landscape: Growth Without Safeguards?


These concerns are not theoretical. Across the Global South—and particularly in Africa—digitalisation is accelerating rapidly, often outpacing the development of regulatory safeguards. New infrastructures, platforms, and AI systems are reshaping economies and public institutions, while raising urgent questions around data sovereignty, accountability, and democratic oversight.


Africa’s growing integration into global digital systems brings both opportunity and risk. As connectivity expands and AI adoption increases, the continent is becoming more deeply embedded in global data flows—yet many countries continue to grapple with fragmented digital governance frameworks and limited regulatory capacity. This gap between technological expansion and public protection creates a critical role for civil society in shaping how innovation unfolds.


In 2024, the 2Africa submarine cable was completed, dramatically expanding high-capacity internet connectivity across the continent and strengthening Africa’s integration into global data flows. While this promises economic growth, it also intensifies concerns over who controls data, platforms, and digital infrastructure.


At the same time, multinational corporations are making high-stakes strategic moves. In 2025, Tesla officially incorporated Tesla Morocco, selecting Morocco as its first African commercial base. The move reflects Morocco’s rise as a major automotive manufacturing hub and a growing electric vehicle corridor linked to European and Chinese supply chains. While such investments can generate jobs and infrastructure, they also extend the influence of powerful technology actors into regulatory environments still struggling to protect workers, consumers, and digital citizens.


Here again, civil society plays a crucial role—safeguarding democratic accountability while engaging constructively with innovation.


Civic Power in the Digital Age


Civil society networks such as Forus amplify the work of grassroots organisations across Africa, Latin America, Asia, and Eastern Europe, strengthening coalitions that defend civic space at a time when digital repression is on the rise. Across the Forus network, a growing number of members and partners are engaging on digital rights and governance: documenting how digital laws and practices affect civic space and civil society’s enabling environment, raising concerns about surveillance and discriminatory impacts, advocating for a secure and rights-based digital environment, and pushing for greater inclusion of civil society in global digital governance decision-making processes. Through initiatives such as EU SEE and CADE, Forus supports members with evidence, tools, and platforms to strengthen civil society participation in digital policymaking.


Crucially, the Civil Society Manifesto for Ethical AI is not a protest document. It is a constructive political intervention—a blueprint for governments, donors, and technology companies seeking collaborative governance rooted in lived realities rather than abstract innovation narratives.


The digital landscape is no longer a parallel world; it is public space itself. Elections are shaped by algorithms, movements are organised online, and power is increasingly exercised through platforms. As the Manifesto reminds us, there is no civic space without digital space—and no just digital future without the leadership of the communities most affected by technological power.


This article is written as part of the Forus journalism fellowship programme. Learn more here.