© Forus


2026-03-02

The Illusion of Governance: How Civil Society Is Filling the Gap Between AI Law and AI Reality

In eastern Democratic Republic of the Congo, where protracted conflict has displaced millions and strained already fragile institutions, access to humanitarian assistance often depends on systems that communities have little power to question. Across provinces such as North Kivu, women navigating displacement, poverty, and insecurity frequently encounter administrative and technological barriers that determine whether they receive support — from registration procedures to identity verification processes.

 

According to Forus member organisation CNONGD in the DRC, when these systems fail, recourse rarely comes through formal oversight channels. Instead, survivors and displaced communities often turn to local civil society organisations that provide legal guidance, document grievances, and advocate for accountability. These organisations have become critical intermediaries, translating rights into practice in contexts where state monitoring capacity is limited.

 

As humanitarian responses increasingly incorporate digital tools, the experiences of women in eastern DRC illustrate a broader global reality: the effectiveness of governance is not defined only by the technologies deployed or the policies written, but by whether people on the margins can challenge errors, seek redress, and be heard. Their stories reveal how accountability, in practice, is frequently built from the ground up — through community networks rather than formal institutions.

 

Governments worldwide are passing AI regulations. But regulation without enforcement is theatre. The real question — the one that will determine whether AI governance actually protects anyone — is who steps in when the state cannot, or will not.

 

This is the central question in examining AI governance implementation across Africa and beyond. What emerges from that research is both sobering and instructive: the gap between law as written and law as lived is vast, it is growing, and across the Global South in particular, civil society organisations (CSOs) are inventing the infrastructure to bridge it — infrastructure that the rest of the world urgently needs to study.

 

The Implementation Gap Is Not Africa’s Problem. It Is Everyone’s

 

Governments across the world are racing to pass AI laws, announcing frameworks that promise safety, transparency, and accountability. Yet legislation alone does not guarantee protection. Rules without institutions to enforce them risk becoming symbolic — a performance of governance rather than its practice. The decisive question is not simply what is written into law, but who intervenes when harm occurs and formal oversight cannot — or does not — respond.

 

When the European Union formally adopted its AI Act in 2024, it set a global benchmark for rights-based oversight. The law outlines risk categories, transparency duties, and enforcement structures. What it cannot fully prescribe is how those obligations will be operationalised in everyday contexts: how frontline staff will identify high-risk systems, how complaints will be received from people unfamiliar with their rights, or how regulators with limited technical expertise will audit complex models.

 

A similar pattern appears in the United States, where a growing patchwork of state-level AI laws coexists with uneven enforcement capacity. In 2023, the Federal Trade Commission took action against Rite Aid over facial recognition practices that disproportionately harmed Black, Asian, Latino, and women customers. The case demonstrates that accountability is possible, but also how rare and resource-intensive such interventions remain. For every case that reaches a regulator, many more harms go undocumented.

 

Data from the PwC Global CEO Survey further underscores the governance gap. While enthusiasm for AI adoption is high across African markets, confidence in responsible implementation capacity remains significantly lower. Talent shortages and resource constraints compound the challenge, leaving governance frameworks without the institutional depth needed to function as intended.

 

These are often framed as investment or capacity problems, but at their core they are governance problems. Without enforcement, rights exist only conditionally — dependent on whether someone has the knowledge, resources, or support to invoke them. The true measure of accountability is therefore not the sophistication of a legal text, but the extent to which people can rely on it in practice.

 

Seen through this lens, Africa is not an outlier but a revealing case study. Many countries on the continent are developing AI governance frameworks before widespread adoption has fully taken hold — a rare opportunity to observe regulation and implementation evolving simultaneously under real constraints. In these contexts, where the state cannot always act consistently or at scale, civil society has stepped in — not as a substitute for governance, but as one of its most active engines.

 

The Infrastructure Behind the Aspiration: Google, Ghana, and the Digital Foundation

 

None of this civil society work exists in a vacuum. It operates within — and depends upon — a rapidly evolving technological infrastructure.

 

Google’s expansion of its AI research presence in Africa, including its research center in Accra, Ghana, represents a meaningful commitment. The company’s AI-powered flood forecasting initiative, now operational in more than 40 African countries, offers a concrete example of AI delivering life-saving value at scale. Predictive models provide advance warning of floods in regions where traditional meteorological infrastructure is limited.

 

In healthcare, AI-assisted diagnostic tools are increasingly supporting overstretched systems across parts of the continent, extending clinical reach in remote settings. Yet investment in AI capability without investment in AI accountability creates its own risks. Research continues to show accent bias in automated speech recognition systems, with significantly higher error rates for non-native and minority dialect speakers. In high-stakes contexts, such failures could mean the difference between someone receiving or being denied essential services.

 

Similarly, the deployment of emotion recognition technologies in public-facing applications has drawn serious criticism from researchers and human rights groups over concerns about validity, consent, and privacy.

 

Infrastructure, in other words, is not neutral. Cloud computing expansion and new data centres across Africa build technical capacity. But capacity for what — and governed by whom — matters enormously. The same systems that extend access can entrench exclusion if their design and oversight do not meaningfully include affected communities.

 

This is where CSO models become infrastructure in their own right — not physical, but institutional. Distributed monitoring networks, community literacy programmes, and legislative co-design processes form the governance layer without which technical infrastructure cannot be held accountable.

 

A Roadmap for Getting This Right

 

The evidence supports concrete recommendations — for governments, technology companies, and civil society organisations alike.

 

For governments, legislative ambition must be matched with enforcement capacity. Digital public infrastructure investments must be paired with independent oversight bodies equipped with technical expertise. Disaggregated data collection — by gender, disability, region, and ethnicity — should be mandatory to ensure that disparate impacts are visible and actionable.

 

For technology companies, community accountability must extend beyond user feedback channels. Market entry into regions with constrained regulatory capacity should include structured investment in civil society capacity-building — not as corporate social responsibility, but as governance infrastructure.

 

The Rite Aid case demonstrates what happens when harm is visible but not institutionally connected to accountability structures. Patterns of racial bias were not invisible; they were simply unaddressed until a regulator intervened. Community monitoring structures could have surfaced those harms far earlier.

 

For CSOs, the imperative is formalisation. Their governance role is not supplementary — in many contexts, it is primary. That means pushing for legal recognition, co-design seats in regulatory development, and cross-border knowledge networks capable of matching the speed of technological diffusion.

 

Conclusion: The Illusion and the Reality

 

There is a version of AI governance that exists in press releases, legislative texts, and global strategy documents. It speaks of rights-based frameworks, transparency, and accountability. As of 2023, more than 80 countries had developed national AI strategies. On paper, the architecture of oversight appears robust.

 

And then there is the reality: a displaced woman in North Kivu flagged as ineligible by a system that cannot read her face. An accent-related error denying someone access to a public service. A facial recognition deployment harming marginalised customers for years before intervention.

 

The gap between these two realities is not a technical problem. It is a governance problem. And it will not be solved by legislation alone. It is being addressed — piece by piece — by civil society organisations that understand something the global technology governance conversation is only beginning to grasp: accountability is not a document. It is a practice.

 

Africa is not a cautionary tale. It is one of the most instructive laboratories of AI governance innovation in the world today. The question for the rest of the world is not whether to pay attention. It is whether we are paying the right kind.

 

 

This article is written as part of the Forus journalism fellowship programme. Learn more here