© Forus
© Boitumelo
2026-03-02
When Democracy Gets Deepfaked

As six African nations head to the polls in 2027, AI-generated disinformation is already reshaping the electoral battlefield
The video spread quickly — and convincingly. In it, Nigerian President Bola Ahmed Tinubu stands before a microphone, two men flanking him, and addresses an unseen crowd. 'I am a fan of Chelsea,' he says, 'and I'm planning to buy from their owner.' The clip racked up thousands of shares before anyone stopped to ask the obvious question: did he actually say that?
He didn't. The footage was AI-generated — a deepfake. And while this particular fabrication was relatively harmless, those who watched it spread in real time understood something significant: the technology to convincingly put words in a president's mouth had arrived in Nigerian politics. The 2027 elections are still ahead. The real test hasn't started yet.
With six African nations — Nigeria, Kenya, Angola, Gambia, Equatorial Guinea, and the Republic of the Congo — heading to the polls in 2027, the continent faces its most technologically complex electoral cycle to date. Deepfake content grew by 550 percent between 2019 and 2023, according to a January 2025 Deloitte report. AI tools that once required specialist knowledge can now produce convincing synthetic video and audio in minutes. The political battlefield is being quietly remapped.
Nigeria is not waiting for the worst to happen — it is already living through the warning signs. In June 2025, a deepfake video in broadcast news format circulated online claiming that Nigerian soldiers were guarding cattle in VIP fashion in Yelwata, Benue State. The video emerged just days after an overnight attack in the same town left over 100 people dead. The fabricated footage — debunked by PRNigeria — landed in a community already raw with grief, designed to inflame tensions between farming and herding communities that have long been a source of deadly conflict.
The same month, an AI-generated clip of US President Donald Trump, supposedly speaking about Nigerian national life, oil, and the military, was shared on TikTok. It garnered over 421,500 views.
These are not fringe incidents. They are a preview.
"In 2019 it was cheap fakes; in 2023 it was false edits and captions. Today, we face hyper-realistic voices and videos that ordinary citizens can hardly distinguish from reality."
— Dr. Chinonso E. Okoye, Senior Special Assistant to the Governor of Anambra State on Cyber & Infrastructure Security
In a country where 62 percent of people access their political news through social media, and where WhatsApp's encrypted forwarding architecture means content travels faster than corrections can follow, this is not merely a technology problem. It is a democracy problem.
"Many voters can identify simple photo manipulations," says Victoria Oladipo, founder of Learn Politics. "But deepfakes, AI-generated audio, or hyper-realistic images are much harder to detect. In Nigeria, where trust in visuals and voice recordings is high, this makes voters particularly vulnerable."
NNNGO — Nigeria's foremost civil society platform and a Forus International member — has been raising the alarm about these structural vulnerabilities. With a membership base of over 4,000 organisations and digital content reaching 2.5 million people annually, NNNGO sits at the intersection of community accountability and democratic advocacy. Its 2025 work on civic space and digital protection documented deepening concerns about the enabling environment for civil society during electoral periods — including the risks posed by AI-generated disinformation to civic trust and sector credibility.
The Continental Picture
Nigeria is the largest democracy in Africa, but it is far from the only country facing this threat. As the 2027 electoral cycle approaches, the pattern across the continent is consistent: AI-generated content is being weaponised to amplify ethnic tensions, impersonate leaders, and saturate the information environment with manufactured reality.
In Kenya, which also heads to polls in 2027, researchers at Trust Lab — an EU-funded project — identified 17 major coordinated disinformation campaigns operating in 2025 alone, many with significant cross-campaign overlap revealing organised networks. Deepfake videos and forged documents have been deployed to inflame ethnic divisions, particularly targeting the Kikuyu community. In one operation, a fake technical note attributed to the International Foundation for Electoral Systems falsely claimed voter influence had shifted to minority counties — a fabrication designed to fracture voting coalitions.
Ghana's 2024 elections offered a sobering case study in how AI-driven disinformation intersects with electoral instability. Bot networks, manipulated media, and deepfake content were all deployed to spin narratives during the campaign season. And in Romania's 2024 presidential election — a cautionary tale that reverberated globally — election results were annulled after evidence emerged of AI-powered interference using manipulated videos, demonstrating what happens when synthetic media is not merely noise but a decisive intervention in a democratic outcome.
What makes the African context distinct is the speed at which disinformation travels through communities with limited access to formal verification infrastructure, and the political fragility of the ground on which it lands. Elections across the continent are already tense national events, shadowed by fears of manipulation, violence, and post-election unrest. The introduction of AI-generated synthetic media into this environment does not merely add a new type of false information — it supercharges existing vulnerabilities.
Forus member platforms across Africa — including in Nigeria, Senegal, Ghana, and Kenya — are embedded in the civic ecosystems most exposed to AI-driven electoral manipulation. As Forus International's enabling environment monitoring has documented, digital repression and disinformation are now among the primary threats to civic space during election cycles. These platforms' networks provide the early-warning infrastructure and community trust that national governments and technology companies cannot replicate alone.
The Algorithm Is the Architecture
Behind every deepfake that goes viral is something just as powerful and far less visible: an algorithm that decided to amplify it. Across West Africa and beyond, AI-powered social media algorithms have effectively replaced traditional newsrooms as the gatekeepers of political discourse. These platforms, driven by engagement metrics, systematically favour content that triggers powerful emotional responses — the kind most likely to prompt a like, a share, or a furious comment.
The consequence is structural. Sensationalist or emotionally manipulative content outperforms nuanced reporting not because audiences are gullible, but because the platforms' financial architecture rewards virality over veracity. This 'attention economy' creates the perfect infrastructure for AI-generated disinformation to thrive. A coordinated disinformation campaign does not need to deceive everyone. It needs only to create enough noise to muddy the information environment, erode trust in legitimate sources, and widen existing social fractures.
The threat is not purely domestic. A Russian-funded disinformation network uncovered ahead of Moldova's 2025 parliamentary election paid engagement farms — including in Africa — to promote Kremlin-aligned narratives through verified social media accounts. The lesson: in the age of AI-enabled influence operations, the line between foreign interference and domestic disinformation has become difficult to draw. Research indicates that 60 percent of disinformation campaigns targeting Africa are foreign-sponsored, primarily from Russia, China, and Gulf states, with West Africa bearing the heaviest burden of these attacks.
Women in the Crosshairs
If deepfakes pose a general threat to electoral integrity, they pose a specific and distinctly gendered one to women in politics. Across African elections in Ghana, Namibia, and Senegal, AI-generated content has been deployed not merely to discredit female politicians politically, but to destroy them personally — through fabricated sexual imagery, manufactured scandals, and content designed to humiliate rather than debate.
A 2025 report by Tanda Community Network, drawing on focus groups and interviews with female politicians and journalists across three African countries, found that deepfake attacks inflict lasting socio-cultural, professional, and psychological harm — and that the violence frequently spills over into offline spaces. In Kenya in 2025, a smear campaign using an AI-generated video targeting Dorcas Gachagua, wife of former Deputy President Rigathi Gachagua, garnered over 163,000 views — designed to humiliate her husband and damage his standing through attacks on her.
"Online harassment will have a higher cost for female politicians because that harassment manifests in not just attacks on political competency but a cultural rejection of women. Women candidates are already too underfunded to challenge sexualised and gendered disinformation."
— Lucy Purdon, founder and director, Courage Everywhere
This asymmetry is not incidental. Women face criticism that targets their appearance, family life, and personal choices in ways their male counterparts rarely do. Deepfakes amplify these gendered forms of attack, making them hyper-realistic and devastatingly shareable. The result is a chilling effect on women's political participation that operates quietly but powerfully. In several African contexts, the evidence already points to women disengaging from public life rather than enduring the cost of digital assault.
The Counteroffensive
The defences are being built — but against a threat that moves faster than institutions.
Nigeria's electoral commission, INEC, made history in May 2025 when it established a dedicated Artificial Intelligence Division mandated to improve decision-making and voter engagement, and to combat disinformation. It is a meaningful step. But Kingsley Owadara, AI ethicist and founder of the Pan-Africa Center for AI Ethics, argues that structural creation is only the beginning. "There is a need to invest in training electoral officials, cybersecurity experts, and fact-checkers," he says. "Educating the electorate about AI disinformation is crucial. And platforms must be held accountable for removing manipulated content quickly." He outlines a three-layered response: restricting AI models from generating harmful propaganda; detecting synthetic content with forensic and provenance tools; and escalating removal protocols with evidence capture. He concedes that verification remains a gap: "No detector is fully reliable as generators evolve."
The legal framework lags further still. Section 123 of Nigeria's 2022 Electoral Act prohibits publishing false statements about candidates — an offence punishable by a 100,000 Naira fine or six months' imprisonment. But the statute was designed for a pre-AI world. It does not account for synthetic video, voice cloning, or the speed at which AI-generated content circulates. The Philippines, by contrast, introduced requirements for candidates to disclose AI use in campaign materials ahead of its May 2025 elections, designating deepfakes as an electoral offence. Nigeria's regulatory gap is significant.
Grassroots fact-checking networks have emerged as a vital line of defence, even as their capacity is strained. Organisations including Dubawa and FactCheckAfrica are actively monitoring AI-generated political content ahead of 2027. But these organisations are competing with the economics of synthetic media at a fundamental disadvantage: disinformation is cheap to produce, emotionally compelling, and spreads through encrypted channels where it cannot be countered in real time.
Civil society platforms within the Forus network — including NNNGO's 4,073-member network spanning all six of Nigeria's geopolitical zones — are uniquely placed to bridge this gap. In 2025, NNNGO conducted Civil Society Mapping and Capacity Assessments evaluating digital protection capacities across its membership. Its findings confirmed what frontline organisations already knew: the sector needs urgent investment in AI literacy, digital security, and real-time disinformation response. These are not abstract policy demands — they represent the conditions civil society needs to function as democracy's last line of defence in 2027 and beyond.
The South Africa Exception — and What It Teaches Us
Not every election in Africa in recent years has been reshaped by AI-driven disinformation. South Africa's 2024 elections offered a counterintuitive finding: research found limited presence of AI-driven disinformation, with most misinformation originating from traditional sources. Public trust in media remained relatively resilient, with no notable decline linked to AI-generated content beyond isolated deepfake cases. The country's relatively robust media infrastructure — including a mature fact-checking ecosystem and stronger press freedom than most African nations — provided structural buffers.
The lesson is not that AI poses no threat to South African democracy, but that the impact of AI-generated disinformation is not uniform. It is mediated by the strength of existing institutions, the quality of media infrastructure, and the baseline level of public trust in democratic processes. Nigeria and Kenya enter 2027 without those buffers. In Nigeria, 63 percent of those surveyed after the 2023 elections expressed no confidence in INEC's vote tally. That trust deficit is the environment into which AI-generated disinformation will be released.
Racing the Clock
The 2027 elections in Africa will not be decided by AI alone. But the choices made in the next two years — whether to invest in digital literacy at scale, hold platforms to legal account, resource fact-checkers with real-time capabilities, and establish national protocols for AI-generated political content — will shape whether those elections are decided by citizens or by algorithms.
"In 2019 it was cheap fakes; in 2023 it was false edits and captions. The escalation is not slowing."
— Dr. Chinonso E. Okoye
The technology to fabricate a candidate's withdrawal from a race, clone an electoral commissioner's voice, or generate a broadcast news format video claiming voter fraud exists today. The question is not whether it will be used in 2027, but whether democratic institutions will be fast enough — and equipped enough — to hold the line when it is.
For civil society, the stakes extend beyond any single election. As Forus International's own enabling environment monitoring has documented, elections are the moments when pressure on civil society organisations intensifies most sharply — arrests, shutdowns, harassment, and now synthetic media designed to discredit civic actors themselves. The organisations watching for deepfakes are the same ones fighting for the space to exist. That fight cannot be separated from the fight for the ballot.
Votes can only be counted. The truth, first, has to be protected.
This article is written as part of the Forus journalism fellowship programme. Learn more here