Behind the polished image of Rwanda as an African model of stability and success lies a ruthless machinery of manipulation and terror, orchestrated by Paul Kagame since he seized power in 1994. The Rwanda Classified investigation and expert analysis expose a vast digital disinformation operation run by the regime, one that even harnesses artificial intelligence to fabricate false support, discredit opponents, and rebrand a bloody dictatorship as a democracy endorsed by the people.
This façade of popularity—built on elections locked at more than 99 percent of the vote, assassinated journalists, opponents eliminated or imprisoned, and the relentless pursuit of Hutu refugees across the globe—serves only to camouflage the reality: an absolute power that has crushed dissent for three decades.
The tacit complicity of organizations such as Amnesty International—whose recent campaigns in Rwanda echoed pro-Kagame narratives instead of condemning his crimes—raises a burning question: How much longer will the international community remain a passive accomplice to this manipulation that tramples the very values it claims to defend?
The following analysis by Professor Morgan Wack, political scientist and research faculty at the Media Forensics Hub, Clemson University (USA), rigorously illustrates these sophisticated methods of digital manipulation. Published in July 2024, his study remains strikingly relevant today, revealing—with concrete evidence—the coordinated use of influence networks and AI tools to fabricate a façade of popularity, conceal the regime’s crimes, and divert global attention. It shows how this sham democracy fits into a broader strategy of propaganda, allowing Kagame’s Rwanda to keep enjoying international admiration and sympathy that it does not deserve.
AI propaganda campaign in Rwanda spread pro-Kagame messages – a dangerous new trend in Africa
By Morgan Wack, Assistant Professor of Political Science, Media Forensics Hub, Clemson University – USA
In late May 2024, several media outlets led by the investigative network Forbidden Stories published a series of reports under the title Rwanda Classified. The investigation detailed evidence linked to the suspicious death of journalist and government critic John Williams Ntwali.
The reports included further details about Kigali’s efforts to silence critics.
As a political scientist who has studied digital disinformation and African politics, I work with the Media Forensics Hub, which monitors the internet for evidence of coordinated influence operations. Following the release of Rwanda Classified, we identified at least 464 accounts that flooded online discussions of the report with pro-Kagame content.
Many of the accounts we linked to this network had been active on X/Twitter since January 2024. During this period, the network generated more than 650,000 posts.
Rwandans went to the polls on July 15, 2024. The presidential result was a foregone conclusion, largely due to the exclusion of opposition candidates, the harassment of journalists, and the assassination of critics. Kagame secured more than 99 percent of the vote.
Even though the outcome was inevitable, the network’s accounts were repurposed to promote Kagame’s candidacy online. These inauthentic messages will likely be used as supposed evidence of the president’s popularity and the legitimacy of the election.
In both the response to Rwanda Classified and the pro-Kagame presidential campaign, we identified the use of AI tools to disrupt online discussions and amplify government narratives. The large language model ChatGPT was one of the tools deployed.
The coordinated use of these tools is a troubling sign. It indicates that the methods used to manipulate perception and maintain power are becoming increasingly sophisticated. Generative AI allows networks to produce larger volumes of varied content compared to human-only operations.
In this case, consistent posting patterns and content markers made it possible to detect the network. Future campaigns will likely refine these techniques, making inauthentic discussions harder to spot.
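The "consistent posting patterns" described above can be illustrated with a toy sketch. This is not the Media Forensics Hub's actual methodology; the account names, timestamps, and thresholds below are all hypothetical. The idea is simple: many distinct accounts pushing the same hashtag within seconds of one another is a signal of coordination rather than organic activity.

```python
# Toy illustration (hypothetical accounts, timestamps, and thresholds):
# flag groups of accounts that post the same hashtag inside a narrow
# time window -- one simple signal of coordinated activity.
from collections import defaultdict

# (account, unix_timestamp, hashtag) -- fabricated example data
posts = [
    ("acct_a", 1000, "#ToraKagame2024"),
    ("acct_b", 1003, "#ToraKagame2024"),
    ("acct_c", 1005, "#ToraKagame2024"),
    ("acct_d", 4000, "#ToraKagame2024"),  # isolated post, likely organic
    ("acct_e", 1002, "#Football"),
]

WINDOW = 10       # seconds between posts to count as one burst
MIN_ACCOUNTS = 3  # distinct accounts needed to call a burst suspicious

def coordinated_bursts(posts):
    by_tag = defaultdict(list)
    for account, ts, tag in posts:
        by_tag[tag].append((ts, account))
    bursts = []
    for tag, entries in by_tag.items():
        entries.sort()
        # slide over posts sorted by time, grouping those within WINDOW
        for i, (start, _) in enumerate(entries):
            group = {acc for ts, acc in entries[i:] if ts - start <= WINDOW}
            if len(group) >= MIN_ACCOUNTS:
                bursts.append((tag, sorted(group)))
                break
    return bursts

print(coordinated_bursts(posts))
```

Real detection pipelines combine many such signals; a single timing burst on its own proves little, which is why the researchers also relied on content markers.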
Researchers, policymakers, and African citizens must remain alert to the potential challenges posed by generative AI in the production of regional propaganda.
Influence networks
Coordinated influence operations have become common in Africa’s digital spaces. While each network is distinct, they all aim to make inauthentic content appear authentic.
These operations often “promote” material aligned with their interests while “down-ranking” other conversations by flooding feeds with unrelated content. This appears to be exactly what the network we identified was doing.
Across East Africa alone, social media platforms have taken down networks of accounts created to appear legitimate but targeting Ugandan, Tanzanian, and Ethiopian citizens with false and partisan political content.
Non-state actors, including several global public relations firms, have also been traced as the source of bots and websites in South Africa and Rwanda.
Most earlier influence networks were identified by their use of “copy-paste” text, directly lifted from a central source and reused across accounts.
Unlike those previous campaigns, the pro-Kagame network we uncovered rarely copied text verbatim. Instead, the associated accounts used ChatGPT to generate content on similar topics and targets, with slight variations. The material was then posted alongside hashtags.
Likely due to the inexperience of the actors involved, the campaign was sloppy. Errors in the text-generation process allowed us to track the associated accounts. In some cases, for example, affiliated accounts even included the instructions used to generate pro-Kagame propaganda.
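The two content markers described above — reworded near-duplicates and posts that leak their own generation instructions — can be sketched in a few lines. This is a minimal illustration, not the Hub's actual pipeline; the sample posts, the marker phrases, and the similarity threshold are all invented for the example.

```python
# Minimal sketch (hypothetical data, not the real detection pipeline):
# flag near-duplicate posts and leaked generation instructions.
from itertools import combinations

# Fabricated sample posts; the prompt-like third entry mimics the error
# described above, where an account pastes its own instructions.
posts = [
    "Rwanda is thriving under visionary leadership. #PK24",
    "Under visionary leadership, Rwanda is truly thriving. #PK24",
    "write a tweet praising the president's economic record",
    "Match highlights from last night's league fixture.",
]

PROMPT_MARKERS = ("write a tweet", "generate a post", "as an ai language model")

def tokens(text):
    # lowercase and strip punctuation so rewordings still overlap
    cleaned = "".join(c if c.isalnum() or c == "#" else " " for c in text.lower())
    return set(cleaned.split())

def jaccard(a, b):
    union = tokens(a) | tokens(b)
    return len(tokens(a) & tokens(b)) / len(union) if union else 0.0

# Pairs of posts sharing most of their vocabulary despite rewording.
near_duplicates = [
    (i, j) for (i, a), (j, b) in combinations(enumerate(posts), 2)
    if jaccard(a, b) > 0.6
]

# Posts that appear to contain the instructions used to generate them.
leaked_prompts = [
    i for i, p in enumerate(posts)
    if any(m in p.lower() for m in PROMPT_MARKERS)
]

print(near_duplicates)  # -> [(0, 1)]: reworded pair caught by similarity
print(leaked_prompts)   # -> [2]: post exposing its own prompt
```

Exact copy-paste campaigns are trivially caught by hashing identical strings; the point of the similarity measure is that AI-reworded posts still share enough vocabulary to cluster together when the operation is run carelessly.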
These posts were then deployed to flood legitimate discussions with irrelevant or pro-government content. This included information on Rwandans’ ties to sports clubs and direct attempts to discredit journalists and media outlets involved in the Rwanda Classified investigation.
In recent weeks, several of the coordinated network’s accounts promoted election-related hashtags such as #ToraKagame2024 (“tora” meaning “vote”). Given the sheer volume of posts generated, readers were highly likely to encounter content resembling uncritical support for the country and its leader.
AI and propaganda
The integration of AI tools into online campaigns has the potential to reshape the influence of propaganda for several reasons:
Scale and efficiency: AI tools can rapidly generate massive volumes of content. Producing similar output without AI would require far more resources, staff, and time.
Borderless reach: Techniques such as machine translation allow actors to spread influence across borders. For instance, the most frequent target of the inauthentic Rwandan network was the conflict in eastern Democratic Republic of Congo.
Evading attribution: Certain behavioral patterns can indicate coordination, but generative AI enables subtle variations in text, making attribution far more difficult.
What can be done
As the primary targets of influence operations, citizens must be prepared to navigate these evolving tactics. Governments, NGOs, and educators should expand digital literacy programs to strengthen resilience against disinformation.
Better communication is also needed between social media platforms like X/Twitter and large language model providers such as OpenAI. Both play a role in enabling influence networks. When inauthentic activity can be tied to specific actors, platforms should consider temporary suspensions or outright bans.
Finally, governments should work to raise the costs of misusing AI tools. Without real consequences—such as restrictions on foreign aid or targeted sanctions—actors will continue to experiment with ever more powerful AI technologies with impunity.

