Research Digest 1: Made in Africa AI Principles and Enhancement of Evaluation Practice in Africa


The MERL Tech Initiative launched Made in Africa AI Approaches in MERL: A Landscape Study and Practitioner Perspective in October 2025. This research explores how AI intersects with data and evaluation work across the African continent. We interviewed 23 AI experts, technologists, evaluators, civil society leaders, and policymakers from Africa and the global evaluation community. The study asked several guiding questions: What do evaluation practitioners in Africa need? What gaps exist? And what priorities should guide the development and use of AI tools in this field to strengthen evidence-informed policymaking?

Because there is so much to explore in that publication, we are publishing a blog series that unpacks key findings and invites the ecosystem to engage with what we learned.

In this first digest, I focus on a foundational finding: how practitioners define “Made in Africa AI for MERL”, and what those definitions imply for strengthening evaluation practice on the continent. I also show how these definitions can guide decisions about AI use in evaluation in Africa: what we build, what we adopt, and what we refuse.

Across interviews, participants did not describe Made in Africa AI as a “tool list” or a purely technical standard. Instead, they framed it as a multidimensional approach concerned with power, purpose, accountability, and the real-world conditions required for African communities to benefit from AI on their own terms. Put differently, the “Made in Africa” question is not only technical (“What works?”); it also centres a deeper question: What kind of AI ecosystem should evaluation practice help create, and who should it serve?

Principles of Made in Africa AI 

  1. Community-driven participatory methods

A central finding is that if AI is “made in Africa,” communities cannot function only as data sources or end users. They must help define what success looks like and shape the design and use of AI-enabled evaluation systems. This position challenges a common pattern: programs introduce AI systems in African contexts using preset indicators, donor templates, and external definitions of impact. Practitioners argued for approaches that prioritize indigenous definitions of progress and treat local knowledge as a legitimate foundation for evaluation, design, learning, and decision-making. Participation, then, is not a one-off consultation. It redistributes authority over meaning-making and aligns with Made in Africa Evaluation commitments to reclaim agency in knowledge production and evaluation design.

  2. Alignment with Ubuntu principles

Practitioners linked Made in Africa AI to Ubuntu as an ethical and relational anchor. They framed Ubuntu not as a slogan, but as a design requirement: AI-enabled MERL systems should be understandable, contestable, and shaped through relationships of mutual accountability. This requires evaluators and system designers to build for fairness and transparency from the outset and to ensure communities can recognize themselves in the system, ask questions of it, and influence how institutions use it. For evaluation practice, this emphasis supports evidence systems that take human dignity seriously, strengthening trust, legitimacy, and responsiveness, not merely generating “better data.”

  3. Avoidance of colonial harms through new technology adoptions

A third principle emerged as a warning: technology can reproduce colonial harms even when intentions are good. AI can do this when it primarily advances external agendas, reinforces extractive data practices, or deepens dependency on imported tools and vendors. Participants also raised concerns about capacity-building models that train African practitioners chiefly to produce higher-quality evidence for external priorities, rather than strengthening African-defined pathways to transformation. In this framing, Made in Africa AI is not simply “using new technology.” It is adopting AI in ways that disrupt, rather than extend, unequal power relations. Evaluators should treat AI not as neutral efficiency, but as a political and ethical choice shaped by data ownership, labor conditions, environmental costs, and who captures economic value.

Challenging African exceptionalism in Made in Africa AI 

The findings also reveal a productive tension. Many participants found the “Made in Africa” framing empowering; others cautioned that it can slide into African exceptionalism, implying Africa must constantly justify contextual approaches, or that African innovation matters only when it comes with a special “African angle.” This critique does not reject the movement. It strengthens it by keeping the focus on substance over symbolism. Made in Africa AI should not become branding or defensiveness. Instead, it should assert equal standing: African communities, from local actors and civil society groups to NLP developers, policymakers, and MERL practitioners, should shape AI futures in general (and, by extension, AI’s use in evaluation) because they have the right to do so, and because African knowledge systems are not optional “add-ons” to global innovation.

How Made in Africa AI principles enhance evaluation

Evaluators increasingly work with large, diverse, and often unstructured evidence: interviews, narratives, community feedback loops, and administrative records. Much of this evidence remains partially digitized, and making sense of it requires time, judgment, and careful interpretation. AI may support this work, but transforming evaluation practice in Africa requires approaches that engage complexity, center marginalized voices, and support systemic change. Our findings suggest that Made in Africa AI (MAAI) principles can strengthen how evaluators listen, interpret, and respond, if governance and design align with those principles.

First, MAAI principles could strengthen language- and culture-aware interpretation. If AI systems recognize African languages, cultural concepts, and knowledge structures, evaluators can analyze qualitative data without forcing meaning into English-only categories. This matters because translation is never neutral; it can flatten nuance, erase context, and shift power over interpretation. The value here is not speed for its own sake, but more nuanced and just engagement with community perspectives, especially perspectives that evaluation systems currently lose when evidence must travel through colonial linguistic and institutional channels.
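
To make this tangible, here is a minimal, hedged sketch of what language-aware analysis might look like in code: grouping qualitative excerpts into candidate themes with multilingual sentence embeddings, so the text never has to pass through English first. The model name, sample excerpts, and cluster count are illustrative assumptions, not recommendations from the study; any real deployment would need community-vetted models and human review of the groupings.

```python
# Illustrative sketch: theme qualitative feedback without translating it to English.
# The model, excerpts, and cluster count below are assumptions for demonstration only.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# A multilingual embedding model covering many languages; swap in whatever
# model your team has vetted for coverage of the languages you work in.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

excerpts = [
    "Huduma za maji zimeboreka tangu mradi uanze.",      # Swahili (illustrative)
    "Vijana wanahitaji mafunzo zaidi ya ujasiriamali.",   # Swahili (illustrative)
    "Clinic waiting times are still too long.",
]

# Embeddings capture meaning in the original language; clustering groups
# similar excerpts into candidate themes for evaluators to review.
embeddings = model.encode(excerpts)
themes = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)

for text, theme in zip(excerpts, themes):
    print(theme, text)
```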

Second, MAAI could support real-time synthesis while protecting participation. Many programs generate continuous feedback. If evaluators synthesize signals as they emerge, across languages and geographies, evaluation can become more adaptive and learning-oriented. The non-negotiable condition is community ownership: faster synthesis must not become faster extraction. When designed well, MAAI can help evaluators notice patterns earlier, ask sharper questions, and support timely adjustments while sustaining participatory commitments.
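
As one illustration of what “synthesis as signals emerge” could mean technically, the sketch below attaches each new piece of feedback to the closest existing theme or opens a new one. The similarity threshold, model, and data flow are assumptions for demonstration; who owns the feedback and who sees the synthesis are governance decisions the code cannot settle.

```python
# Hedged sketch of incremental synthesis: each incoming comment either joins the
# nearest existing theme or starts a new one. Threshold and model are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
themes = []                 # each theme: {"centroid": vector, "items": [texts]}
SIMILARITY_THRESHOLD = 0.6  # tune with evaluators reviewing real feedback

def add_feedback(text: str) -> int:
    """Assign one piece of feedback to a theme and return the theme index."""
    vec = model.encode(text, normalize_embeddings=True)
    if themes:
        sims = [float(np.dot(vec, t["centroid"])) for t in themes]
        best = int(np.argmax(sims))
        if sims[best] >= SIMILARITY_THRESHOLD:
            themes[best]["items"].append(text)
            # Rough running average keeps the theme centroid up to date.
            themes[best]["centroid"] = (themes[best]["centroid"] + vec) / 2
            return best
    themes.append({"centroid": vec, "items": [text]})
    return len(themes) - 1
```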

Third, MAAI could broaden participation without sacrificing depth by reducing common barriers to engagement. In many contexts, participation is constrained by literacy, time, technology access, or the mismatch between written tools and oral traditions. Voice-based interfaces in local languages can enable elders and community members to contribute through speech rather than text, treating oral testimony and storytelling as legitimate evidence. This shift can expand who participates in evaluation and advance justice by redistributing whose knowledge counts.
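
One building block for this is speech-to-text that keeps a contribution in its original language, sketched below in a hedged way. Whisper’s coverage and accuracy vary widely across African languages, so the file name, model size, and language code are examples only, and transcripts would need to be verified with the speaker before they count as evidence.

```python
# Hedged sketch: transcribe an oral contribution in its original language rather
# than translating it. File name, model, and language code are illustrative.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",
    chunk_length_s=30,  # handle recordings longer than 30 seconds
)

# Ask the model to transcribe in Swahili ("sw") instead of translating to English.
result = asr(
    "elder_testimony.wav",
    generate_kwargs={"language": "sw", "task": "transcribe"},
)
print(result["text"])
```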

These possibilities remain aspirational and require rigorous testing. MAAI is not automatically transformational; its value depends on design choices, governance arrangements, and whether communities retain ownership and agency over data. As we progress with this work, we would love to hear from community members who are exploring these possibilities or piloting similar approaches.

Conclusion 

The landscape study, and the principles we’ve been unpacking, point to something clear: AI in African evaluation can’t be copied and pasted from elsewhere. It needs to be designed, governed, and judged through indigenous knowledge frameworks and lived local realities. And yet, the findings also show a real gap between what we say we believe about “Made in Africa AI” and what’s actually happening in practice.

The task ahead is not simply to “increase AI adoption.” It is to operationalize Made in Africa AI commitments in ways that shift power toward communities, toward African data sovereignty, toward inclusive language futures, and toward ethical positioning within global AI systems that remain deeply extractive. Technical solutions matter, but the transformative potential of AI in African evaluation ultimately turns on one question: Who shapes AI for evaluation in Africa and for whose progress?

To advance these ideas:

  • Donors should continue deepening their understanding of, and strategy for, funding longer-term, community-led AI-for-evaluation work (beyond short pilots) and resourcing African data sovereignty and local-language infrastructure. Donor approaches to AI-supported MERL investment should require clear governance, accountability, and safeguards that build agency and ownership in the communities being invested in.
  • MERL practitioners should define locally grounded use-cases, co-design tools, frameworks, and protocols with communities, build AI literacy within MERL teams, and document lessons (including failures) so that AI in evaluation practice steadily aligns with Made in Africa practitioner-led principles.

This is only one part of the conversation; upcoming posts in this series will dig into further research findings, explore what operationalising Made in Africa AI looks like in practice, and go deeper into the related competencies and recommendations.

Click here to read the full report: “Made in Africa Artificial Intelligence for Monitoring, Evaluation, Research and Learning: A Practitioner Perspective and Landscape Study.”
