Research Digest 2: The State of AI Adoption and Evaluator Competencies for Made in Africa AI in MERL
In this research digest series, we unpack learnings from our October 2025 report, “Made in Africa AI Approaches in MERL: A Landscape Study and Practitioner Perspective”.
In the first digest, we examined how African MERL practitioners define “Made in Africa AI for MERL” and what these definitions imply for strengthening evaluation practice on the continent. We demonstrated how practitioner-led definitions can inform more grounded, contextually appropriate decisions about AI use in African evaluation practice.
In this second digest, we look at three themes from the research: the ways MERL practitioners in Africa are currently using AI, the barriers and risks shaping that use, and the competencies needed to move closer to a genuinely Made in Africa approach to AI in MERL. We offer a practical way of thinking about what evaluators need to navigate AI responsibly and critically.
The State of AI Adoption by MERL Practitioners in Africa
Across the continent, MERL practitioners are already using AI, often informally and experimentally. Most commonly, the individuals who participated in our research indicated they are using AI tools to support literature reviews, drafting, summarizing documents, and handling administrative tasks. At the same time, adoption is uneven and constrained. Many practitioners struggle to find Africa-relevant evidence using AI, even when such research exists. Several reported having to repeatedly specify countries or regions just to surface partial or fragmented results.
This challenge is not simply about weak prompts or limited user skills. It points to a deeper structural issue. Most AI tools work best where data is digitized and well indexed. Many African knowledge sources, such as community documentation, locally published research, indigenous knowledge systems, and grey literature, remain poorly represented in AI training data. As a result, African realities are more likely to be missed, simplified, or distorted when AI is used to support evidence work. Emerging initiatives such as Google’s Waxal dataset for African speech technology signal progress, yet significant gaps persist. Critically, however, the solution is not simply more data. Many custodians of indigenous knowledge systems do not wish their knowledge to be incorporated into AI systems at all, raising fundamental questions of consent and sovereignty. Moreover, the problem extends to how these tools are designed, the value systems they embed, and the power structures they reproduce. Adding more African data alone will not rectify epistemic biases built into AI architectures and development processes, as we explore below.
Barriers to adoption and risks for Made in Africa AI for MERL
Participants were clear that the main barriers to AI adoption go beyond data availability, cost, bandwidth, and technical capacity. Four interconnected risks stood out.
Cultural and epistemic bias: Because most AI tools train on data predominantly from the Global North, they often reflect Northern theories, assumptions, and definitions of “best practice.” Over time, this orientation could quietly shift evaluation questions, indicators, and interpretations away from African contexts and priorities. However, addressing this bias requires more than diversifying training data geographically; it demands fundamental shifts in how these systems are designed, and who participates in their development.
Language exclusion and erasure: Limited African language data reduces the usefulness of AI in MERL and makes communities less visible in evidence systems. Translation gaps, especially for technical terms in sectors like health and education, reinforce reliance on colonial languages. Practitioners also noted a real tension: while language inclusion is essential, continental collaboration often depends on colonial legacy languages like English, French, and Portuguese, making this challenge political.
Data extraction and commercialization: One of the most urgent concerns relates to externally driven open data requirements. Practitioners described situations where African language data and locally collected evidence are shared openly, commercialized elsewhere, and later sold back to African institutions and governments. This dynamic was underscored by 2023 reports showing that African governments had spent over $1 billion on surveillance technologies. It undermines data sovereignty and reinforces Africa’s role as a consumer rather than a co-owner in AI value chains.
Over-reliance and deskilling: Many evaluators worry about becoming too dependent on AI tools. Weak validation practices, privacy risks, and uncritical use of AI-generated outputs can undermine evaluation quality. When biased or inaccurate outputs are treated as evidence, both credibility and accountability suffer.
Made in Africa AI in MERL Competencies
Based on these realities, the study identifies seven core competencies that can support a more grounded and ethical approach to AI in MERL. The aim is to equip evaluators to use AI thoughtfully, critically, and in ways that align with African values and priorities. This competency framework is intentionally flexible. It should evolve as AI changes and as practitioners learn through experience.
- AI Literacy – understanding how AI systems work, what different tools can and cannot do, and when AI use makes sense in evaluation work.
- Critical and Decolonial Awareness – recognizing how AI can reproduce existing inequalities and colonial power structures, and identifying extractive practices in AI development and deployment.
- Ethical Use Across the MERL Cycle – applying AI ethically and cautiously across planning, data collection, analysis, reporting, and learning; avoiding over-reliance on AI and documenting AI use transparently for accountability and reflection.
- Participatory and Community-Driven Approaches – involving communities meaningfully in AI design and deployment decisions, promoting multilingual approaches, and ensuring communities have real agency in shaping how AI is used.
- Data Quality, Governance, and Sovereignty – assessing data quality before applying AI, following sound data governance practices, and actively resisting extractive data arrangements.
- Tool Selection and Prompting Skills – weighing the trade-offs between open-source and proprietary tools and refining prompts to produce more contextually appropriate outputs.
- Environmental and Socio-Economic Awareness – understanding the environmental and socio-economic impacts of AI and choosing tools and approaches that minimize harm where possible.
Advancing Made in Africa AI for MERL Competencies
To put these competencies into practice, our findings point to four practical priorities.
First, strong multistakeholder networks are essential. Evaluators, technologists, universities, community actors, and public institutions need to collaborate while actively preventing extractive forms of engagement.
Second, capacity development must go beyond one-off training. Peer learning within evaluation associations and communities of practice is key, with deliberate inclusion of youth and early-career practitioners.
Third, digital public infrastructure needs attention. AI-enabled MERL depends on infrastructure that is treated as a public good, with clear safeguards around ownership, access, and environmental impact.
Finally, inclusive and multilingual spaces matter. Collaboration should not reinforce colonial legacy language hierarchies and should make room for intergenerational and cross-sector participation.
Conclusion
Made in Africa AI for MERL is about redefining what responsible, contextually grounded, and just AI-enabled evaluation looks like from and for Africa. Operationalizing the competency framework above will require collective practice, workshopping, and the negotiation of space.
For MERL practitioners, this means moving from passive use of AI to reflective and accountable practice. MERL practitioners play a critical role in protecting evidence quality, ethical standards, and community voice. Strengthening AI literacy, maintaining critical awareness, validating outputs, and documenting AI use are all part of that responsibility.
For policymakers, the findings highlight the need to treat AI-enabled MERL as a public good. Strong data protection frameworks, investment in digital public infrastructure, protection of linguistic diversity, and alignment with social and environmental priorities are crucial. When used well, these insights can support more credible, inclusive, and evidence-informed policymaking.
Ultimately, Made in Africa AI for MERL is an ongoing practice, not a finished model. It requires continuous learning, collaboration, and principled decision-making to ensure that AI strengthens African evaluation rather than reshaping it in externally defined ways.
Click here to read the full report: “Made in Africa Artificial Intelligence for Monitoring, Evaluation, Research and Learning: A Practitioner Perspective and Landscape Study.”
