Event Recap: Turning principles into actions – Made in Africa AI in MERL with Rebecca Mbaya and Dr. Mario Marais


On February 24, the AI in Africa Working Group at the Natural Language Processing Community of Practice (NLP-CoP) gathered to discuss how to refine a “Made in Africa AI in MERL” framework to reflect diverse realities across the continent, and translate these principles into concrete steps practitioners can take in their MERL work. In this session, we also explored how we address the barriers practitioners face, and what actions can move us from aspiration to implementation as individuals and as a sector. 

We invited working group members Mario Marais and Rebecca Mbaya to discuss what critical, responsible engagement with AI in MERL can look like in practice. Drawing from their own experiences, they identified concrete strategies for members to adopt within their organizations, and explored how the AI in Africa working group can better support appropriate and effective AI uptake in MERL across Africa.

Epistemic Priors, Weird Assumptions and Structural Risk 

This segment was led by Rebecca Mbaya, who opened with a question that should trouble every MERL practitioner who has reached for an AI tool to help with coding or analysis: “What does the system actually assume when it reads African realities?”

While coding survey data from 191 respondents in the DRC using AI tools, she watched both ChatGPT and Claude consistently label a dominant theme as “misinformation resistance.” Her contextual knowledge told a different story; what the models read as cognitive bias was, in fact, historically grounded political distrust. The model had not hallucinated. It had simply applied an imported interpretive frame to a context it was not trained to understand.

The distinction between technical accuracy and interpretive validity is at the heart of the epistemic risk of AI in African MERL contexts. Most large language models are trained predominantly on data from WEIRD (Western, Educated, Industrialised, Rich, Democratic) populations, and so carry baseline assumptions about institutional legitimacy and rational behaviour that do not travel well across geographical contexts. The result is a class of distortions: Proxy Fragility, where data measures fail to capture local complexity; Interpretive Drift, where social phenomena are recoded into misrepresentative categories; and Data Absence Bias, where missing data is read as a missing phenomenon rather than missing infrastructure.

This matters enormously for MERL in Africa. Our responsibility as practitioners is not only to collect valid data but to ensure that the evidence we generate reflects the realities of the people and contexts it purports to represent. When an AI tool quietly recodes a political trust phenomenon as a cognitive deficit, and that output travels into a policy brief or a programme review, the distortion moves with it. As Rebecca put it plainly, we are often the vehicle that transports these frames into policy spaces.

Rebecca’s recommendations for practitioners were the following:

  1. Treat AI outputs as hypotheses, not conclusions. Every AI-generated theme or code should be understood as a starting point that still requires contextual validation by practitioners.
  2. Insert a mandatory human validation loop before final coding. For the Made in Africa framework specifically, this should be an intentional, documented step, and the human element must remain meaningfully in the loop.
  3. Add contextual annotations beyond what the model flags. Practitioners with firsthand knowledge of a context should actively layer in interpretations that the model would not produce, and further document why.

Rebecca closed the segment with a call for collective action: the development of an AI Systems Failure Log, a shared, structured repository in which practitioners document cases where AI tools produced contextually inaccurate outputs. The goal is to move from anecdotal experience to structural diagnosis, and ultimately to influence the development of tools that work better in African contexts.
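A failure log of this kind could start as something as simple as a shared JSON file. The entry below is purely illustrative, using the DRC coding example from the session; the field names are our assumptions, not a proposed standard:

```python
import json

# Illustrative entry for a shared AI Systems Failure Log (all field names are assumptions).
entry = {
    "tool": "ChatGPT / Claude",
    "task": "qualitative coding of survey data (DRC, 191 respondents)",
    "ai_output": "misinformation resistance",
    "contextual_reading": "historically grounded political distrust",
    "distortion_type": "Interpretive Drift",  # one of the three distortion classes named above
    "consequence_if_unchecked": "political trust phenomenon recoded as a cognitive deficit",
}

# Serialise for a shared, append-only log file.
print(json.dumps(entry, indent=2))
```

Structured fields like `distortion_type` are what would let the community move from anecdote to pattern: once entries accumulate, they can be filtered and counted by distortion class, tool, and context.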

Made in Africa AI in MERL: Reflections and Implementation

This segment was led by Dr. Mario Marais, who brought the conversation into implementation territory, beginning with a question of posture: “What assumptions are we carrying before we open any tool?”

He challenged the common assumption that evaluators working across contexts have achieved genuine mutual understanding with communities, or that a client’s relationship with a community is sufficient grounding for an AI-enhanced evaluation. Africa’s cultural landscape is extraordinarily diverse, and frameworks like Ubuntu, while meaningful, cannot be stretched to represent the philosophical richness of an entire continent. Acknowledging that diversity is not a preliminary courtesy; it is a methodological necessity in moving from principle to practice in Made in Africa AI Approaches in MERL.

Before deploying AI tools in any African context, practitioners need to ask a prior set of questions: What systems exist in this context? What roles do they play? What guides decision-making for and within communities? These are questions that shape how AI should be used.

From there, a participatory process for introducing AI tools in community-facing MERL work is a necessity. Rather than presenting AI as a pre-packaged solution, it is important to start by helping communities represent and communicate their own realities. This means asking communities to reflect on four things: what they themselves use as data and information, how that information is communicated within their context, what internal and external forces shape or influence that data, and what they would want an outsider to understand and use. Only then should AI be introduced, and only through the platforms communities already use; this is what makes the output legitimate.

Mario also highlighted a concrete example of what context-sensitive AI design can look like in practice. In Ghana, a low-data strategy called the AI Voters Approach has been used as a scalable participatory mapping methodology. Researchers train lightweight models on small, participatory data samples, reducing data requirements by approximately 90%, making it possible to incorporate community preferences into national-level urban planning without the prohibitive costs of large-scale, top-down data collection. It is a practical demonstration that AI can be built around community realities rather than imposed on them.

What is the baseline starting point – My thoughts

Taken together, the speakers’ contributions map out the terrain of a genuinely context-responsive AI in MERL practice for Africa. Made in Africa AI for MERL requires practitioners to hold technical competence and contextual knowledge in productive tension, and to document, collectively, what happens when that balance breaks down. Building a practical infrastructure for this calls for a participatory implementation process and honest self-assessment. This is why the MERL Tech Initiative has developed the MERL Competency Audit framework, which helps practitioners ground themselves and evaluate where they sit on the AI in MERL competency spectrum. The audit allows individuals to assess themselves or their organizations across competencies ranging from AI Literacy, Data Quality, Governance, and Sovereignty to Environmental and Socio-Economic Awareness. We invite practitioners to try the audit tool and map out which competencies will be most useful to the community as we continue to refine Made in Africa AI Approaches in MERL.
