From hype to pragmatism: building critical AI literacy for sexual and reproductive health professionals in Kenya

On December 2, The MERL Tech Initiative facilitated a 1-day course for 32 Kenyan development, media, and public health professionals on the (many) potential uses of AI within Sexual & Reproductive Health & Rights (SRHR) programmes, supported by iMedia Associates and the Gates Foundation, and hosted by our partner Shujaaz Inc.
The course, adapted from our virtual training on AI for Social and Behaviour Change programming, provided a comprehensive, beginner-level immersion in all things AI for SRHR, including:
- A full overview of the potential uses of AI across the whole programming ecosystem.
- An introduction to the risks of AI, and recommendations on how to mitigate them (including giving yourself permission not to use AI).
- An overview of how Monitoring & Evaluation has changed in the age of genAI.
To embrace, or not to embrace, AI: Supporting ‘informed agency’
Gratifyingly, the anonymous feedback we received from participants was predominantly enthusiastic, but it’s always more interesting to look at what we could have done differently. In this case, one comment in the ‘room for improvement’ pile stood out for us:
“I did not know about AI and [its] impact on climate change 😢.”
This comment, even though it expressed a negative sentiment, felt like a win. Obviously, the aim of our course was not to depress anyone, or even to turn people off from using AI; rather, we wanted to help participants develop critical thinking skills related to AI, to enable them to make informed choices about their own use, and guide colleagues and their wider networks on how to do the same.
For this cohort, this meant gently challenging the prevailing positive sentiment in the room at the start of the day, when words like “exciting” and “possibilities” dominated. We did this by facilitating a wide-ranging conversation on AI risks, taking as a starting point the use case of a voice-based genAI tool capable of answering sexual health questions. Whilst many participants had heard of AI hallucinations and biases, few were aware of the abuses of workers’ rights happening on their doorstep, or of the devastating impact of AI data centers on landscapes and on the climate (hence the teary-eyed comment above, and the stunned silence in the room when we shared some of the testimonies from Kenyan AI data workers). Conversations on privacy risks for those using AI-powered sexual or reproductive health chatbots and period trackers also left participants concerned – and justifiably so, given the recent data agreement between the Kenyan government and the US.
It’s not for us to say what participants should do with this additional layer of information; indeed, I continue to use AI tools myself despite my own work on the harms of AI (a case in point that knowledge and attitude shifts alone often don’t alter behaviours!). Many participants, interestingly, left with an even deeper commitment to exploring how AI could help advance their goals in the sexual and reproductive health space. But the comment does serve as a reminder of the importance of providing (and budgeting for) AI literacy training across implementing organisations – whether with frontline CHWs, programme managers, business development staff, or CEOs – to support more informed decision-making.

Let’s hear it for boring AI use cases!
The second piece of useful feedback emerged not from our formal feedback process, but from a quiet side-chat over lunch. Terry Gichuhi, an AI product manager working with Dimagi and others to develop AI-powered tools, approached me to share that despite her more advanced technical skills, she had found the course useful as a way of framing her work within the bigger picture of sector-wide concerns and opportunities. She appreciated the hands-on exercises we included in the training, covering tasks across the SRHR programme lifecycle, from concept note and Theory of Change development to content and chatbot creation using genAI tools. However, she encouraged us and others to be more ambitious, and to look beyond the basic efficiency gains and chatbot-focused use cases that currently dominate conversations on AI for public health, suggesting that:
“The real opportunity […] is in automating the complex logistics that slow programs down. When AI handles the repetitive heavy lifting, program managers can move beyond administrative tasks and start operating as the strategic leaders they truly are.”
When a Gender-Based Violence expert with decades of experience in the field manages to secure funding, she is currently likely to get bogged down in the often gruelling admin of grant and project management, people management, and bookkeeping. In Terry’s experience, the real need for AI capacity building can’t stop at basic AI literacy, or even at admittedly exciting community-facing uses; it needs to unpack and address the minutiae of programme delivery.
Whether or not the type of agentic AI that would be required to do this delivers on its promise anytime soon remains to be seen – not least because the learning curve for using AI even for fairly straightforward efficiencies (summarising, content creation…) still seems significant, and is not currently supported by programme budgets. But this piece of feedback was an important reminder that our conversations about the potential of AI within SRHR programming (or any development programme) might more usefully start with the question “what’s the most boring thing we could do with AI to support our work?” – not the most exciting.
***
Learn more about our training offer here.