Hands on with GenAI: predictions and observations from The MERL Tech Initiative and Oxford Policy Management’s ICT4D Training Day


On Thursday 21st March, The MERL Tech Initiative and Oxford Policy Management ran a 1-day Training Day workshop as part of the ICT4D Conference in Accra, Ghana. 

The workshop brought together Information and Communication Technology for Development (ICT4D) practitioners and academics to discuss the implications of GenAI within international development and MERL, and offered them the chance to improve their skills, from prompt engineering to using an LLM to analyze audio data. We also provided participants with a crash course in the ethical challenges raised by the advent of GenAI, along with the opportunity to come up with ideas on how to mitigate these. The workshop was rich in insights, many of which you can find documented in the slides used throughout the day.

To kick off, Paul Jasper from Oxford Policy Management led a panel discussion to hear from experts already engaged with implementation or policy relating to GenAI, with a particular focus on key trends and developments that they believed would make a difference to Low- and Middle-Income Countries (LMICs). We of course aimed to avoid contributing to the mountain of hype around GenAI, instead breaking through the noise and providing a pragmatic overview of the possibilities and risks associated with its many potential uses.

Here’s my hot take on some of the most interesting predictions and issues raised on the day.

Voice will drive GenAI uptake in LMICs

Uche Edwin shares his predictions as Milton Madanda, Mariela Machado, and Paul Jasper look on.

Both Uche Edwin from Data Science Nigeria and Lukas Borkowski of Viamo stressed that voice, not text, would drive AI uptake in LMICs. Given the hype that still exists around (text-based) chatbots, this was a timely reminder: 2.7 billion people are still offline, and literacy rates have not evolved in line with connectivity, especially for women. At the same time, Milton Madanda from Reach Digital Health noted that those who are not using web-enabled, text-based services are not always hampered by these structural barriers – but sometimes choose voice-based services out of personal preference. This was a great reminder for us all not to overlook users’ agency as consumers in our efforts to highlight genuinely problematic barriers.

Having said this, we need to be honest with ourselves that these are extremely early days, and we therefore lack real evidence to back these predictions up – Viamo’s and others’ interesting experiments with voice-based GenAI notwithstanding. How many people in LMICs are actually using some form of AI- or GenAI-powered service or tool? Who are they? How many understand them? Do they trust them? And do they have the capacity and agency to use them in a way that maximizes the potential for their livelihood and wellbeing, and minimizes the risks associated with their use? We need to ensure we are laying the groundwork for GenAI’s useful and ethical rollout within development by first answering some of these questions – something that was also corroborated by our workshop on GenAI and SBC earlier in the week.

GenAI will democratize implementation…or will it?

Another prediction that emerged from our discussions was that the availability of no-code chatbot building tools would lead to a proliferation of locally created GenAI services. This may well be the case when it comes to commercial uses, in the same way that businesses across Africa and Asia rapidly leveraged WhatsApp’s business API to better communicate and sell to customers. However, whilst AI-related policies are taking shape (for example, the EU Artificial Intelligence Act, or the Singapore National AI Strategy), digital infrastructure, regulatory bottlenecks, and growing inequities in digital literacy may slow this process down.

Viamo presenter Lukas Borkowski highlighted the need for Large Language Models in more languages.

Similarly, because LLMs are developed using (mostly) data from the Global North, GenAI products developed even by local stakeholders may continue to perform poorly in local languages. What could change this dynamic is for funders to prioritize not programmes that seek to use GenAI to reach community members, but rather organizations such as Data Science Nigeria and Lelapa AI, which are already working to rectify this imbalance by developing datasets from, for example, years’ worth of call-center conversations.

As Mariela Machado from IEEE pointed out, it’s worth remembering that the more LLMs are used by users in the Global North, the more models will reflect their data – and therefore their language, biases, and preferences. While global economic, gender, and digital divides still persist, LLMs will need constant fine-tuning in order to reflect diverse realities and needs. Finally, it’s also important to keep reminding ourselves that we haven’t yet seen the shifts in funding models and power dynamics that would need to be in place for GenAI to be the much-needed catalyst for locally led programming. (These are the kinds of questions that I hope we can explore more in MERL Tech’s Natural Language Processing Community of Practice.)

Hands on with GenAI

Practical sessions took up most of the day. Here participants are practicing prompt engineering.

After our panel discussion, we moved onto some practical sessions, including an introduction to prompt engineering, no-code chatbot building using HuggingFace, voice data analysis, and model training. The content of these sessions is documented and available here. Some of the most interesting points emerging out of these activities are below. 

Prompt engineering is dialogic…but the way we are (mostly) implementing it for community members is not.

Prompt engineering is the process of refining the way we give a GenAI-powered tool a task (including in the shape of a question) in order to guide it towards providing the most relevant answers or outputs for our specific needs. It happens both in the front-end (when we are asking a question of ChatGPT, for example) and in the back-end (when creating a custom chatbot agent, for example). Linda Raftree and Mariela Machado provided participants with best-practice tips, and finished by emphasizing the iterative nature of prompt engineering – encouraging us to see it as a dialogue between us and the model, not a search engine result.
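To make the iterative idea concrete, here is a minimal sketch of how a prompt might be refined between attempts. The `build_prompt` helper and its parameters are hypothetical illustrations, not any tool’s actual API – the point is simply that each refinement (a role, a desired output format) is one turn in the dialogue with the model:

```python
def build_prompt(task, role=None, output_format=None):
    """Compose a structured prompt from iteratively refined parts.

    Each optional part narrows the model's answer space; prompt
    engineering is adding or adjusting these parts between attempts.
    """
    parts = []
    if role:
        parts.append(f"You are {role}.")  # persona/context instruction
    parts.append(task)                    # the core task or question
    if output_format:
        parts.append(f"Respond as {output_format}.")  # output constraint
    return "\n".join(parts)

# First attempt: a bare question, like a search query
v1 = build_prompt("Summarize this focus-group transcript.")

# Refined after an unsatisfying answer: add a role and an output format
v2 = build_prompt(
    "Summarize this focus-group transcript.",
    role="a MERL analyst reporting to a program manager",
    output_format="three bullet points, with one risk flagged per bullet",
)
```

In practice you would send each version to the model, inspect the answer, and refine again – treating every reply as feedback on the prompt rather than a final result.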

This important point got me thinking about the fundamental disconnect between the way we (as practitioners) are learning to use GenAI and the ways in which we envisage deploying it for our users. When we plan to use an LLM as part of a chatbot supporting teenagers with their sexual health, or to help Community Health Workers respond to a safeguarding issue, for example, we are mostly designing these tools in a way that encourages users to think of them precisely as a super-smart search engine. The user types a question; they get an answer that appears reliable. As we start rolling out GenAI-powered services to community members, we need to equip them with the same understanding and skills that we ourselves are benefitting from, or we run the risk of creating, at best, user disappointment and churn (“yet another tool that doesn’t quite meet my needs…”) and, at worst, reinforcing misinformation and a deadening of critical thinking (“this AI sounds so convincing, and unlike Google, it just gives me One True Answer like magic!”).

Through the looking glass: now ICT4D practitioners are beneficiaries too!

Working in development is a bit like having a family member who frequently embarrasses you, and sometimes makes you want to report them to the authorities, but whom you ultimately invite to Eid / Christmas celebrations because hey, they have a good heart! Many of us acknowledge that development is rooted in deeply problematic post-colonial and paternalistic structures and mindsets – but we keep at it, because trying to help others and reduce inequalities sure beats selling stuff.

It’s therefore perversely thrilling to realize that when we talk about ethics and GenAI, practitioners need to count themselves as ‘beneficiaries’ too. One of the horror stories we swapped when discussing the ethical implications of GenAI was the idea that proprietary documents, uploaded as part of a Retrieval-Augmented Generation (RAG) workflow, would be incorporated into the LLM being used, and the data within them therefore made available to anyone who might try to find it. A more mundane but equally shiver-inducing example involves the use of AI assistants within Zoom calls – who hasn’t used those 5 minutes waiting for other attendees for a much-needed catch-up or vent? Don’t worry, a transcript of that personal conversation has automatically been emailed to all 15 invitees!

In the same way that the introduction of preferred pronouns in email signatures has given us an easy way to flag important information about how we identify, we need to develop and adopt new language and processes for ourselves, and the community members we’re trying to support, to set boundaries around how our privacy (personal and professional) is already being infringed on by GenAI. It would be naive to suggest that we are ‘all in the same boat’ when it comes to learning about how GenAI can put us at risk (as opposed to, say, the advent of mobile, where practitioners had a good few years’ head start in terms of personal use) – this is patently untrue. However, there is an opportunity to use these very fresh, personal experiences to make us better at (truly) empathizing with the needs of our users – and therefore better at working with them to confront these complex challenges.

We are all still just at the beginning of our journey with GenAI – and one of the main takeaways of the workshop was an urge to lean in: GenAI is not going away. The active participation in this workshop, as well as in the one on GenAI and SBC, not to mention the other AI-related sessions during the conference itself, shows how much demand there is for practical, hands-on sessions. At the same time, the ability to run sessions such as these also demonstrates that barriers to wielding GenAI’s potential are disappearing: it takes 5 minutes to build a bespoke chatbot, and we trained an LLM to understand and respond in Igbo in the space of one seminar. I hope you’ll join me and the many others coming together to keep learning in the NLP CoP, a safe, dynamic and forward-thinking yet critical space where we run events, share resources, and foster valuable connections.
