Humanitarian AI+MERL: mapping community questions, concerns and needs
We were looking to answer questions like: How can humanitarians utilise AI for monitoring, evaluation, research and learning (MERL) processes? In what ways might humanitarian use of AI worsen the digital divide between international and national staff? Does AI present benefits for crisis-affected communities – is the sector able to assess this?
On November 7th, the Humanitarian AI + MERL Working Group had its first meeting, taking a deeper look at these questions and others as we sought to better understand our collective community needs. This working group is part of the Natural Language Processing Community of Practice (NLP-CoP), hosted by The MERL Tech Initiative.
In this collaborative session, we surfaced key areas of interest and critical knowledge gaps found in the sector. Below we share high-level takeaways and next steps for the Humanitarian AI + MERL Working Group.
“How can we use AI as an opportunity to give a positive, much needed shift to business as usual in Humanitarian MERL?”
Community questions and concerns about Humanitarian AI for MERL
- Organizational readiness and the digital divide: attendees shared particular concern that artificial intelligence, including Generative AI, may open up a gulf both within organizations and between them. Many felt that the knowledge and skills required to use AI would privilege larger organizations and staff members located in the Global North.
- Interest in better understanding AI and MERL: though AI is still new to the sector, there was already interest in how it could be used for MERL. Participants in the meeting emphasized the importance of ensuring that any AI use is underpinned by established MERL best practices and humanitarian principles. Discussants also noted it would be important to determine where in the MERL cycle AI could appropriately be integrated.
- Ethical approaches and guidance: concerns about the ethical impacts of using AI in the humanitarian sector were wide-ranging. Job replacement, challenges around oversight, and the inability to discern how AI tools make decisions were all raised as critical areas where more discussion and guidance are needed.
- Prioritising localization: the sector has made commitments towards decolonization and localization that AI tools could undermine. Determining the ways in which AI usage may support or work against localization efforts was a particular area of focus. Several attendees shared examples of work on local ownership and local open source AI – particularly in Africa – that could provide alternative AI tools that support localization.
- Curiosity about using AI to help humanitarian actors parse large datasets: the humanitarian sector holds large amounts of data about participants and programs, but it can be hard to find the time and resources to sift through it all, and this is one area where AI might be useful. Participants in the meeting shared ideas about how AI and big data approaches could be used for synthesising evidence to inform program design and for last mile and anticipatory action. They warned, however, of the potential consequences of these approaches if they don’t follow responsible data practices. In particular, the risks of sharing sensitive participant data were raised as a critical conversation the sector needs to have before pursuing these possibilities.
- Deployment limitations: humanitarian work is deeply challenging and complex, and these working conditions pose particular obstacles to developing humanitarian-specific AI. While some big datasets of participant and programmatic data do exist, the sector acknowledges chronic challenges with collecting high-quality data and with making data accessible for the purposes of training AI. Additionally, communities with oral traditions can be left out of AI development entirely, leading to high levels of exclusion and errors when AI is used.
“I fear that humanitarianism will continue to be ‘more reporting, more data processing’ – as it is now. And if so, AI will simply substitute most of us.”
The fast-changing nature of AI left many participants feeling behind in their understanding – several shared that it felt overwhelming to keep up on an individual basis. Many attendees understood AI as an issue of equity, and the discussion repeatedly returned to the ways in which AI use could negatively impact beneficiaries.
What do humanitarians need to navigate AI use?
“What is the explainability of AI models? How can we understand their answers?”
In the face of so many important questions, humanitarians need practical advice. Devising such advice, however, is not easy. Participants worked together to think about how the Working Group could be a space for collectively identifying the solutions and trainings necessary to build capacity in the sector. Some key suggestions included:
- Developing an evidence base to better understand how AI is being used for Humanitarian MERL and what the outcomes of use are
- Designing a shared set of criteria for tools, including what the sector wants in AI tools and indicators of how to judge their effectiveness
- Determining the appropriate level of AI use from staff, beneficiary and data security perspectives
- Establishing a humanitarian AI playbook that delineates clear standards and approaches to AI use
- Collecting AI+MERL use cases and examples to share knowledge across the sector
- Matching AI tools to evaluation methods, taking a thoughtful approach to what the humanitarian workflow looks like and where AI tools can provide value and utility
The complexity conundrum – does AI leave room for the intricacy of humanitarian evaluation?
For AI to be useful to humanitarians, the sector needs to understand when and how AI can complement the nuances of humanitarian work. The tendency of AI tools to simplify and flatten does not fit with the highly contextualised nature of the field. Our first call only began to dig into the details, and it was clear there is much more to discuss. It is also clear that having a space for humanitarian practitioners to talk about and reflect on the use of AI is vital in an ever-changing, fast-paced environment.
Though the conversation was focused on AI for MERL in the sector, it was clear that many age-old humanitarian issues are salient to questions about emerging AI uses. As one attendee articulated:
“What is the bigger picture? In the years to come, AI might just do our work. We may supervise it, but we need a completely different vision to empower local actors to run it in the way that is best for them. Can we use AI as a Trojan horse to bring in localization of tools and culture – we need to rediscover our roles but we can do more than that.”
Next steps
Our kickoff call surfaced a wide range of questions, challenges and actions. To take this vital work forward, we will be putting together a research agenda to collate community interests, and we will begin convening discussions, developing trainings, and gathering speakers to help working group members answer their questions.
We are excited to keep learning from our community, and look forward to next year as we begin digging deeper into core questions about the role of AI in Humanitarian MERL. Look for an invitation for our next meeting in early 2025!
If you have ideas for future events, a desire to speak at a Working Group meeting, AI for humanitarian MERL use cases (successes or failures!), or a project you would like to discuss, please contact Quito, our group lead.
—–
📌 Interested in taking part in similar discussions in the future? Make sure to join the NLP-CoP, a community of over 1,000 development and humanitarian practitioners working at the intersection of AI, digital development, and MERL.