Event recap: Accountability in the age of humanitarian AI — A conversation hosted by the MERL Tech Initiative and CDAC Network
On August 6th, the MERL Tech Initiative and CDAC Network convened a cross-community-of-practice discussion on accountability amid growing humanitarian exploration of AI use.
Accountability is a critical lens through which humanitarians can both scrutinise and understand their work, ensuring ongoing alignment with long-standing humanitarian principles. This is especially useful in a moment of great upheaval where humanitarians are finding their status quo upended by substantial technological change alongside momentous geopolitical shifts.
Against this backdrop, CDAC Network’s Humanitarian AI Community of Practice and MTI’s Humanitarian AI+MERL Working Group came together to ask: what kind of questions should humanitarians be asking themselves about AI use? What kind of technical knowledge might they need? And most importantly, how can they uphold their commitments to communities in the face of growing financial pressures and techno-solutionism?
Speakers
To help us find some answers we were fortunate to be joined by a highly-experienced speaker panel working at the forefront of the accountability question. On the call we heard from:
- Stella Suge, Executive Director of FilmAid Kenya. FilmAid’s work facilitating access to ICTs among Kenya’s refugee population, as well as host communities, gives the organisation an intimate knowledge of what frontline communities are thinking and feeling.
- John C. Havens, Global Staff Lead, IEEE Planet Positive 2030, and former Executive Director of The IEEE Global AI Ethics Initiative. John has written extensively about AI systems; his most recent book is titled ‘Heartificial Intelligence: Embracing Our Humanity to Maximize Machines’.
- Anjali Mazumder, Director of AI Accountability, Inclusion and Rights at The Alan Turing Institute. She is part of the Standards and Assurance Framework for Ethical AI (SAFE AI) project.
- Linda Raftree, Founder of The MERL Tech Initiative where she has been looking at what it means to use AI safely and accountably within research, and asking more broadly what the use of AI in evaluation looks like.
- Helen McIlhinney, Executive Director at CDAC Network, a global alliance committed to ensuring people can access safe, trustworthy information during crises. Helen brought to the panel her experience leading the network, which has long focused on participation and shifting power, along with her operational field, policy and donor background, and moderated the audience Q+A.
**Both our panellists and attendees shared a wealth of resources speaking to the humanitarian AI moment. At the end of this post you can find a list of resources we have collated from the call.**
Why do humanitarians need to be talking about AI accountability?
In both communities of practice, MTI and CDAC have been asking questions about the way in which AI changes accountability for humanitarians. Following a joint event at RightsCon we thought our communities of practice would benefit from coming together to dig deeper into the humanitarian stakes of AI accountability.
Recent work at MTI has focused on bringing lessons on ethics and responsible tech development from previous waves of technological progress and applying it to the AI age. Linda noted that there is already growing knowledge about the lack of representation, participation, and consent built into current LLM development that raises sector-specific concerns for the humanitarian space where these elements are fundamental aspects of humanitarian work.
The CDAC Network’s new strategic focus reinforces information as aid, and the importance of participation and accountability in the face of advancing technology and the ongoing dismantlement of humanitarian funding. Helen shared the efforts of the FCDO-funded SAFE AI project with CDAC, The Alan Turing Institute and Humanitarian AI Advisory. The project focuses on efforts to create practical AI compliance and regulation guidelines, AI assurance tools, as well as to ensure affected communities can participate and have a real say in how AI is used and addresses their needs.
Accountability as a multi-faceted endeavour
There are many facets of accountability for humanitarians in relation to AI. In a collective temperature-taking at the start of the call, we asked participants what areas of AI-related concern humanitarians ought to be prioritizing. Governance and standards, participation, and privacy were the most critical to the group. Our panellists reflected on the interconnectedness of these priorities, noting that community participation, standards, and evaluation overlap and mutually reinforce one another.
Key takeaways
1. Community participation should be at the heart of humanitarian accountability efforts
“To really manifest or create AI systems that consider accountability, we have to ask what we mean by accountability: accountability to whom? For whom?”
Amongst both speakers and participants there was a strong consensus that community members’ knowledge is integral to humanitarian work. Drawing on her experience of working directly with and responding to communities, Stella noted the importance of reconfiguring who defines the problems before getting caught up in discussions of AI tools. This is especially critical when, as she shared, AI tools may be used to determine aid distribution in a time of increased scarcity and service reduction. Affirming participation should start at the foundations of AI, notably in language accessibility, which Stella described as an ongoing obstacle to engagement. The unknown of how AI tools will help or hinder localization efforts reinforces the importance of community consultation.
John’s reflection on the tendency of technology to embody narrow ways of being in the world adds to the impetus for more sustained, wide-scale community consultation. Linda cautioned against equating inclusion in the data sets that train AI with meaningful participation, and called for more active involvement of communities.
2. Standards setting needs to be a consultative endeavour that draws on ethical frameworks and technical tools in a holistic manner
“We use the term AI systems as a reminder like a car: a car doesn’t move without gasoline or electricity. It’s the same thing with an algorithm: it doesn’t run without data — usually human data.”
In describing his experiences of consultation processes at the IEEE, John shared the centrality of human rights as a foundational principle of standard setting. However, he warned against an overly Western ethical perspective, noting the significance of other ethical frameworks, from Ubuntu to Shinto, that can expand how we approach standards setting with regard to AI. Though AI is a technical medium, he argued for a continued emphasis on the human when discussing technological tools. At the IEEE, connecting technically minded engineers with socially minded experts concerned with the end user’s perspective widens understanding of how technological tools may be experienced, and of the risks or implications of use that are often less visible to engineers who are, as John characterised, ‘only one part of the value chain.’
Anjali echoed these challenges of bridging social and technical divides, especially in the fast-paced AI ecosystem where rapid changes can be difficult to stay on top of. Overwhelm is common: the proliferation of standards, ethics guides, and responsible approaches, alongside ongoing technological change, means many in the humanitarian sector do not know where to start. Anjali offered contextualised model cards (one of the key outputs of the SAFE AI project) as one way for humanitarians to bring together technological specificity, ethical practice and participation. She argued that model cards can also function as a ‘reporting mechanism’ that speaks to the need to provide transparency as part of accountability in the sector.
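To make the model card idea concrete: a contextualised model card pairs technical facts about a model with its deployment context and the communities it affects. The sketch below is purely illustrative and is not the SAFE AI template; all field names and the example values are assumptions made for demonstration.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal illustrative sketch of a contextualised model card.

    Field names are hypothetical, not drawn from the SAFE AI project;
    the point is pairing technical detail with deployment context.
    """
    model_name: str
    intended_use: str                 # the humanitarian task it supports
    deployment_context: str           # where and with whom it is used
    languages_supported: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    community_consultation: str = "none documented"
    accountability_contact: str = "unspecified"

    def transparency_report(self) -> str:
        """Render the card as a plain-language summary, so it can
        double as the kind of 'reporting mechanism' described above."""
        return (
            f"{self.model_name} is used for {self.intended_use} "
            f"in {self.deployment_context}. "
            f"Languages: {', '.join(self.languages_supported) or 'not stated'}. "
            f"Known limitations: {'; '.join(self.known_limitations) or 'not stated'}. "
            f"Community consultation: {self.community_consultation}. "
            f"Accountability contact: {self.accountability_contact}."
        )

# Hypothetical example entry
card = ModelCard(
    model_name="example-translation-model",
    intended_use="translating community feedback",
    deployment_context="a refugee-response programme",
    languages_supported=["Swahili", "Somali"],
    known_limitations=["weaker performance on dialectal speech"],
)
print(card.transparency_report())
```

Rendering the card as plain language, rather than a technical spec sheet, is what lets it serve transparency towards communities as well as internal compliance.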
3. For AI standards to be implemented, humanitarian organizations will need to revisit their training, learning and evaluation processes
“Standards say one thing, but operational practices look very different.”
In the opening poll, participants noted technical barriers as one of the most critical challenges humanitarians face when it comes to AI accountability. Many organizations are not prepared for the technical lift required to vet tools, design architectures and responsibly deploy AI systems within humanitarian work. However, as Anjali pointed out, humanitarians need to see AI uptake as a collective humanitarian journey, one that is not just about ‘one person or one team, and not even limited to technical folk,’ but also engages organizations as a whole to better ready them for safe and responsible use.
Linda noted that these internal limitations in technical knowledge have external consequences, with humanitarians unable to explain the tools they are using to community members. The problem is exacerbated by a lack of methodological readiness, as humanitarians have not yet developed evaluative processes that can handle the technical intricacies involved. Linda gave the example of evaluating an LLM versus evaluating the humanitarian application sitting on top of the LLM, and the challenges that may arise in doing so. She emphasized the importance of understanding the ‘whole arc of evaluation’ as part of sectoral preparedness.
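The distinction between evaluating an LLM and evaluating the application built on top of it can be sketched in code. The snippet below is a toy illustration, not any project's actual evaluation method: the model is a stub, and the application layer (prompt framing plus a hypothetical translation step) exists only to show that the two evaluations test different things.

```python
def base_model(prompt: str) -> str:
    """Stand-in for an LLM call; a real evaluation would query an actual model."""
    return "Distribution point A opens at 9am."

def humanitarian_app(question: str, user_language: str) -> str:
    """The application layer sitting on top of the model:
    prompt framing and a (hypothetical) translation step."""
    prompt = f"Answer plainly for an aid recipient: {question}"
    answer = base_model(prompt)
    if user_language != "en":
        # Placeholder for translation into the user's language
        answer = f"[translated to {user_language}] {answer}"
    return answer

def evaluate_model(cases):
    """Model-level evaluation: does the raw LLM output contain the expected answer?"""
    return sum(expected in base_model(q) for q, expected in cases) / len(cases)

def evaluate_application(cases, user_language):
    """Application-level evaluation: does the full pipeline, including
    translation and framing, serve the end user correctly?"""
    return sum(
        expected in humanitarian_app(q, user_language)
        for q, expected in cases
    ) / len(cases)

cases = [("When does the distribution point open?", "9am")]
print(evaluate_model(cases))             # model-level score
print(evaluate_application(cases, "sw")) # end-to-end score
```

A model can score well in isolation while the application around it fails (for instance, through a poor translation step), which is why the ‘whole arc of evaluation’ matters.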
4. Now, more than ever, humanitarians need to be communicating the ‘why’ of their work
“We are still a sector that is largely ticking boxes.”
Amidst the clamor for participation, Stella pointed out that action is sometimes overlooked. She noted that humanitarian consultation is not always geared towards change or changing practices; rather, immense amounts of data are collected without responses changing as a result of the inputs gathered from communities. One starting point she mentioned is for humanitarians to focus on communicating the ‘why’ of their work: explaining to communities why data is being collected, how it will specifically be used, and what the benefit will be to those who give their input.
Q+A: Power and the use of AI
In the Q+A we heard from Helen about the power dynamics humanitarians often find themselves negotiating both internally and externally. The humanitarian sector is deeply intertwined with wider global shifts in power—a reality that has been made acute in recent times where, as Helen described, the sector is ‘witnessing a transfer of power around these technological systems.’ In such a moment she argued for the importance of good governance and building consensus towards humanitarian practice regarding AI tools.
In a wider moment of recalibration, humanitarians therefore need to ask themselves: whose preferences do communities respond to? Humanitarians focus a lot on the data they want to use and collect, but communities have their own preferences and priorities. As one participant noted, communities may make their own demands when it comes to information collection and humanitarian activities. Humanitarians need to be able to make the case to communities for AI usage, or community members may be unwilling to engage without a clear understanding of its utility.
Looking ahead: Humanitarians are committed to, but not currently prepared to navigate the accountability demands of AI tools
Our call made clear the desire for a deeper reckoning with humanitarian accountability in the face of advancing AI tools and systems. However, it also testified to deeper, perhaps more existential, questions that remain about the broader enabling conditions required in the sector to support accountable AI use by humanitarians. It is clear this is just the beginning of a much lengthier, ongoing discussion.
We thank our speakers and participants for their engagement and lively discussion.
If you would like to continue these discussions you can do so by joining the NLP CoP, a community of over 1600 development and humanitarian practitioners working at the intersection of AI, digital development, and MERL. We love hearing from our members; if you have ideas for future events, a desire to speak at a Working Group meeting, AI for humanitarian MERL use cases (successes or failures!), or have a project you would like to reach out about, please contact Quito, our group lead.
You can also find out more about the FCDO-funded SAFE AI collaboration between CDAC Network, The Alan Turing Institute and Humanitarian AI Advisory here.
Resources
- CDAC on building guardrails into AI: “The New Humanitarian | Humanitarian AI guidelines: The clock is ticking to create minimum standards”
- FilmAid Kenya and CDAC’s project on perspectives on AI from Kakuma refugee camp: “Perspectives on AI from Kakuma refugee camp, Kenya”
- MTI’s recently released vendor-vetting tool: “Tool for Assessing AI Vendors”
- An introduction on how to apply metrics and consider design from a humanitarian (but not anthropocentric) viewpoint: “Prioritizing People and Planet as the Metrics of Responsible AI”
- Overview of the TESCREAL bundle by Timnit Gebru and Emile P. Torres: “TESCREAL: A Quick guide to the mythologies driving tech power”
- IEEE CertifAIEd certification (stemming from IEEE 7000) that has a newly developed educational form of certification (for individuals): “IEEE CertifAIEd Professional Certification”
- New IEEE standard focused on AI Procurement: “IEEE Standard for the Procurement of Artificial Intelligence and Automated Decision Systems”
- Design focused IEEE standard considering age appropriate design from a human rights perspective: “IEEE Age Verification Certification Program”
- IEEE 7000 standards series focused on Autonomous and Intelligent Systems: “Autonomous and Intelligent Systems standards”.
- Action steps on how to shift the burden onto AI companies to demonstrate how their systems are built: “A rubric for assessing the legitimacy of predictive optimization”
- Thoughts on humanitarian AI usage and “shadow AI” with the Humanitarian Leadership Academy and DFS: “How are humanitarians using artificial intelligence in 2025?”
- Wider evaluation processes of AI systems in humanitarian and development work: “Contextual Awareness is All You Need: Intro” and “An AI Evaluation Framework for the Development Sector”
- Culturally relevant evaluation: “Reflections on meeting the challenge of communicating the validity of culturally responsive evaluation (CRE) and getting influential voices and changemakers to listen”
- Indigenous data principles: “CARE Principles for Indigenous Data Governance”