Event recap: The Humanitarian AI Countdown and humanitarian knowledge production with Kristin Sandvik
The Humanitarian AI+MERL Working Group greeted the new year by hearing from Kristin Sandvik on her ongoing work examining how AI use is changing humanitarian ways of working. As part of the Humanitarian AI Countdown series, Kristin shared her thoughts on the importance of validated humanitarian knowledge production processes, the double bind of AI, and maintaining both individual and humanitarian voices in the AI age.
Producing humanitarian knowledge in the face of generative AI
With the intensification of AI, and her first-hand experience of how quickly it has reconfigured the way her students and humanitarian colleagues work, Kristin has been thinking extensively about the rapid change in knowledge production structures. She suggested that the timing of AI’s rise makes it especially difficult for the sector to grapple with: “If this had been a decade ago, and AI had sort of surged like it does today, I do think that we would have been able to meet this with sort of more training, more capacity building, a bit more forceful critique, and a little bit more situational awareness.”
Four key learnings for humanitarians from your new piece of research
“It’s [AI] becoming a problem for us, but it’s also the lens through which we see the world.”
- AI changes knowledge production in the humanitarian field. This includes how we communicate, how we plan and carry out aid work and how we analyze humanitarian needs and responses.
- AI is both a problem and a tool for problematization. AI is being framed as a solution to humanitarian problems, and it in turn frames how those problems and their solutions are understood. At the same time, AI is significantly deframing humanitarian knowledge practices, both practice-based and academic.
- The use of AI entails the dissolution of the processes, mechanisms and prisms for producing knowledge about the world.
- Humanitarians are entering a new context of data poverty, characterized by a lack of access and funding (and hence of data), data staleness, censorship, backsliding on humanitarian and tech norms, an overflow of AI slop across the internet, and a lack of oversight.
Kristin argues that as AI use disrupts settled issues and knowledge forms, it is likely to do so in ways that run counter to humanitarian work. Not only can AI be wrong; it can also be of low quality and offer little added value. This makes AI’s knowledge ‘decapitation’ effect, where AI severs the link between knowledge producers and datasets, especially potent.
Three key questions your work prompts humanitarians to ask themselves
“It’s not only that we’re having less good data, it’s harder to find and use it overall. This data poverty is going to become an intensifying problem.”
- To what extent does the precarious state of the humanitarian sector and lack of policy guidance contribute to individual aid workers using AI ‘in the wild’, and what are the consequences for humanitarian accountability, for the sector and its relationship to the world?
- Given current challenges, is there a trend towards delocalization of humanitarian action, and if so, what role does AI play in driving this process?
- To what extent are we seeing a shift from data extractivism to data extrapolation as humanitarians and private sector partners try to fill the gaps (and develop business models), and what are the consequences?
Digitisation has driven humanitarian withdrawal from the field, and AI may only accelerate a phenomenon that has already been taking place. But AI uptake also means humanitarians are withdrawing from the digital management of their work. Known issues with the black box of AI, the roll-out of AI across other critical nodes of humanitarian work, and the absence of policy on usage mean humanitarians are increasingly losing oversight of how decisions are being made. Kristin noted that the shift to extrapolation enabled by AI tools is likely to accelerate this disconnect. The ability to utilise AI to triangulate between multiple types and sources of data could disrupt the ‘ability of [humanitarians] to calibrate interventions.’ In the shift to extrapolation, humanitarians will need to think harder about when they do not have enough data to make decisions.
Two critical resources humanitarians can add to their knowledge base
“You’re not only supporting your communities that you serve, but you’re also extracting data from them. It’s become part of humanitarian operations that there’s something coming back.”
- Calibrating AI/d talk: framing perceptions, reframing policy, and deframing knowledge, Kristin Bergtora Sandvik
- A New Digital Divide? Coder Worldviews, the Slop Economy, and Democracy in the Age of AI, Jason Miklian, Kristian Hoelscher
Humanitarians often treated previous waves of technology uptake as singular applications. Over time a more holistic understanding has emerged, one that situates these technologies within broader infrastructures of capitalism and technology. It is within this context, Kristin argues, that humanitarians need to revisit some of the underlying assumptions of their work. Indeed, as generative tools are integrated into the sector, much of the institutional framework of humanitarian organisations could be exposed to new vulnerabilities. Her suggested reading by Miklian and Hoelscher begins to discuss more generally how the informational quality divide unsettles the possible positive contributions of AI tools.
One piece of policy advice or action for humanitarians to operationalise in light of this research
“The uncanny valley of the [AI-generated] word, it’s almost as if a human could have written it, but not quite, and it fills us with discomfort, and I do think that we should really maintain that discomfort. We should maintain our own voice, and we should, you know, stay in control.”
- Humanitarians need to develop policy and find ways to invest in ethics training and critical digital literacy now
In the Q+A we discussed the challenge of training and the importance of broader literacy. As humanitarians develop tools to support their staff, they will need to build individual understanding of AI generally in order to aid more specific use-case decisions. As several practitioners pointed out on the call, many humanitarians are already informally using AI. Organisations need to rapidly improve literacy to better support the reach of, and adherence to, any policies that do emerge. Some organisations are already thinking about literacy as a way to help staff make more discerning professional and personal use of AI.
Further resources
Huge thank you to our lively participants, and all those who shared their thoughts, ideas and insights on the call. If you would like to watch the recording or revisit the notes, you can do so here.
You can also read Kristin’s latest article on ‘AI in Aid: Experimentality, Maldata, and Data Extrapolation’ and the growing norm of digital experimentation in the humanitarian sector.
About the series
As humanitarian decision makers grapple with the unknown frontier of generative AI, and with advancements in non-generative AI capacity, we at the Humanitarian AI+MERL Working Group are actively seeking out exciting new work that can support humanitarians in their thinking.
The Humanitarian AI Countdown is designed to help bridge cutting-edge research with impact and implementation-driven humanitarian decision-making. Each event will be recorded, and will ask the same four key questions.
If you have ideas on speakers or are interested in presenting your research as part of this series, please reach out to our working group lead, Quito Tsui.
And if you haven’t already, find out more about joining the NLP Community of Practice here.