Event recap: The Humanitarian AI Countdown and operationalising responsible AI with Shivaang Sharma



The Humanitarian AI + MERL Working Group kicked off our new event series, the Humanitarian AI Countdown. For our first event, we heard from Shivaang Sharma about his new work on ‘AI Responsibility Rifts.’

Along the way, we discussed the nature of humanity in the context of both humanitarianism and AI systems, the art and science of standard setting, and whose values are embedded within AI development. Read on to find out the four key learnings, three questions, two critical resources, and one policy action Shivaang shared.

How humanitarian-AI teams work in humanitarian and disaster contexts 

Shivaang’s recent research focuses on high-impact case studies of AI use in extreme situations, drawing on his extensive experience thinking about human-AI collaboration in contexts of crisis and conflict.

Four key learnings for humanitarians from your new piece of research

  • AI Responsibility Rifts (AIRRs) are inevitable and must be managed proactively
  • The SHARE framework provides actionable guidance for responsible AI
  • Humanitarian-AI hybrids outperform either humans or AI alone in crisis contexts
  • Information Assurance requires participatory, not traditional hierarchical approaches

‘Rifts are differences in opinions and experiences; divergence in our sense of, and involvement with, AI tools and how these AI tools impact us in our professional lives and as human beings.’

Humanitarians frequently think about risk, but nuances of risk assessment and risk appetite produce perception ‘rifts’. This means that though humanitarians may be using the same language of risk, the details of what they mean can vary drastically. These gaps in shared understanding produce underlying disagreement that makes for challenging policy-making environments.

Three key questions your work prompts humanitarians to ask themselves

  • Whose values are embedded in, or excluded from, an AI tool?
  • How is the continued use of AI tools affecting our professional identity?
  • How do we trace accountability in Human-AI decision-making in sensitive contexts?

Shivaang argued that though humanitarians tend to agree on risks, their subjective experiences of AI produce rifts, or disagreements, especially in the context of human-AI assemblages. As he studied these hybrid systems, Shivaang recognised that as the number of human agents involved grew across varying working contexts, what it means to use AI tools responsibly in sensitive spaces became more layered, and disagreement over red lines began to emerge. These rifts both exist and persist as a result of the evolving capabilities of AI and the differential effects of those evolutions across humanitarian stakeholders.

Two critical resources humanitarians can add to their knowledge base

The sector is grappling with perennial tensions, foremost of which is the frequent development and abandonment of tools, which creates a high level of tool churn. Shivaang suggests there are a few mitigatory tools humanitarians can use. First, organizations should identify all humanitarian stakeholders and their different expectations, values, and contributions. Then they should think about how new features might affect stakeholders differently across the AI pipeline. This collective mapping can help humanitarians begin to chart where rifts may emerge and, crucially, how they may evolve over time.

One piece of policy advice or action for humanitarians to operationalise in light of this research

  • Implement ‘mandatory’ participatory (and open) design processes across all phases of the AI development, implementation, and scaling pipeline

As both the frontiers of technology and humanitarian experience shift, so too will humanitarian perspectives, both individual and collective. And it is not just the technology or the humanitarians: the nature and shape of crises are also changing. Shivaang noted that this makes it all the more important to think about why some tools are scaled up and succeed. Amidst these changes, humanitarians will need to bring together the technical and non-technical aspects of building AI systems. One starting point is a questionnaire developed by Shivaang to help facilitate discussion between different stakeholders and enable grounded, consistent conversation against a rapidly changing backdrop.

Further resources

Thanks to all those who joined the call, and to Shivaang for sharing his insights. If you would like to watch the recording or revisit the notes, you can do so here.

About the series

As humanitarian decision makers grapple with the unknown frontier of generative AI, and with advancements in non-generative AI capabilities, we at the Humanitarian AI + MERL Working Group are actively seeking out exciting new work that can support humanitarians in their thinking.

The Humanitarian AI Countdown is designed to help bridge cutting-edge research and impact- and implementation-driven humanitarian decision-making. Each event will be recorded, and each will ask the same four key questions.

If you have ideas on speakers or are interested in presenting your research as part of this series, please reach out to our working group lead, Quito Tsui.

And if you haven’t already, find out more about joining the NLP Community of Practice here.
