Join the Humanitarian AI+MERL Working Group and CDAC on August 6th to discuss humanitarian accountability needs amidst the growing AI conversation


For humanitarians, accountability is a multifaceted endeavour, and one that the sector has been grappling with for some time now. Emerging technologies have introduced new risks, harms and operational modalities that require further and more specific reflection on accountability. Claims that AI may remake – or at the very least dramatically reshape – humanitarian work still warrant deeper analysis and research. But however much or little these claims play out, humanitarian actors need to be proactive in thinking about the implications of AI use for humanitarian accountability. Ensuring that humanitarian action, and by extension the tools and systems humanitarians use, is transparent and accountable to the communities they serve is vital to upholding humanitarian principles.

Developing a humanitarian-specific understanding of AI accountability

In this session we want to look at humanitarian accountability and AI use from a number of critical perspectives, and we are excited to host several experts to help us navigate this growing and often uncertain terrain. 

To help us think more about these questions we will be hearing from speakers with a range of experience and expertise, including developing technical standards, working directly with communities, helping humanitarian decision makers develop internal policies, and more!

Together we will ask: 

  1. In what ways does AI reshape accountability, or require humanitarians to rethink their approach to it?
  2. What kind of governance should humanitarians develop in response to, or in preemption of, changes brought about by AI? What existing frameworks can we re-use or adapt? And what kind of new frameworks might we need?
  3. What kind of participatory processes should humanitarians use to update and create accountability frameworks fit for the demands of humanitarian AI?
  4. How can standards and regulation play a role in the governance discussion?
  5. What barriers or challenges might humanitarians face in pursuing accountability for AI tools? How can we tackle these?
  6. What kinds of accountability methods and metrics do humanitarians already have that will serve them well in the AI discourse?

Ultimately, as humanitarians confront an increasingly tumultuous operational environment, it is critical to think collectively about if, when and how they can establish shared frameworks to ensure accountability for their own use of AI, and to push AI companies and governments for accountability for AI use in humanitarian settings.

Join the call

Register here to join us on August 6th at 9–10:30am ET / 2–3:30pm BST / 4–5:30pm EAT – the meeting is open to all, members and non-members alike!

And if you haven’t already, find out more about joining the NLP Community of Practice here.

1 comment

  1. I think it can be useful to segment the different types of AI for the discussion – for example, ML vs. generative AI – as well as the governance needs based on "use maturity level", such as individual use, team use, and organizational use. I look forward to the conversation!
