Technocolonialism in the age of Humanitarian AI
“Technocolonialism refers to how digital innovation, data and AI practices entrench power asymmetries and engender new forms of violence and new inequities.”
On February 11th the Humanitarian AI + MERL Working Group hosted Mirca Madianou, Professor and co-director of the Migrant Futures Institute at Goldsmiths, University of London, to discuss her new book “Technocolonialism: When Technology for Good is Harmful.” The event included a short author presentation and a moderated Q+A, followed by audience Q+A. A big thank you to Mirca for joining us and sharing insights into her work and research.
Understanding humanitarian tech through the lens of technocolonialism
“Colonialism is not a metaphor – it is an enduring structure whose legacy persists and shapes contemporary formations of race or gender or class. We need colonialism as a framework to explain why technological experiments take place in refugee camps.”
Mirca’s presentation was a rich exploration of technocolonialism and its relevance to the harmful dynamics of tech use in the humanitarian sector.
- The reach of technology has extended throughout the sector: Mirca notes that since her research on the response to Typhoon Haiyan in 2013, where technology was used as part of feedback collection, tech use has accelerated. Technology now features in a wide variety of activities across the sector, such as service provision, algorithmic determination of aid recipients, and AI-based prediction of refugee flows. Against a backdrop of increasing humanitarian emergencies and a shortage of funding, many technological tools are being championed as solutions to humanitarian challenges.
- Technocolonialism traces relations of power embedded within technology use: By mapping relationships of power and following data trails, Mirca shows that digital innovation and AI entrench power asymmetries between global North and global South. She suggests this overlap is unsurprising — humanitarianism is entangled with the colonial project, emerging out of colonial expansion in the 19th and 20th centuries. The power asymmetry of tech use in the humanitarian sector mirrors that of colonial rule. In both instances, science and technology are used to ‘mould subjectivities through systems of classification.’
- The six logics of technocolonialism: Technocolonialism is driven by six key logics which underpin the analytical framework of Mirca’s work. These are the logic of humanitarian accountability, the logic of audit, the logic of capitalism, the logic of solutionism, the logic of securitisation, and the logic of resistance.
- AI and the logics of technocolonialism: AI tools embody these six logics of technocolonialism. The intertwining of AI tools with Big Tech and private capital is demonstrative of the logics of solutionism and capitalism, while assumptions of neutrality, efficiency and truth in relation to AI have led to claims that AI tools can advance the logic of accountability.
- Accountability and humanitarian AI: The overarching lack of transparency in AI (what data models are trained on, how algorithms are built and coded), combined with the way AI is often used in the background of humanitarian work, makes it exceptionally challenging for both humanitarian staff and communities to understand how tools operate and why certain decisions are made. Exclusion errors and biases produced by these systems are thus incredibly difficult to address or contest.
- Suspicion and the drive to surveil: AI tools are part of a wider securitisation of humanitarian work, used to discern threats and anomalies. Much of this approach is underpinned by deep suspicion, especially of fraud, within the sector. AI enables this suspicion and facilitates the creep of surveillance into humanitarian work. Mirca argued that it is precisely this culture of suspicion, and the creation of attendant systems, that can contribute to a self-fulfilling process whereby honest mistakes, biases in the systems, and misunderstandings are read as purposeful.
- Infrastructure and the codification of technocolonialism: As humanitarian technologies are built into larger infrastructures, they are increasingly interoperable with other, external systems. This means biases, misunderstandings and other automated decisions can continue to affect individuals even once they are no longer part of the humanitarian system.
“It is impossible to protect data once they enter these infrastructuring systems. We need a new term to describe the serious risks represented by this new profile of harm. Infrastructural violence allows us to capture how technologically mediated harm is ever more diffused and multiplied – it works in the background, making it challenging for us to pinpoint properly.”
Reflecting on the salience of technocolonialism
In an especially challenging moment for the humanitarian sector, technocolonialism helps ground the many and varied questions about humanitarian technology use. Situating humanitarian tech use within a wider analytical framework lets us identify how different tech tools are entangled with broader structures of power. Event participants asked a wide range of questions about how the concept of technocolonialism, and Mirca’s related ideas of mundane resistance and infrastructural violence, relate to their work.
- On the research method and following the ‘trails of data’: Technocolonialism synthesises Mirca’s research across several different projects, drawing on collaborative study with colleagues and communities, ethnographic and digital ethnographic fieldwork, and a subsequent data trails mapping process aimed at understanding how data flows across the humanitarian sector: how it is used and where it goes.
- The importance of ‘mundane resistance’: Mirca’s presentation introduced the idea of mundane resistance as part of the ‘logic of resistance’. The importance of observing and exercising small, consistent practices of resistance resonated with participants. Mirca drew inspiration from the Black radical tradition and its attention to passive forms of resistance. By capturing small acts of refusal and the appropriation of tools for purposes other than those intended, the term mundane resistance calls attention to resistance in contexts of power asymmetry. One person suggested that resisting the use of AI in their evaluation work was an example of this kind of mundane resistance.
- The tension between resistance and further exclusion: The hopefulness of mundane resistance was especially compelling to attendees, but one participant voiced concerns that resistance might engender further marginalisation. Mirca acknowledged this challenge and shared examples from her new book of how civil society and communities collaborated to use technology in asserting the values that mattered to them. Most recently, pushback on the use of biometrics in Ukraine shows how acts of mundane resistance can constrain tech use.
Missed the event? You can watch a recording of Mirca’s presentation
We know many members of our working group weren’t able to attend the event, so we have recorded the presentation for you. In keeping with our commitment to balancing open spaces with the confidentiality and privacy of our attendees, we have not recorded the Q+A.
Next steps
Keep an eye out for a review of Mirca’s book, where we dig deeper into the concept of technocolonialism and what the framework can offer those working on problems related to Humanitarian AI + MERL.
If you have ideas for future events, a desire to speak at a Working Group meeting, AI for humanitarian MERL use cases (successes or failures!), or have a project you would like to reach out about, please contact Quito, our group lead.
---
📌 Interested in taking part in similar discussions in the future? Make sure to join the NLP CoP, a community of over 1000 development and humanitarian practitioners working at the intersection of AI, digital development, and MERL.
You might also like
- Join us on March 12th: “Tests of Large and Small Language Models on common evaluation tasks”, a webinar with Gerard Atkinson
- Meet us (virtually!) at RightsCon next week
- Getting Real About Artificial Intelligence: GenAI, Evaluation in International Development, and the Case for Caution
- Key takeaways from our first Gender, AI and MERL Working Group meeting