Event recap: “Should we be using AI right now?” A conversation about the ethical challenges of the current moment


How is the increasing connection between Big Tech and the rise in authoritarianism impacting our decisions around the use of technology in our work? What do we make of the elimination of critical AI regulations regarding safety and guardrails, in spite of robust scientific consensus on how AI can exacerbate bias and discrimination? What options do we have in light of the environmental impact of AI and the impending climate crisis?

These are some of the questions we explored at a recent meeting of the Natural Language Processing Community of Practice (NLP-CoP) on May 8. A group of over 90 professionals from the international development, humanitarian, and social impact sectors, as well as journalists, researchers, and technologists, came together to discuss how we are navigating the practical and ethical challenges around the use of Artificial Intelligence in the current context.

We opted not to record this session so that participants would feel more confident about freely sharing their thoughts and challenges, but in this blog post, we summarize some of the key takeaways from the conversation. 

Technosolutionism, magical fixes, and lack of oversight 

During our call, participants talked about increased pressure to use AI in the humanitarian, development, and government sectors. They said that AI and emerging technologies are often portrayed as magical solutions to problems that tech cannot actually solve, because the root causes of those problems lie in power, resources, processes, and people. These narratives often rest on unjustified optimism about the technologies and a lack of care in how they are implemented. Much of this pressure stems from the tech industry pushing a narrative of efficiency, cost reduction, and solutionism. At the same time, many pointed out that there has been insufficient oversight of the ways AI can go wrong in the humanitarian, social impact, and development sectors, until it does go wrong.

The “efficiency” narrative doesn’t always hold up, participants said. Using tools in ways they’re not designed for, making decisions overly influenced by claims of increased efficiency without careful assessment of actual results, and limited oversight of implementation are all components of a technosolutionist approach that does not necessarily lead to beneficial results, and often leads to the opposite. Participants pointed out how this can result in decisions with harmful outcomes for people whose lives are impacted by government, development, and humanitarian organizations. As one of our speakers noted, “People in leadership positions are moving fast and breaking things, but when we’re talking about people’s lives, that isn’t a good thing.” Indiscriminate, often irresponsible, non-transparent use of AI is happening in spheres that are crucial to people’s lives, and “the scale at which errors have become both acceptable and extraordinarily harmful is something we haven’t seen before,” they concluded.

Even when AI tools are potentially useful, we need more ways to uphold ethics, transparency, safeguards, and environmental sustainability 

Participants shared anecdotal accounts of how, in their experience, the use of AI in humanitarian, development and social impact sectors (and in NGOs more broadly) has grown in ad hoc ways, leaving many open questions regarding the ethical issues surrounding AI use in general (and specifically, in the sector). When we shared a poll asking participants if they are rethinking their use of AI in light of these challenges and the current moment, 64% (48 out of 75) of those who responded said they were.

Some of the main ethical concerns shared by participants included: the ways in which AI models can reproduce and amplify biases against women and marginalized groups, issues related to data sovereignty, the connection between some AI companies and the rise in authoritarianism, the inaccuracy of gen-AI outputs and disinformation, and the environmental impacts of AI. 

Even among those on the call who see AI tools as useful for their work, there was a perception that the sector lacks ethical frameworks, transparency, and safeguards, and is not doing enough to mitigate or reduce environmental impact. One of our speakers, Kennedy Chelang’a, noted that regulations exist in some places and some institutions, but not everywhere. Many at the meeting shared the perception that although there are plenty of principles, toolkits, and guidance related to responsible AI use, this doesn’t mean it’s all being successfully and ethically implemented.

Another cause for concern relates to the growing hostility towards gender equity and gender justice, with many on the call worried about using LLMs that may double down on gender bias, discriminate against certain gender identities, and advance anti-feminist ideals. With guardrails being removed, there is concern that mainstream AI tools will become even more harmful and biased. Another speaker, Nadine Krish Spencer from Chayn, voiced a concern about the adequacy of existing AI models: “We found that there can’t be feminist AI, because there are no feminist LLMs. So we need to find feminist use cases for AI instead.” At Chayn, Nadine and her team are currently working to build an AI tool grounded in feminist values.

Their research on the use of AI in response to gender-based violence (GBV) found that AI could lead to re-victimisation and disregard for survivors’ experiences. To date, AI agents haven’t been able to engage with survivors of GBV in trauma-informed and supportive ways: agents designed to receive reports of GBV can seem cold and impersonal, and can use triggering language with survivors who need emotional support when reporting a situation that involves violence. As non-profit budgets are cut and claims of increased efficiency and alleged time and resource savings influence decisions about whether or not to use AI, we may end up overlooking potential harm.

One of the challenges of the moment: choosing tools that are more aligned with our values and our needs as a sector

In the call, some of the speakers and participants expressed a desire to see AI that’s built in ethical ways: to opt for AI models and technologies that aren’t harmful, that don’t reproduce or reinforce biases and injustice, that don’t generate such catastrophic environmental impacts, and that don’t contribute to the power of authoritarian regimes.

That desire is surrounded by practical challenges: from the difficulty of finding information about viable alternatives, to feeling confident about switching, to influencing organizational culture around the adoption of other tools, to convincing decision-makers to use a different LLM that doesn’t come from Silicon Valley. As Jonas Norén pointed out, this is challenging but not impossible, and informed decision-making around tech tools is key.

Navigating the current moment means learning how to assess AI tools and models and finding viable solutions. In that sense, some of the most common questions participants are asking include:

  • What options are available?
  • How do we successfully assess these other available options? 
  • What resources and evidence do we need to make decisions about alternative LLMs?
  • How do we navigate the funding challenges? 
  • How do we make the case to decision-makers that we should move to different LLM providers?

While many felt that these challenges have been exacerbated with the advent of commercial AI, some noted that we have experienced these same dilemmas in the past, for example when trying to make ethical choices about technology companies while knowing they participate in war or surveillance and when using social media that violates our privacy and contributes to mis- and disinformation.

***

Our intention with this NLP-CoP meeting was to create a space where community members could share complex, multifaceted questions they’re facing with their use of AI and LLMs in the current context. Though we have not found prescriptive answers to these challenges, it was meaningful to see that many of us are grappling with similar concerns. As the development and widespread use of AI grows at exponential rates, we hope to continue having these conversations about how to navigate the ethical challenges of using (or refusing to use!) AI. 

If you’re interested in having conversations like this one, consider joining our NLP-CoP!
