Tips and tensions: What does it look like to design organizational AI policies? (Part 1)
In November, The MERL Tech Initiative (MTI) hosted two events focusing on the creation of organizational AI policies for civil society organizations.
On November 3rd, we gathered online within our Community of Practice and were joined by over 150 practitioners interested in discussing the challenges of developing organizational AI policies. Our speakers, Damini Satija (Director of Amnesty Tech), Danna Ingleton (Executive Director at HURIDOCS), and Dulcie Vousden (Head of Data Science at DataKind UK), generously shared their own experiences designing and implementing policies/guidelines for the use of AI in their organizations.
In this blog, we have collated some of the highlights from that conversation (and in the coming weeks, we’ll share reflections from our in-person Tech Salon in NYC, too).
Consulting teams on how they use AI, embracing multiple perspectives, and being open to the iterative nature of organizational AI policies
We kicked off our discussion with what is arguably one of the most complex aspects of creating organizational AI policies… How do you actually go about creating one in ways that resonate with how your team uses tech, align with your organizational values (and mission!), and make room for the inevitable shifts that will take place as technology continues to evolve?
Here are some of the strategies we mapped out during the conversation:
- Acknowledging that “AI” is not ONE coherent technology and that different uses may require different organizational positions. Though “AI” is often talked about as if it were a single technology, the term actually covers many different types of technologies. In our conversation, embracing that complexity emerged as key to defining organizational AI policies. This can include asking your team questions to understand which tools they are actually using, how, and why. Equally important, as Danna shared, is trying to get a sense of what specific problems or challenges they are trying to solve when using different tools. This information offers useful insight into how different teams make use of AI and can help shape your policy around what ethical applications of AI may look like in each of those situations. In the case of our speakers (and many participants in the event), this was accomplished through staff surveys, discussions, and various consultations to understand how people used different types of AI tools (and why they did!).
- Values and mission alignment. Many of us on the call are understandably concerned about the harm caused by Big Tech (and the AI systems they create), and about how those harms stand in direct opposition to the work we do through our organizations (many participants work in human rights, the humanitarian sector, tech policy, digital rights, and more). In this context, figuring out what an AI policy should include, and what counts as acceptable uses of AI in our work, is also about honoring organizations’ missions and programmatic work. For example, Damini explained that one of their main priorities in designing an organizational AI policy was ensuring internal practices were aligned with the organization’s human rights analysis of AI systems and applications; in other words, ensuring that internal AI policies reflect the organization’s work protecting human rights in relation to technology. As such, engaging in a period of multi-stakeholder reflection, holding principles-led conversations that build on concrete use cases, and establishing clarity on what is and is not acceptable AI use can be crucial to laying the foundation for organizational AI policies that align with organizational values, mission, and programmatic work.
- “Move slowly and measured”. As Damini explained during the online event, the market frenzy around AI development and adoption can lead to rushed decisions, but it does not have to be this way. “Innovation can happen at a responsible pace – responsible and ethical principles don’t stifle innovation, they make room for the right kind of innovation”, said Damini. Our own experience of designing an internal AI policy at MTI, like that of our speakers, was one of embracing the complexity of the decisions to be made and acknowledging the work it takes to “reconcile the relative benefit and the known harm”, as Dulcie put it. Ultimately, when it comes to designing AI policies, it’s ok if the process is slow, it’s ok to revise multiple times, and it’s ok to take time to reach the final version of your AI policy. As Danna wrote elsewhere, when it comes to implementing AI tools, “sometimes, the smartest move is to step back” and not implement “just to ‘keep up’”.
In addition to organizational AI policies, we need to work collectively to demand better practices from tech companies
Identifying red lines is an ongoing process, and the practical repercussions are not always easy to pinpoint. For most of the organizations on the call, it seems important to use tools that are human rights compliant and to take into consideration environmental concerns, the labor implications of AI development, and issues of intellectual property. And many organizations at the event are committed to basing their decisions on these principles. However, in practice, it can feel like we have very limited agency over these decisions.
Because certain companies dominate the infrastructure, services, and norms that shape our online lives, and because they are embedding their AI tools into software routinely used by organizations, it can feel like we have limited options for making impactful decisions. Damini offered that creating differentiated guidance for discrete uses of AI versus AI integrated into existing systems can be a helpful distinction, while also highlighting the importance of externally tackling structural issues alongside defining internal policies. Likewise, Dulcie pointed out that “there are harms we can control by our specific use and there are harms we can influence by advocating for better tech”.
Key takeaways shared by speakers:
“If I were to start now, the first thing I’d do is understand the landscape of the organization and acknowledge this is not a ‘one and done’ process”.
– Dulcie
Dulcie shared that, if she were to start a process now, the first thing she would do is understand how people are using AI, what problems they are trying to solve, and what their red lines are. She also shared the AI Use Survey her staff used, as well as this guide from NTEN, which provides a set of questions you can ask to guide your process.
“It’s not about having a static output.”
– Damini
Damini explained how building in feedback loops and making room for iteration along the way can be key to ensuring your organizational AI policy stays relevant and responsive. That means refreshing the policy every few months, checking in often, and keeping things flexible but values-aligned. This goes for both the principles-based guidance and the technical implementation.
“Don’t start with the tech, start with the humans using it.”
– Danna
Danna offered that thinking about AI through the lens of “function”, and why people are seeking to use it, helps you avoid being overly prescriptive and instead design policies that provide actionable ethical guidelines based on what your team needs.
Coming soon: Tips and Tensions, Part 2
After our online meeting, we organized an in-person Tech Salon in NYC, also focused on designing organizational AI policies. We will share insights from that conversation in the coming weeks, including reflections on how internal capacity is key to designing AI policies, experiences navigating overarching ethical concerns, and the need to move with intentional caution.