How can we apply feminist frameworks to AI governance?
Could an intersectional feminist approach to participation, equity, and justice help address the social and political nature of the structural inequalities that are baked into every aspect of the AI supply chain? We drew together a roomful of smart folks to discuss this at our September 19th Technology Salon in NYC: Can Feminist Frameworks Improve AI Governance?
Our fab lead discussants were: Jacqueline Hart, Senior Strategy Advisor, Feminist Humanitarian Network; Eleonore Fournier-Tombs, Head of Anticipatory Action and Innovation, United Nations; Tazin Khan, Founder and Executive Director, Cyber Collective; and Callie Rojewski, Director of Strategic Programming, Cyber Collective. Lina Srivastava provided stellar moderation of this wide-ranging discussion. (Many, many thanks to Lina for stepping in for me at the last minute!) Our gracious hosts for the discussion were Karl Brown and Shirl Haley at Thoughtworks.
Here’s a summary of the conversation:
What is a Feminist Framework? How might we apply it to AI?
Feminist analysis obliges us to question power – where it’s held, by whom, and with what consequences, said one lead discussant, as she framed the discussion. Issues with gender and AI are not merely a technical problem. They are a manifestation of much longer, deeper, historical imbalances in power and its distribution that we find everywhere in our societies. Applied to data and AI, feminist frameworks challenge notions of objectivity. They spotlight the multiple ways in which knowledge is constructed, extracted, and utilized to reinforce unequal systems of power. Applying an intersectional, decolonial, anti-racist, feminist lens to AI and its uses helps us to analyze, develop governance approaches, and prevent and mitigate harms AI can cause.
Data and AI are today largely owned and generated by the private sector, she said. So, “as feminists and activists, how do we engage with that, who are the key stakeholders, what are the levers for change? What resources can citizens, civil society, philanthropy, and government draw on to push for change? How do we engage with the supply chain of AI creation at various levels? Where does power sit at those levels? We need to think about structural inequalities, the real-world impact of AI on humans, and processes.” It’s an issue of understanding where power is located and how we can activate justice within a neoliberal system.
Further, a feminist approach calls us to implement ethics, transparency, and accountability mechanisms. These mechanisms will play out differently in different places and have varied implications depending on the communities involved: where they are and who they are. So, a feminist, antiracist, decolonial frame would help us come to consensus on a process for ethics, transparency, and accountability that could be applied by different communities regardless of the step in the supply chain, where those communities are, or the types of harms they aim to address.
What kinds of AI are we talking about? What are the issues?
There are huge concerns with bias in AI and large language models (LLMs) and with the ways that structural inequality is encoded in AI systems. As one lead discussant explained, ChatGPT (and other LLMs) are trained on two thousand years of narrative from Western civilization. “Google Books, starting from the Greeks, a little bit of Wikipedia, random things from the internet. These narratives contain written and translated inequality that is fed into these models. The narratives have a clear division of gender roles: the private sphere is for women who do the housework and the childcare. The public sphere is where men are speaking, doing the work, and developing the technologies. There are strict boundaries between the public and private spheres in these narratives. So we’re training AI systems on old-school mentalities that we’ve been trying to get out of for the past 100 or so years.” BIPOC and queer communities are also excluded from or negatively represented in these datasets.
[Note: this explainer gives a great overview of natural language processing (NLP) and generative AI. The emergence of “large language models (LLMs)” and “transformer models” has largely fueled the current boom in AI.]
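To make the point about encoded gender roles concrete, here is a minimal sketch of a bias probe, assuming Python with Hugging Face’s `transformers` library installed (plus a backend like PyTorch); the `bert-base-uncased` model is just an illustrative public checkpoint, not something discussed at the Salon. Asking a masked language model to fill in a pronoun for different professions often surfaces exactly the public/private divide described above:

```python
# Minimal bias probe: ask a masked language model which pronoun it
# predicts for different professions. Assumes `pip install transformers`;
# bert-base-uncased is an illustrative choice, not an endorsement.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

sentences = [
    "The doctor said [MASK] would be late.",
    "The nurse said [MASK] would be late.",
    "The engineer said [MASK] would be late.",
]

for sentence in sentences:
    print(sentence)
    # top_k=3 returns the three most probable fill-ins with their scores
    for prediction in unmasker(sentence, top_k=3):
        print(f"  {prediction['token_str']}: {prediction['score']:.3f}")
```

Probes like this typically return skewed pronoun probabilities across professions, which is one small, inspectable trace of the training-data narratives the discussant described.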
What are some examples of feminist frameworks in action?
Communities and movements — indigenous movements, feminist movements, labor movements — have been working together to address power imbalances for a very long time! They have been doing this work by looking at global, systemic issues with multiple touchpoints and entry points. As one person explained, “social movements can help us collectivize around these issues and create common cause around this movement.” For example, domestic workers have been organizing, collectivizing, and creating change at the community level, in policy, in global standards, in labor laws, and more. Autonomous feminist movements (those that are not part of a political party) are another example. Some of them have been successful at creating national policy frames and getting them adopted.
Another framework is the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), which could be applied to emerging technologies like AI. “There is a line in the CEDAW that stereotyping genders, with one seen as inferior to the other, is a human rights violation. So, if AI is stereotyping, it’s violating human rights, and it needs to be held accountable,” said one discussant. CEDAW is problematic, though, because it was created in 1979, when only two genders were considered.
Another person raised the issue that the UN itself still addresses gender in a vacuum, and that “gender” is often still read as “women,” whereas a feminist perspective takes into account the many dimensions of inequality. Gender is a very challenging issue at the UN because the UN is a member state-driven organization, and what happens there depends on what the majority of countries agree on. “Women’s rights as a binary are already contentious, and rights for non-binary genders are even more challenging.”
Where should we start? How do we get people to care about shifting the power?
It’s important to meet people where they are. This is hugely challenging when it comes to AI: it’s so technical that people have a hard time understanding it. “We’re always talking to an echo chamber,” said one discussant, and that’s a problem. “Is Yassim over at the bodega understanding this? It’s gonna affect how he sends money home. It’s going to mean surveillance. This is all connected. Part of the solution is storytelling. The way we get true feminist dialogue is to transfer the power from few to many. That’s what will allow real change to happen.”
Another lead discussant described the importance of taking advantage of social media and viral moments to reach digital natives. It’s critical to avoid leaving people with the overwhelming sense that the world is burning. “Our goal is to ask people, ‘how’s your mind after hearing all this info? What’s coming up for you? What are the tangible resources you can take away?’”
Having a diverse team and knowing how to talk to people and who to talk to is key. “How do you talk to your mom about this? How do you meet people where they’re comfortable? How do we hold space for the nuances and help people think critically? We are looking for personal transformation but also for community activation and ecosystem work. We meet people where they are in their day-to-day lives, in their jobs. We feature non-profits who have experts in these different areas. That’s a reflection of the feminist framework – including folks in that narrative. Making sure people feel comfortable to show up with their whole self, not just their job persona.”
Translation and interpretation are other key tactics, as one discussant said. “Tech bros build these tools, and they only speak in revenue. You have to be able to talk to them about how their long-term scalability and scope will be impacted if they don’t consider the kinds of things we are talking about. You can talk about humanitarian stuff all you want, but they don’t care. We need to push them in economic terms.”
Can change happen from within the companies that develop AI?
Participants were largely skeptical about major change happening inside of tech companies. Some shared their own slow or failed efforts to make change from within. Others brought up women of color AI ethicists like Timnit Gebru, who was fired from Google for speaking up after years of raising the alarm.
“Those who were working within large tech companies and platforms were all pushed out in the last few years as they became more critical,” as one person put it. “It’s a lot about who gets to have a voice in Gen AI tools and in the tech sector. There is not a lot that women can do if they are getting laid off or not getting paid. You have to feed your family; you can’t do this for free.”
A key issue preventing “change from within” is the position of humanitarian and socially-focused projects, which are usually housed in the corporate social responsibility (CSR) division where they are divorced from product roadmaps. Thus, socially focused change never gets rolled out at the heart of a product or platform.
“There’s an opportunity now for the social sector,” however, said one participant. “There’s going to be a huge explosion of startups. AI can’t be done in isolation. You can create a little app or a tool for Facebook, but if you want to use an LLM you have to have something like 175 billion parameters. That means you have to collaborate. Could groups like [the people attending the Salon] work with these new platforms and apps while also pushing and lobbying with the old school tech companies?”
Another person cautioned that B corporations often offer a narrative of social change and social impact, but when it comes down to it, they do not hold to it, because the goals of scale and revenue are stronger. Adjacent to this conversation were criticisms of informal AI labor being framed as a social good. Some felt that social good companies like Karya, which engage people in microwork such as tagging or providing voice recordings to develop AI in less dominant languages, were a positive development; others had major concerns.
“AI labor is becoming informal work. It is being presented as an informal labor success story. For example, refugees doing tagging. For me that is terrible. There is no job security, no benefits, no support if there’s any trauma.” Another person likened this to companies like Sama (formerly Samasource), which was sued, alongside Meta, by Kenyan workers. “There is a language of social good, but all these companies have the same value proposition. They are relying on low-paid workers who need a job; they are not addressing systemic unemployment.”
How can we keep working together on Feminist AI governance?
Participants had a lot more to say and kept the conversation going well past the Salon’s end time. Jacqueline called on the group to continue the discussion and to organize together to build a stronger movement to address these issues.
We’ll continue this discussion at our October 23 Salon (save the date!). Our topic: What would meaningful participation and accountability look like for AI?
Technology Salons run under the Chatham House Rule, so no attribution has been made in this post. If you’d like to join us for a Salon, sign up here. If you’d like to suggest a topic or provide funding support to Salons in NYC, please get in touch!