Why gender inclusion in GenAI matters now more than ever
Guest post by Medhavi Hassija and Savita Bailur
The world stands at a pivotal moment in the development of AI. The recent Paris AI Action Summit underscored both the potential and the growing risks AI presents, with a notable shift from AI safety, which focuses on preventing societal harm, to AI security, which emphasizes national security and geopolitical competitiveness in AI advancement. This shift sidelines essential discussions of equality, inclusivity, and ethics. Our paper Gender Inclusion in GenAI: Why Does It Matter? aims to recenter the conversation by examining how GenAI affects women’s participation in key areas, notably economic empowerment, healthcare, and safety and security, while highlighting the urgent need to address biases, technology-facilitated gender-based violence, and digital divides that could further deepen existing inequalities.
GenAI represents a leap forward from previous iterations of AI by generating new content, from text and images to video and code. This has sparked growing excitement about its potential to transform industries. However, recent developments and our analysis suggest that the promise of GenAI is not shared equally. For example, Meta’s recent decision to lay off 5% of its workforce as part of a “shift towards AI initiatives” illustrates the disruptive impact of these technologies, raising questions about who stands to gain and who may be left behind. Our research highlights how factors such as digital exclusion, biased datasets, and insufficient representation in tech can compound the divide, underscoring an urgent need to prioritize gender inclusion in GenAI design and deployment.
The paper is structured around three key sections that showcase both the promise and the risks of GenAI for women. The first section explores its potential to boost women’s economic empowerment, improve healthcare, and support women’s safety. The second section addresses the risks of GenAI, including biased data and privacy concerns that reinforce stereotypes and deepen the digital divide; without consistent checks and balances, AI’s rapid growth risks amplifying inequalities and compromising women’s participation. The third and final section calls for action, sharing key recommendations that include increasing women’s representation in STEM, improving transparency in AI development, and strengthening regulations to prioritize gender equality.
There has been growing sentiment in the aftermath of the Paris AI Action Summit that innovation and competition trumped ethics, user safety, and regulation, and that security was treated as more important than safety online. Now, more than ever, it is critical to maintain a strong voice on gender inclusivity in technology. If the Monitoring, Evaluation, Research and Learning (MERL) sector is going to use AI in its work, it will be critical to consider the harmful gender biases inherent in both the underlying data and the design of AI tools and products, as well as their application and wider effects on girls and women. The MERL sector should also keep a close eye on how current policy shifts affect women and girls around the world. In launching this paper, we look forward to exploring these themes at the upcoming CSW and invite you all to join the NLP-CoP’s Gender, AI, and MERL Working Group for ongoing discussions on gender and AI. Through critical dialogue, we can push for inclusive and representative technologies that truly serve the needs and aspirations of women and girls worldwide.
Read the paper below, and please reach out to savita@merltech.org if you’d like to talk more.