Does GenAI Accelerate Gender Equity or Automate Inequality? A Discussion on GenAI in Women’s Economic Empowerment, Healthcare, and TFGBV
In our second meeting of the Gender, AI and MERL Working Group, Medhavi Hassija and I shared insights from our recent paper on three gender-related use cases: GenAI and Women’s Economic Empowerment (WEE), GenAI and health, and GenAI and technology-facilitated gender-based violence (TFGBV). Before the meeting, we also asked registered participants to share their key focus areas and concerns, and we integrated these into the discussion.
We share our thoughts from the event below, and if you attended, we welcome more discussion on what you took away.
Why this conversation now?
Our conversation started with a reflection on why this working group on Gender, AI, and MERL is relevant now. A few points from that reflection include:
- AI is increasingly used in all walks of life, but the conversation is often bifurcated into the “doom” and “bloom” narratives that Reid Hoffman speaks of. Here, we wanted a more nuanced conversation about the benefits to women as well as the challenges and risks. With the digital divide at risk of becoming an AI divide, we should examine the risks but also how women can benefit from an AI-enabled future.
- Recent global gatherings still centre mainstream AI discussions rather than gender. We’ve noted before how inclusion and equity were absent at the Paris AI Summit, and one participant mentioned that across two days of the European AI and Tech Summit, inclusion was not mentioned even once.
- A Harvard-Berkeley-Stanford meta-analysis of 18 studies (143k respondents) finds that women are roughly 20% less likely than men to use GenAI, even when access barriers are removed. The concern is that skills and wage gaps could widen as productivity gains accrue to early adopters.
- Participants discussed how critical this moment is for baking equity, safety, and measurement into deployment before practices ossify, and agreed that we should use this space to share practical experiences.
Women’s Economic Empowerment, Health, and TFGBV
While women are affected by GenAI in many of the same ways as men, as both designers and users, we chose to focus on three domains where impacts can be specific to women, partly due to time constraints. We also chose practical examples in response to a request from those who registered.
Three domains we probed
| Domain | Potential & pilots | Red-flag risks & evidence gaps |
| --- | --- | --- |
| Women’s Economic Empowerment (WEE) | Micro-entrepreneurs could tap AI for marketing copy, inventory tips, or local-language bookkeeping. Skill-building chatbots lower training costs. Examples of WEE interventions include Shortlist and Digital Green’s Farmer.chat (agriculture). | Lower AI fluency and gaps in AI usage may translate into disparities that reinforce existing pay gaps. An AI divide could amplify the existing gender digital divide. |
| Women & Health | There is a sliding scale from lower-risk uses (healthcare inventory) to higher-risk ones (mental health chatbots). Examples of pilots include Noora Health, Jacaranda Health, Viamo, and Dimagi. X-ray triage is often identified as a use case; we noted this is predictive AI rather than GenAI. | Historically male-centric datasets skew diagnosis models; we have very few studies with evidence on whether such GenAI pilots are working in terms of scale, speed, and impact. |
| Tech-Facilitated Gender-Based Violence (TFGBV) | AI tools can flag deepfakes, hateful memes, and doxxing at scale. Example shared: Violetta, an MIT Solve-backed chatbot offering relationship help and referral pathways (solve.mit.edu). | Detection models can miss contextual cues or mislabel survivor content. There are hard trade-offs between safety, privacy, and freedom of expression; safety standards can help (see link below). |
Cross‑cutting challenges surfaced
- AI literacy as an equity hinge. We discussed how, without targeted use cases or literacy efforts, GenAI may entrench rather than shrink gender wage gaps. Context matters, though: while the meta-analysis cited above found that women use AI less than men, speakers from Viamo and Digital Green shared at a recent meeting of the Social and Behaviour Change (SBC) Working Group at the NLP-CoP that women used AI more than men when relevant content and accessible channels were available (see this blog post for more).
- A “black-box” theory of change. Linda Raftree flagged that AI “solutions” follow the same hype pattern we have seen for years in tech-for-development work: people assume tech can solve social issues without considering wider structural gaps and societal norms. “Make a cool chatbot → ?? → personalisation → efficiency → fix society!” Real causal links (and metrics) are mostly missing, while power, inequities, and systemic issues are ignored.
- Evidence vacuum. The black-box theory of change is closely tied to the lack of evidence. Apart from small pilots – e.g., UNICEF’s comparison of GenAI vs. manual coding in humanitarian evaluation workflows – robust impact studies are rare.
- High- vs. low-risk pathways. We agreed that narrowly scoped, explainable AI (e.g., decision trees for credit scoring) may be lower-risk for women than fully generative chatbots deployed in sensitive domains.
What “good” looks like (emerging guardrails)
- One participant noted that the best use case for AI is a very specific one that has been tested in controlled ways. She suggested picking a single friction point (e.g., automating hotline triage) and attaching crisp success metrics before scaling, citing the example of CHAYN, a non-profit offering tools and support for healing from abuse and trauma that is experimenting with GenAI.
- While it should be obvious, we emphasized bringing end users into prompt testing and model fine-tuning so that diverse use cases are accounted for.
- We discussed using and refining common standards, such as UNFPA’s ten data-privacy principles, which offer a ready checklist for TFGBV projects.
- Overall, our takeaway from this session was that while we agree on the importance of inclusion in GenAI, we need more specific use cases and testing so that we can learn from each other, in terms of both positive and negative learnings. Another conclusion was that the field is changing fast, so we should be open to unlearning and relearning (e.g., perhaps women use GenAI less than men only in certain situations). A third conclusion is that gender is not homogeneous and that intersectionality remains critical. We look forward to future sessions where we can do more evidence sharing to avoid siloed pilots (see our call for participants below).
Shared resources from the chat
- Our paper is here and contains many more references from these three domains of WEE, health and TFGBV.
- Global Evidence on Gender Gaps and Generative AI working paper (hbs.edu).
- What safe applications for TFGBV might look like – MTI’s key questions to ask.
- UNFPA’s Safe & Ethical Use of Technology to Address GBV (unfpa.org).
- Violetta AI survivor-support platform (MIT Solve) (solve.mit.edu).
- UNICEF’s GenAI vs. traditional coding evaluation comparison (knowledge.unicef.org).
- PRUNE, an emerging AI-powered content-moderation platform that uses LLMs to detect, classify, and remove non-consensual and illegal content from the Internet.
- An EU-wide LLM application aimed at detecting online hate that also considers the “wider context”.
- The German “Function”, which uses AI and legal expertise to help systematically identify criminal content, pursue it legally, and hold perpetrators accountable.
What’s next for the Gender, AI and MERL Working Group?
Join us as a speaker or suggest a theme for upcoming meetings
We are planning a series of monthly meetings, in which we hope to create and facilitate a space where we can have critical conversations with community members who are thinking about, researching, and/or working with Gender, MERL, and AI. In each meeting, we’d like to invite 1-2 guest speakers (from the working group and beyond) to share their learnings, insights, and challenges related to Gender, MERL, and AI.
Thanks to those who have gotten in touch so far with ideas to share! Lots of exciting sessions are coming up. Keep an eye on MTI’s blog to learn more about future events, and if you’re not part of the NLP-CoP yet, join here.
You might also like
- Event: What are the resources we need to navigate AI, gender and MERL?
- Event recap: The Humanitarian AI Countdown and humanitarian knowledge production with Kristin Sandvik
- Research Digest 2: State of AI Adoption and Competencies for Evaluators for Made in Africa AI in MERL
- Bias in, bias out? How we’re understanding more about gender bias in LLMs
