Girl Effect Guidelines on Ethical AI-Powered Social & Behaviour Change Chatbots
Originally published by Girl Effect
Girl Effect recently published exciting results from our early testing of whether and how Generative AI can improve sexual and reproductive health outcomes for girls. Our experiments revealed that when we introduced GenAI into our Big Sis chatbot via the ‘ask me anything’ function, product performance and user satisfaction increased significantly compared to our non-GenAI, classification-based system. GenAI users were 17.1% more likely to engage with key programmatic messaging (e.g. HIV prevention, mental well-being), 12.68% more likely to access service information (for example, on the closest clinic), and 11.87% more likely to return to use Big Sis, and they asked an impressive 200% more questions.
As detailed in our previous blog, ensuring the safety of our young female users was non-negotiable, for example, using Retrieval Augmented Generation (RAG) to ensure answers were rooted in our vetted Sexual and Reproductive Health (SRH) and Mental Health content. Additionally, the team implemented guardrails in line with core Girl Effect values to ensure that the answers were relevant, non-toxic, and included options to speak to real people.
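In broad strokes, a RAG setup like the one described above retrieves relevant passages from a vetted content library and instructs the model to answer only from those passages. The sketch below is purely illustrative: the content snippets, retriever (a toy bag-of-words cosine similarity), and prompt wording are assumptions for demonstration, not Big Sis's actual implementation, which would use a proper embedding model and production guardrails.

```python
import math
import re
from collections import Counter

# Illustrative vetted content snippets (hypothetical, not Big Sis data).
VETTED_CONTENT = [
    "PrEP is a daily pill that greatly reduces the risk of acquiring HIV.",
    "Talking to a counsellor can help when you feel anxious or low.",
    "Most clinics offer free and confidential HIV testing.",
]

def _vector(text):
    """Bag-of-words term counts for a toy cosine-similarity retriever."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def _cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(question, k=1):
    """Return the k vetted snippets most similar to the question."""
    q = _vector(question)
    ranked = sorted(
        VETTED_CONTENT, key=lambda doc: _cosine(q, _vector(doc)), reverse=True
    )
    return ranked[:k]

def build_grounded_prompt(question):
    """Constrain the model to answer only from retrieved, vetted content."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the vetted content below. If the answer is not "
        "covered, say so and offer to connect the user to a real person.\n"
        f"Vetted content:\n{context}\n"
        f"Question: {question}"
    )

print(build_grounded_prompt("What is PrEP and does it reduce HIV risk?"))
```

The final instruction in the prompt mirrors the guardrails mentioned above: when the vetted content does not cover a question, the chatbot declines to improvise and instead routes the user towards a real person.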
However, understanding that ethical AI goes deeper than the quality of GenAI responses, Girl Effect commissioned its partner, The MERL Tech Initiative (MTI), to develop comprehensive and accessible guidelines to steer its work on AI-powered SRH chatbots. These were initially intended for internal staff, but we are happy to make them publicly available today.
The guidance, pitched at beginner to intermediate level (though feedback from staff and vendors suggests it also holds value for more technical readers), covers several key areas:
- How Girl Effect currently uses, and may in the future use, both traditional and Generative AI in its Social and Behaviour Change initiatives, with a focus on Sexual and Reproductive Health and Rights (SRHR) and mental health chatbots.
- Common definitions of key terms.
- A breakdown of key ethical risks — and opportunities — associated with AI-powered chatbots for sexual and mental health.
- High-level recommendations for advancing towards more ethical AI practices.
- More granular recommendations for staff members involved during programme inception, design, and evaluation, with a focus on accessible steps rather than hyper-technical solutions.
With LLM capabilities developing rapidly, and work to combat some of the most pernicious effects of unethical AI, such as bias and hallucinations, well underway, this guidance will no doubt need updating before long. However, because it also draws on long-standing best practices for responsible digital development, we expect many of the recommendations to remain relevant for the foreseeable future. Advancing ethical AI often means reinforcing established principles, such as participatory and human-centred design and Safety by Design, rather than taking radically new steps to mitigate risks.
Key considerations covered by the guidance naturally include issues unique to AI, such as the highly publicised problem of bias. But the guidance also aims to provide teams with achievable steps over which they have more control. These include:
- Providing AI literacy training for staff working on AI-powered products, even those working in adjacent roles.
- Working proactively with funders to ensure time and budget are allocated to address ethical considerations while also establishing a clear risk appetite and a shared understanding of ethics-related priorities.
- Giving users ways to report failures, and being transparent with them about failure rates.
The guidance itself was developed in a participatory fashion, as detailed in this write-up by our partners MTI. The risk with guidance such as this one, which forces readers to grapple with sometimes complex concepts and lofty ideals, is that it will gather virtual dust in some forgotten folder. By involving staff across the organisation in its development, we have attempted to ensure its relevance and accessibility — and therefore its likelihood of being used.
“At Girl Effect, we don’t see ethics and innovation as opposing forces — we see them as essential companions. As we integrate Generative AI into our work, especially in sensitive areas like sexual and reproductive health, our priority is to ensure that the technology respects, protects, and empowers the girls and young people we serve. These guidelines are a reflection of that commitment: grounded in our values, shaped by those most impacted, and designed to evolve alongside the technology itself.” Karina Rios Michel, Chief Creative & Technology Officer
We also engaged members of Girl Effect’s Youth Advisory Panel to explore how their understanding of AI — and their perception of its risks and priorities — aligned with ours. This valuable and insightful exercise helped us make targeted improvements to the guidance, such as including opportunities for AI literacy as part of GenAI-powered product roll out.
We are open to feedback and keen to engage in discussion with other organisations who are walking the tightrope of balancing programme demands, user needs, innovation, and ethical approaches to technology. We also have an operational version of the guidance available as a checklist, on request.
Read the guidance below or download it here.