New guide: key questions to ask before using GenAI for research on violence against women


This blog was written by Linda Raftree, The MERL Tech Initiative (MTI) and Elizabeth Dartnall, Sexual Violence Research Initiative (SVRI).

Using AI Responsibly for Research on Violence Against Women, by MTI and SVRI, March 2025

As Generative AI (GenAI) becomes more widely used, researchers are increasingly exploring its potential to assist with various tasks. While GenAI can be a valuable tool in some cases, it also presents ethical and practical risks that must be carefully assessed and mitigated before it is applied in research. Concerns about GenAI's broader ethical implications, including issues of equity, ownership, access, profit, bias, and power imbalances, are also growing, particularly because of the way these systems are developed and trained. GenAI models rely on vast amounts of data sourced from publicly available content, often without clear consent from or fair compensation for the original creators, raising important questions about accountability and fairness.

Overall, the research community has mixed feelings about AI. ‘Traditional’ AI excels at sorting and automation and has long been used for data analysis in academic research. GenAI is a newer form of AI that became available to the general public in November 2022 with the release of ChatGPT, followed by several similar chatbots. While some researchers are embracing GenAI for different kinds of research and research-adjacent tasks, others are exercising a great deal of caution or not using it at all due to ethical, safety, and other concerns, some of which we outline below.

GenAI, Ethics, and Feminist Principles

For those applying a feminist lens to their work, and working on sensitive topics, it’s important to ask hard questions and develop critical GenAI literacy in order to make principled and pragmatic ethical and practical decisions. At the structural level, ethical questions abound about how GenAI has been developed, the data it uses, and its effects on democracy, inequity, and the planet.

In their publication Engendering AI, the Afrofeminist organization Pollicy highlights similar issues, including that “African women with intersecting identities stand to experience multiple levels of AI bias. Research has found that race, gender identity, sexual orientation, class, disability status and other identity markers directly influence one’s experiences with AI-driven technologies.” Pollicy flags that most AI tools are “designed by white men from the Global North; they are rarely assessed for their effects on women’s privacy and safety.”

Reece Rogers, a columnist who writes about AI for WIRED magazine, put it bluntly: ethical AI doesn’t exist. “The ethics of generative AI use can be broken down to issues with how the models are developed—specifically, how the data used to train them was accessed—as well as ongoing concerns about their environmental impact. To power a chatbot or image generator, an obscene amount of data is required, and the decisions developers have made in the past—and continue to make—to obtain this data are questionable and shrouded in secrecy.”

The MERL Tech Initiative’s Natural Language Processing Community of Practice’s Ethics and Governance Working Group came to a similar conclusion in July 2024, asking, “Is it even possible to procure an ethical AI tool?”

Given these fundamental concerns about the nature of GenAI, some argue that GenAI is fundamentally misaligned with feminist principles and should not be used at all.

Research applications of GenAI

On the other end of the spectrum are researchers who are already experimenting with GenAI. In these cases, the hope is that using GenAI will save time and money, enhance research quality and efficiency, and enable greater inclusion and participation.

Some examples of how GenAI and Natural Language Processing (NLP) are being used for research and analysis include the detection of family violence in medical records and risk assessment in domestic abuse cases. AI is also being used by child protection investigators to detect online child sexual abuse material; here, AI is trained to flag possible images of child sexual abuse so that investigators are spared from viewing material that can be traumatising. In addition, researchers are using GenAI to streamline data analysis, improve access to relevant research, and identify emerging trends.
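To make this concrete, below is a minimal, purely illustrative sketch of the kind of NLP text-classification pipeline that detection systems like these are typically built on, written in Python with scikit-learn. It does not reproduce the methods of the studies mentioned above; the clinical notes and labels are synthetic placeholders, and any real system would require large, ethically sourced training data, rigorous validation, and human review of every flagged case.

```python
# Illustrative sketch only: a generic text-classification pipeline of the kind
# used to flag possible family-violence indicators in clinical notes. This is
# NOT the method of any cited study; notes and labels below are synthetic.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic, de-identified example notes (placeholders, not real records).
notes = [
    "Patient reports recurring injuries at home and fear of partner.",
    "Routine check-up, no concerns raised.",
    "Patient disclosed controlling behaviour by a family member.",
    "Follow-up visit for seasonal allergies.",
]
labels = [1, 0, 1, 0]  # 1 = possible family-violence indicator, 0 = none

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)

# Any flagged note must be reviewed by a trained human, never acted on automatically.
new_note = ["Patient mentions feeling unsafe at home."]
print(model.predict_proba(new_note))  # probability of each class
```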

While the use of GenAI is on the rise, institutions are deep in discussions about whether GenAI can be used for writing, due to concerns about plagiarism, the loss of creativity and critical thinking skills among students and academics themselves, and the generally low quality of AI-generated content.

In addition, concerns about the replicability of LLM-based research are emerging: as new models are developed and old models retired, it can become impossible to reproduce research in order to verify outcomes. Journals have developed guidelines on where AI and GenAI may be used, and on the consequences of using them, for example in writing and peer reviewing articles. As Elsevier notes, these policies primarily apply to the use of AI during the writing process, and “not to the use of AI tools to analyze and draw insights from data as part of the research process.”

Researchers are also using GenAI tools and applications for research-adjacent tasks such as summarising reports, transcription, translation, brainstorming writing tasks, developing questions for key informant interviews, and helping design participatory workshop activities. Moreover, GenAI is making its way into online survey responses, indicating that research subjects are also using GenAI.
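As one example of a research-adjacent task, the sketch below shows how an interview recording might be transcribed locally with the open-source Whisper model, which keeps sensitive audio on the researcher’s own machine rather than uploading it to a third-party service. The file name is a placeholder, and this is one possible approach, not a recommendation of a specific tool.

```python
# Illustrative sketch: local transcription with the open-source Whisper model
# (pip install openai-whisper; requires ffmpeg). Running the model locally
# means the audio never leaves the researcher's machine.
import whisper

model = whisper.load_model("base")          # small model; larger ones are more accurate but slower
result = model.transcribe("interview.mp3")  # "interview.mp3" is a placeholder file name
print(result["text"])
```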

What are the rules for using GenAI?

There seems to be a range of perspectives on GenAI. Some researchers are strongly opposed to its use, while others are highly enthusiastic and concerned about falling behind if emerging technologies are ignored. Many fall somewhere in between—curious but cautious, exploring its potential while grappling with ethical and practical uncertainties. This mix of perspectives has led some researchers to avoid disclosing their use of GenAI. It also means that many who use GenAI do so without shared principles, guidelines, or safeguards, leaving researchers uncertain about when and how it can or should be used.

As GenAI use proliferates, openly or under the radar, there is increasing demand for practical guidance on whether GenAI can be used in ways that minimise risks and harms and maximise benefits. As with social media and the Internet, technologies built and controlled by many of the same Big Tech companies that expose us to similar harms and extractive data practices and that contribute to the degradation of the planet, it is becoming more and more difficult to avoid GenAI entirely: it is being embedded into most of the platforms that knowledge workers use.

Critical AI Literacy

Researchers should develop critical literacy around GenAI so that they can make informed choices about its use, including in research on violence against women. The EU AI Act explains the importance of AI literacy efforts for creators, users, and people on whom AI is used, mandating that this literacy should equip people to (see Article 4, Recital 20):

  • protect fundamental rights, health, and safety;
  • enable democratic control in the context of AI;
  • make informed decisions regarding AI systems;
  • understand the correct application of technical elements during an AI system’s development;
  • apply protective measures during AI systems’ use;
  • interpret an AI system’s output;
  • help affected persons understand how decisions taken with the assistance of AI will have an impact on them; and
  • learn about the benefits, risks, safeguards, rights, and obligations related to the use of AI systems.

To support AI literacy among researchers working on violence against women, the Sexual Violence Research Initiative (SVRI) and The MERL Tech Initiative have developed a guidance document. We hope the guide can help researchers build the critical literacy needed to make decisions about the use of GenAI tools. In the guide, we lay out the big-picture, structural questions that researchers should be asking before using GenAI for violence against women research (or other sensitive topics), as well as practical ways to mitigate risks when using GenAI in their work.

The rapid pace at which GenAI models, tools, and features advance makes guidance itself a moving target. The political landscape is also shifting, increasing mistrust of Big Tech and Big AI companies and of their role in creating polarization and enabling anti-democratic forces in society. Our recommendations will shift and change as GenAI becomes more sophisticated and as socio-political contexts change. For now, we see the guide as a first step towards setting some parameters for safe use of GenAI, especially for grassroots, participatory research and research-adjacent work.

Assessing GenAI Risks

A first step when considering using GenAI is ensuring the proper safeguards are in place. In our guide we suggest five key questions researchers should ask when thinking about using GenAI for research on violence against women. If it’s possible to answer ‘yes’ to all the questions below and the appropriate safeguards are in place, it’s likely that GenAI can be used responsibly (though big-picture, structural concerns remain). Please see the draft guide for more details on key risks to be aware of as you work through these questions.

1) Is there a clear benefit in using GenAI that traditional methods cannot provide?

  • Does GenAI provide unique benefits that traditional research methods cannot?
  • Would using AI significantly improve efficiency, accuracy, or accessibility?
  • Would using GenAI significantly improve the experience of research participants?
  • Can you justify the use of GenAI (versus a traditional method) for this process?

2) Does GenAI use comply with ethical guidelines, survivor consent, and data privacy laws?

  • Does the use of GenAI for your proposed purposes align with research ethics standards?
  • Can the data be fully anonymised before using GenAI? (See the redaction sketch after this list.)
  • Are there legal and ethical approvals in place?
  • Have your research participants explicitly consented to GenAI’s involvement?
  • Does your GenAI use comply with national and international privacy laws?
  • Are safeguards in place to protect research participant data?
  • Are you sufficiently fulfilling your duty of care?
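To make the anonymisation question above concrete, here is a minimal, purely illustrative redaction sketch in Python that strips a few obvious direct identifiers from text before it goes anywhere near a GenAI tool. It is a starting point under stated assumptions, not a complete anonymisation method: names, locations, and contextual details require specialised tools (such as named-entity recognition) and careful human review.

```python
# Illustrative sketch only: a minimal redaction pass for obvious direct
# identifiers. Real anonymisation of survivor data needs far more than this.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "DATE": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious direct identifiers with placeholder tags."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

example = "Contact her on 021 555 0182 or jane.d@example.org after 03/04/2024."
print(redact(example))
# -> "Contact her on [PHONE] or [EMAIL] after [DATE]."
```

Even with a redaction step like this, the safest default remains the one in our guide: do not upload survivor data or other sensitive information into GenAI tools at all.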

3) Are human researchers actively involved in reviewing, validating, and taking responsibility for GenAI-generated results?

  • Are humans actively reviewing GenAI-generated findings before publication?
  • Is there a process to correct errors, biases, or inaccuracies in GenAI outputs?
  • Is GenAI being used to assist researchers and not replace expert judgment?
  • Are researchers transparent about GenAI’s role in their work?

4) Can the risks of using GenAI be sufficiently mitigated? (see the full guide for a list of key risks)

  • Are there ways to reduce or eliminate the risks?
  • Can human oversight prevent harmful GenAI outputs?
  • Do you have a plan in place for checking and correcting GenAI’s biases?
  • Have you checked with research participants to learn what risks they are most concerned about?

5) Is there a process in place for making and documenting decisions about proceeding with GenAI?

Tips for getting started with responsible use of GenAI for research

As you begin your own exploration of GenAI for research, some of the following tips might be useful. This list is fleshed out more fully in the draft guide.

  1. Ensure your research is ethical and follows globally accepted ethical and safety guidelines.
  2. Start small, always triple-check the outputs, and make sure there is sufficient human oversight.
  3. Do not upload survivor data or other sensitive information into GenAI tools.
  4. Find ways to mitigate bias (note: because of the inherent biases in GenAI, it’s likely that not all biases can be addressed).
  5. Disclose the use of GenAI in your research.

Approaching GenAI with critical curiosity can help researchers engage with and learn more about it while avoiding harm. We offer this guidance as a starting point for those beginning their GenAI and research journey, and as a reflection opportunity for those who have already walked down the path. Ultimately, it is important to manage expectations about what GenAI will be able to do in terms of rigorous research methods and outcomes. GenAI may be very useful for some tasks and far less useful for others; the key is figuring out what those safe, beneficial applications may be while ensuring it does not jeopardize survivors’ safety and confidentiality.

We encourage all readers to share their feedback with the SVRI using this feedback form.

Illustration by Vectorjuice – www.freepix.com
