Qualitative researchers are not OK: adapting to the use of AI by research respondents


How are qualitative researchers navigating the use of AI tools by research participants? This month, the Ethics and Governance Working Group (EGWG) at the Natural Language Processing Community of Practice (NLP-CoP) is hosting a meeting focused on the use of AI tools by research respondents. We’ll discuss how this impacts qualitative research and researchers themselves, while also acknowledging respondent agency and long-standing questions of ‘authenticity’ and power within research relationships.

On June 30, at 10 am ET/4 pm CET, we are hosting an informal, open discussion on these topics, and we would love for you to join us. Register here.

Image by DALL-E.

Back in 2024, Becky Zelikson wrote about how survey respondents seemed to be increasingly using generative AI to respond to remote research questions. In raising key questions about authenticity, representation, and the integrity of qualitative data, Zelikson’s article presents considerations for qualitative researchers who may want to identify AI-generated responses and handle them with care and ethical rigor. 

Over at the EGWG, we have talked about how AI can be used to analyse qualitative data for MERL, but we have also noted how AI seems to be increasingly used by respondents in remote research activities (a practice that has become more prevalent since Covid pushed these activities online). We have found that research respondents in the Global South and beyond are using AI to polish their answers, or even to generate answers entirely, citing convenience but also pressure to present a ‘correct’ answer. Other practitioners have noted that some research participants have learned to ‘game’ the system, actively seeking out remunerated research opportunities and using GenAI to provide answers.

As researchers, we are curious to explore together the complexities and dilemmas that this poses: How do we manage this new paradigm? How do we assess whether the data is still meaningful? Is authenticity a myth anyway, given the power dynamics inherent to research projects, especially those conducted in the Global South by researchers in the Global North? Is it as big a deal as it first appears? What can be learned from similar challenges in other fields, like education?

Join us for what promises to be a lively and important discussion.

The Ethics and Governance Working Group (EGWG) at the NLP-CoP focuses on emerging ethics and governance issues and themes, keeping tabs on AI ethics and governance policies and guidance, exploring participation and accountability to individuals and communities, and the application of feminist frameworks for AI Ethics and Governance. Learn more and join here.
