MERL Tech at the 2023 AEA Conference
The American Evaluation Association (AEA) annual conference is coming up in Indianapolis, Indiana, October 9-14, 2023. Some 3,000 evaluation practitioners from across the globe will gather to share experiences under the theme “The Power of Story.” There are hundreds of amazing sessions to choose from. Here are a few that MERL Tech and the NLP-CoP are involved in that we hope might spark your interest.
Thursday, October 12
10.15-11.15 – How can AI Language Models help evaluators tell better stories? (Room 104)
This think tank, organized by Stephanie Coker, Paul Jasper, Kerry Bruce, and Linda Raftree, will be an interactive session on AI language models. These new approaches bring incredible opportunities to scale qualitative analysis and tell powerful, representative stories of social change, but biases, opaque coding, and the rules that orient AI software and tools make them imperfect at best and dangerous at worst. Before moving into breakout groups, discussants will give a brief overview of how the newest AI language models work and offer advice and caution to evaluators on applying these models in their own practice. Discussants will orient session participants to popular Large Language Model (LLM) tools (e.g., ChatGPT, Bard, LLaMA) and to relevant applications of AI language models in monitoring, evaluation, and learning (e.g., sentiment analysis, theme identification and aggregation in qualitative data).
Breakout groups will cover key questions like ‘what are the best uses for LLMs in evaluation?’, ‘what are the key challenges with LLMs for evaluation?’, and ‘how can evaluators address the ethical issues LLMs raise?’. Groups will share key insights back in plenary before hearing final remarks from the discussants. Much of the session will draw on the early lessons and explorations of a cross-regional, multidisciplinary, 200+ member Community of Practice working to understand the needs, opportunities, and risks of emerging machine learning and AI language models in evaluation practice.
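For a taste of what LLM-assisted “sentiment analysis, theme identification and aggregation in qualitative data” can look like in practice, here is a minimal sketch in Python. It is an illustration, not the session’s method: it assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment variable, and the model name, prompt, and interview excerpts are placeholder choices you would adapt to your own codebook.

```python
# Minimal, illustrative sketch of LLM-assisted qualitative coding.
# Assumptions: openai Python package v1.x installed and the
# OPENAI_API_KEY environment variable set; the model, prompt, and
# excerpts below are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Toy interview excerpts standing in for real qualitative data.
excerpts = [
    "The training helped me find work, but travel costs ate my stipend.",
    "Sessions felt rushed; I wanted more hands-on practice time.",
]

prompt = (
    "You are assisting a qualitative evaluator. For each numbered "
    "interview excerpt, return a one-word sentiment "
    "(positive/negative/mixed) and up to three candidate themes.\n\n"
    + "\n".join(f"{i + 1}. {text}" for i, text in enumerate(excerpts))
)

completion = client.chat.completions.create(
    model="gpt-4",   # placeholder; any chat-capable model works
    temperature=0,   # low temperature for more repeatable coding
    messages=[{"role": "user", "content": prompt}],
)

print(completion.choices[0].message.content)
```

As the session description itself cautions, output like this still needs human review: the coding is opaque and can carry bias, which is exactly why the discussants pair their advice with caution.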
12.30-2.30 – Lunchtime Meet-up! (Punch Bowl Social)
The Integrating Tech into Evaluation TIG (ITE TIG) and MERL Tech are hosting a lunchtime meet-up at Punch Bowl Social in Circle Centre Mall. Come hang out with us! Click here or scan the QR code below to sign up.
2.30-3.30 – Introducing new information technologies to development partners (Room 201)
In this panel, presenters will examine challenges and good practices in getting donors, governments, frontline workers, and humanitarian organizations to use emerging technologies in their evaluation work.
- Kerry Bruce. Trying to get donors and implementing partners to use machine learning in evaluation: what works and what does not?
- Swapnil Shekhar. Enabling better usage of data and technology through ground-up engagement with frontline workers, state and large-scale public intent data.
- Michael Bamberger. Introducing big-data technologies to strengthen humanitarian programs: Stories from the field.
- Kecia Bertermann (pending)
5.00-6.00 – Presidential Strand: Sources and consequences of bias in big data-based evaluations (Grand Ballroom 7)
This panel will examine the causes and consequences of bias when evaluations are based on big data and data analytics. The session will begin with the presentation of a framework for identifying the main types of bias in big data. Three presenters will then tell stories about (a) the use of big data in different evaluation contexts; (b) using a data-driven Diversity, Equity and Inclusion (DEI) approach to navigating the bias terrain; and (c) MERL Tech’s Natural Language Processing Community of Practice (a group of academics, data scientists, and evaluators from NGOs, the UN, and bilateral organizations). Each presentation will describe the benefits of using big data, the kinds of bias that can be introduced, and the consequences of that bias, from the under-representation of vulnerable groups to operational and methodological effects. The focus on bias is important because some advocates claim that big data is more “objective” than conventional evaluation methods since sources of human bias are excluded. While big data and data analytics are powerful tools, the presentations will show that this claim of greater objectivity is overstated. Following the presentations, the panelists will discuss common themes and issues across sectors and ways to address each source of bias.
Panelists include:
- Michael Bamberger. Introducing the sources of bias framework.
- Pete York. Addressing hidden biases: Enhancing equity through causal modeling in big data evaluations.
- Linda Raftree. Stories of Natural Language Processing (NLP): exciting possibilities and concerning challenges.
- Dr. Randal Pinkett. Navigating the bias terrain: A data-driven DEI™ approach to evaluating social programs.
Friday, October 13
10.15-11.15 – Emerging AI in evaluation: Implications and opportunities for practice and business (Room 103)
This session will be chaired by Bianca Montrose-Moorhead and Sarah Mason. Authors of four articles from the fall edition of New Directions for Evaluation will present their papers on emerging AI approaches in evaluation. These include:
- Aileen M. Reid. Centering Equity: The Role of Evaluation and Evaluators in an AI World.
- Zach Tilton. Developing Critical AI Literacy in Evaluation Education and Capacity Building.
- Nina Sabarre, Kathleen Doll, and Sahiti Bhaskara. Using AI to Disrupt Business as Usual in Evaluation Firms.
- Linda Raftree. Watch out! Risks of Using Natural Language Processing AI for Evaluation.