AI at the AEA: Meet your NLP-CoP peers at these sessions!


This year’s American Evaluation Association (AEA) Conference takes place in Portland, Oregon, from October 21-26, 2024, under the theme of Amplifying and Empowering Voices in Evaluation. The MERL Tech Initiative and NLP-CoP members are organizing multiple sessions at this year’s conference. We’ve pulled together a list of sessions with an AI focus below. See the full program for more details. And be sure to sign up for the Eval & Tech Mixer on Wednesday evening, right after the poster session!

AI- and Evaluation-focused sessions

Workshop: AI-Enabled Evaluation Methods
Date: Tuesday, October 22, 2024, Time: 9:00 AM-4:00 PM

Join Zach Tilton and Linda Raftree for a comprehensive workshop on integrating generative AI into evaluation practice. Artificial intelligence has augmented the landscape of knowledge work, including program evaluation, and will continue to do so; practitioners need to stay up to date on emerging practices for integrating AI into their work. This session includes mini-modules on GenAI basics, ethical principles, prompt engineering, chatbots, and AI-assisted multi-method analysis. Participants will gain hands-on skills and foundational knowledge to responsibly use AI in their evaluation work.

Session: How engaging an AI “stakeholder at the table” can help expand and amplify collective knowledge
Date: Wednesday, October 23, 2024, Time: 4:15-5:15 PM

Jewlya Lynn of Policy Solve is running a session on AI in evaluation practice and systems change. While many use AI as an efficiency tool, it can be so much more. This session will explore how to engage AI as a thought partner at the table with other voices when evaluations seek to understand how, why, and under what conditions change is happening in complex, dynamic systems. We will build on the participatory, inclusive approaches highlighted by the Causal Pathways Initiative and explore how AI can be a tool for collectively strengthening contextual and causal knowledge.

Session: Ghosts in the evaluation machine: ethics, data protection, meta-evaluation, and evaluation quality in the age of Artificial Intelligence
Date: Wednesday, October 23, 2024, Time: 4:15-5:15 PM

This session, led by Michael A. Harnar, Alex Robinson, Michael Osei, Zach Tilton, and Shadrock Roberts, delves into the juxtaposition of the AI safety and AI ethics research camps, emphasizing the latter’s focus on the societal and ecological risks posed by current AI technologies, including their potential to perpetuate bias and inequality. It probes how the data pools underpinning AI tools, which reflect a multitude of human voices and values, affect the integrity of evaluation processes and outcomes. By examining the influence of algorithmic bias and value representation on evaluative quality, the session aims to uncover the voices and values amplified or sidelined by AI in evaluation. You’ll hear insights from evaluators, data scientists, AI ethicists, and privacy professionals on the theoretical, empirical, and practical aspects of GenAI-enabled evaluation practice.

Evening Side Event: The MERL Tech Initiative and Mercy Corps are hosting an Eval & Tech Mixer!
Date: Wednesday, October 23, 2024, Time: 7:00-9:00 PM

The MERL Tech Initiative and Mercy Corps are excited to present the Eval & Tech Mixer, a lively conclusion to the annual American Evaluation Association poster session. Join us for an evening where evaluation expertise mingles with tech savvy in a relaxed, social atmosphere. Space is limited, so be sure to RSVP now.

Keynote Plenary: Generative AI: Navigating the Ethical Frontier in Evaluation
Date: Thursday, October 24, 2024, Time: 8:30-10:00 AM

Meredith Blair Pearlman will moderate a keynote session on the main stage with Aileen Reid, Olivia Deich, Zach Tilton, and Linda Raftree covering the transformative impact of Generative AI on evaluation practices from various angles, including participation, gender, climate, youth, and more. Panelists will examine opportunities and challenges along with the ethical implications of AI integration, sharing personal insights on the balance between its benefits and risks. Attendees will gain a deeper understanding of how AI can be both a powerful tool and a challenge in evaluation, and what it means to integrate this technology responsibly in their work.

Session: We Have Knowledge Gaps on GenAI for Social and Behavior Change (SBC) Programming: Let’s Develop a Research and Evaluation Agenda Together!
Date: Thursday, October 24, 2024, Time: 11:30 AM-12:30 PM

Join Linda Raftree, Nicola Harford, Anastasia Mirzoyants, and Stephanie Coker as they discuss the potential of AI tools such as GenAI, NLP, and LLMs in SBC programming. Since 2021, iMedia and the MERL Tech Initiative have been convening discussions and learning around evaluating digital SBC with Gates Foundation partners and the wider MERL Tech community. In this session, we will share what we have learned so far and involve participants in reviewing the emerging research agenda, identifying additional critical knowledge gaps, and discussing ways to collaboratively address them. Participants will leave with an updated understanding of the emerging AI landscape for SBC evaluation and the critical questions that remain to be answered.

Session: Amplifying New Perspectives: Bridging Innovation and Inclusivity In an AI-Enhanced Evaluation
Date: Friday, October 25, 2024, Time: 1:00-2:00 PM

Join Jennifer P. Villalobos, Zach Tilton, Tarek Azzam, Linda Raftree, and Hanna Camp as they lead an interactive session on leveraging AI to democratize evaluation. This timely discussion brings fresh perspectives and voices, particularly from emerging evaluators, into the conversation about AI-influenced evaluation practices, with a focus on inclusivity, equity, and the role of AI in amplifying underrepresented voices. Participants will explore innovative ideas for an inclusive future in AI-influenced evaluation and contribute to building a dynamic evaluation community.

Session: Theory-Driven Evaluation, the Future is Now! Leveraging Artificial & Augmented Intelligence to Better Understand and Articulate Program Theory
Date: Friday, October 25, 2024, Time: 2:30-3:30 PM

Join Stewart Donaldson and Charles Gasper, along with discussants David Fetterman and Jessica Renger, as they present the evolution of theory-driven evaluation and demonstrate innovative tools for integrating AI into program theory development. This session will explore cutting-edge methods for building consensus around causal linkages and visually articulating program theories, with a focus on how AI can empower diverse voices in the process. Participants will leave with fresh insights on applying AI to enhance program design and evaluation while considering key values and cautions.

Are you running sessions that cover MERL Tech and/or AI? Let us know in the comments!
