Meet our team at the AEA Evaluation Conference in November

The AEA Evaluation Conference brings together 2,000+ evaluation professionals from multiple disciplines and backgrounds to learn best practices, discover new techniques, and implement lessons learned from others in the field. Evaluation 2025 takes place in Kansas City from November 10 to 15. This year’s theme, “Engaging Communities, Sharing Leadership,” is all about fostering inclusive dialogue, co-creating knowledge, and driving meaningful transformation.
The MERL Tech Initiative will be there, and you are welcome to join us. Whether you are keen on having critical discussions about AI tools in evaluation or would like to come to our exhibition booth for a chat, we’d love to see you there.
Check out Evaluation 2025’s full schedule and register here to join us there.
Join us for critical conversations about AI and evaluation
What’s new about AI? Emerging frameworks for designing, assessing, and evaluating AI tools and vendors and AI-enabled programs and evaluations
Thursday, November 13, 10:15 AM – 11:15 AM CST
Despite its known challenges, including bias, lack of transparency, and issues with data quality, Artificial Intelligence (AI) is becoming part and parcel of both programs and evaluations. Frameworks that help identify and mitigate potential harms, and that set standards for responsible AI use, offer the sector a way to coalesce around shared expectations for grant applications and program designs involving AI components, AI vendors, AI tools, and AI for monitoring, evaluation, research and learning (MERL). In this panel, we’ll share a set of frameworks that can be adapted to different contexts, roles, and stages of the program and MERL life cycles. You’ll leave the session with tools you can adapt for your own work and a clearer understanding of how and when to use them.
Exploring emerging AI as subject and object in democracy-focused evaluation: Updating our shared perspective
Friday, November 14, 9:45 AM – 10:45 AM CST
As Generative AI (GenAI) tools have become increasingly accessible, their implications for democracy and democracy evaluation have multiplied. In these discussions, contradictory opinions exist side by side: on the one hand, AI has the capacity to destabilize the very foundations of democracy; on the other, AI tools may supercharge evaluation and increase access to democratic processes. Navigating these binaries is not easy, and the mainstreaming of GenAI raises fundamental questions for democracy practitioners and evaluators. This Think Tank session will take a closer look at original research conducted by the MERL Tech Initiative on AI in MERL for democracy programming. Together with Veronica Olazabal, we will revisit our initial 2024 research, led by Quito Tsui, in light of recent seismic shifts in both democracy and technology. Through a lively, hands-on debate about the use of AI in democracy-related MERL, we seek to establish a shared, updated understanding of how to approach this critical question. Learn more about this research here and here.
Visit our exhibition booth for a chat
Our team will be hosting a booth during Evaluation 2025! You’ll find us at booth 205, near the Network Lounge. Stop by for a chat, learn more about us, and tell us about your work. We look forward to catching up with many of you.
Stop by our happy hour with The Foundation Review and the Philanthropy Working Group of the NLP-CoP
The NLP-CoP’s Philanthropy Working Group will co-host a reception with The Foundation Review on Wednesday, November 12, from 6:00 to 7:30 PM at the conference venue. We will celebrate the launch of The Foundation Review’s upcoming special issue on Philanthropy and AI over drinks and light snacks. RSVP here to receive more information on the location. Space is limited.
You might also like
- Event: What are the resources we need to navigate AI, gender and MERL?
- Event recap: The Humanitarian AI Countdown and humanitarian knowledge production with Kristin Sandvik
- Research Digest 2: State of AI Adoption and Competencies for Evaluators for Made in Africa AI in MERL
- Bias in, bias out? How we’re understanding more about gender bias in LLMs
