June 1-5: Glocal Evaluation Week 2026
Glocal Evaluation Week is a unique knowledge-sharing event connecting a global community of people across sectors and regions. Over the course of a week, participants from all over the world join events – in their neighborhoods or across the ocean – to learn from one another on a wide range of topics and themes. By showing participants how their work fits into regional monitoring and evaluation (M&E) ecosystems and the larger international M&E community, Glocal helps inspire and energize a global movement of individuals and organizations who value the power of evidence to improve people’s lives.
The 2026 theme, “Evaluation, Evidence, and Trust in the Age of AI,” highlights the growing importance of ensuring that emerging technologies strengthen—rather than undermine—the credibility, rigor, and use of evidence in decision-making.
Our team at The MERL Tech Initiative is participating!
- What happens when you invite evaluators to experiment with AI in public? You get data. On June 3, Zach Tilton (MTI) joins Zachary Grays (The American Evaluation Association) for a reflection on AEA/MTI Virtual Hackathon 2.0. We’ll dig into what participants built, what they critiqued, and where they got stuck as clues to the real value, real gaps, and real concerns shaping AI-enabled evaluation practice. We’ll open with a participant show-and-tell, where builders, critics, and collaborators share what they made and what they learned. Then we’ll zoom out: What patterns emerged across submissions? What does this experiment tell us about where AI fits (and doesn’t) in evaluation work? And what should the field do with that?
- Our core collaborator Cathy Richards is participating in Glocal Evaluation Week as a guest on the technical panel “AI and Evaluation in Complex Contexts: Climate Resilience, Disaster Response and Humanitarian Action,” taking place on June 4. This session will explore how artificial intelligence tools can support the evaluation of complex systems, particularly in areas such as climate resilience, biodiversity, and disaster risk management, which are priority domains for Caribbean Small Island Developing States.
- Our core collaborator Vari Magodo-Matimba is leading the panel “Critical approaches to AI from African and Indigenous perspectives,” taking place on June 5. Drawing on recent work on Made in Africa AI for evaluation and the Wolastoq Indigenous Evaluation Principles, this conversation centers on how African and Indigenous evaluation practitioners are navigating choices about adopting, integrating, and resisting AI on their own terms. It is grounded in the shared recognition that practitioner agency, epistemic justice, and cultural sovereignty are not peripheral concerns but core principles of responsible evaluation and of the responsible use of AI for evaluation. During the panel, African and Indigenous evaluation practitioners will co-interrogate what critical and responsible AI means for a practice rooted in lived experience, community accountability, and localization. Other speakers include Dr. Nicole Bowman, President, Bowman Performance Consulting; Dr. Nicole Tujague, Founding Director, The Seedling Group Consultants; and Tracie Benally, Co-Founder, Emergence Circle, and Director of Community Insights & Special Projects, One Generation Fund.
