Recap: Transformative Evaluation Practice Through the Lens of Artificial Intelligence
In an era of unprecedented challenges such as climate change, inequality, and security threats, the call for transformational change is louder than ever. In response, a recent two-hour webinar explored the pivotal role artificial intelligence (AI) can play in transforming evaluation practice.
This event brought together young and emerging evaluators alongside mid-career evaluators to discuss how they are using AI, and supporting its integration, to strengthen individual evaluation capacity and organizational evaluation functions.
Hosted on June 5, 2024 by the Sandbox group, part of the Natural Language Processing Community of Practice (NLP-CoP), the webinar was designed as a mix of discussions, demos, case studies, and participant interaction. It explored AI’s potential to help evaluation practitioners, participants, and consumers of evaluation, including policymakers, leverage this disruptive technology to contribute to transformational change. With a focus on the fundamentals of generative AI and frameworks for its responsible use, the event highlighted both the promises and perils of AI in evaluation.
Speakers and Presentations
The event was facilitated by Zach Tilton and Mutsa Chiyamakobvu and featured several speakers, each presenting on a different aspect of AI in evaluation:
- Loic Nsabimana discussed the challenges African evaluators face in leveraging AI for evaluation, highlighting issues such as skill acquisition, lack of structured data ecosystems, and inadequate government policies. He proposed solutions like improving training programs, investing in data management systems, and developing ethical guidelines to harness AI’s potential for transformative evaluation practices in Africa. (See Loic’s talk and slides)
- Lydiane Mbia discussed the potential of AI to enhance program planning in evaluations, emphasizing the importance of considering each evaluation stage, particularly planning. She highlighted how AI tools such as machine learning and large language models can automate tasks like document review, systematic synthesis, and stakeholder communication, ultimately strengthening the evaluation process. (See Lydiane’s talk)
- Theophilus Courage Gbayou discussed his organization’s transition from conventional monitoring and evaluation (M&E) to AI-driven digital M&E. He highlighted how AI has transformed their work by enhancing efficiency, accuracy, and effectiveness in tracking progress, measuring impact, and making data-driven decisions, ultimately improving program quality. (See Theophilus’s talk and slides)
- Chineme Anowai discussed using AI to evaluate the impact of climate change on various sectors, particularly education, by developing an algorithm that analyzes correlations between climate variables and educational outcomes (an illustrative sketch of this kind of analysis follows this list). He emphasized the importance of real-time and localized data for precise, climate-specific evaluations and highlighted the need for comprehensive data storage to improve the efficiency and effectiveness of future evaluations. (See Chineme’s talk and slides)
- Harsh Anuj provided an overview of how the Independent Evaluation Group at the World Bank is experimenting with data science techniques, including machine learning and AI, to enhance evaluation processes. He highlighted potential benefits such as increased efficiency, validity, and breadth of evaluative work, while also addressing the challenges and ethical concerns associated with using large language models for tasks like text classification and summarization. (See Harsh’s talk and slides)
- Zach Tilton discussed using custom chatbots for theory of change work in evaluations, highlighting their potential to enhance efficiency and participant engagement. He emphasized the benefits of integrating AI into evaluation processes, shared findings and recommendations based on field observations, and encouraged experimentation with AI tools to build critical AI literacy and improve evaluation practices. (See Zach’s talk and slides)
- Ngesa Maturu demonstrated how AI and technology can streamline end-to-end monitoring, evaluation, learning, and impact assessment processes. Using a custom-built platform that integrates tools for data collection, transcription, translation, and analysis, organizations can manage and analyze qualitative and quantitative data more efficiently, allowing them to focus on deriving insights and measuring impact. (See Ngesa’s talk)
- Mutsa Chiyamakobvu explained how individuals, regardless of their technical expertise, can contribute to AI model development by participating in various layers of the process, such as data collection and community engagement. Highlighting organizations like the Lacuna Fund and Masakhane, she emphasized the importance of inclusive community building and the creation of datasets, particularly for marginalized populations, to advance AI models that address critical global issues. (See Mutsa’s talk and slides)
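To make the correlation analysis Chineme described a little more concrete, here is a minimal, hedged sketch in Python. Everything in it is an assumption for illustration: the variable names (avg_temperature_c, rainfall_mm, flood_days, school_attendance_rate), the synthetic data, and the use of simple Pearson correlations stand in for his actual algorithm and datasets, which were not shared in this recap.

```python
# Illustrative sketch only: hypothetical variable names and synthetic data,
# standing in for the real algorithm and datasets described in the talk.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)
n = 200  # hypothetical number of district-year observations

# Assumed climate variables for demonstration.
df = pd.DataFrame({
    "avg_temperature_c": rng.normal(27, 2, n),
    "rainfall_mm": rng.normal(900, 150, n),
    "flood_days": rng.poisson(3, n),
})

# Synthetic educational outcome, loosely tied to the climate variables
# purely so the demo produces non-trivial correlations.
df["school_attendance_rate"] = (
    0.95
    - 0.01 * df["flood_days"]
    - 0.002 * (df["avg_temperature_c"] - 27)
    + rng.normal(0, 0.02, n)
).clip(0, 1)

# Pairwise Pearson correlations between each climate variable and the
# outcome, sorted by strength of association.
correlations = (
    df.corr(numeric_only=True)["school_attendance_rate"]
    .drop("school_attendance_rate")
    .sort_values(key=abs, ascending=False)
)
print(correlations)
```

As Chineme stressed in his talk, the value of an analysis like this hinges on real-time, localized data; the synthetic inputs above exist only so the sketch runs end to end.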
Next Steps
- Be sure to join the NLP-CoP if you’d like to stay connected and receive more information about events like this.