No-Code AI Hackathon: Share Your Voice in Community Rating
The submission phase of the AEA and MERL Tech Initiative’s No-Code AI Hackathon has officially closed, and we’re excited to announce that the community rating phase is now open! We received an impressive array of innovative solutions from evaluation practitioners looking to leverage AI to address various challenges in evaluation practice.
Your Vote Matters
From November 5-15, we invite the evaluation community to participate in rating these solutions. Your votes, along with expert panel assessments, will help determine the winners based on innovation, usability, relevance to evaluation practice, ethical considerations, and scalability. Read on to see the hackathon submissions, then be sure to cast your votes.
Meet the Submissions
We’re thrilled to showcase the diverse range of solutions submitted to the hackathon. Here are the tools created by our participants:
- Carrie’s Inductive Code Creator (CICC): Streamlines qualitative analysis by generating inductive codes from interview or focus group transcripts, separating themes by interviewer and participant suggestions.
- M&E Boost Coach: Assists emerging evaluators with competency assessment and career development planning, connecting them with relevant M&E capacity-building resources.
- Participatory AI-powered Social Network Analysis (PASNA): Focuses on mitigating bias in family planning program evaluation by engaging urban male youth in Uganda.
- Internal Evaluators Rock: Provides guidelines for building inclusive and equitable internal evaluation plans that maximize the usefulness of results.
- Culturally Responsive Evaluation for Latines: Offers specialized guidance for Latine evaluators and emerging professionals working on Latine-focused program evaluations.
- RedReflexionar: A multilingual chatbot helping grantee organizations assess their organizational capacity and receive targeted improvement recommendations.
- STEM Education Program Logic Model Generator: Accelerates the development of logic models for STEM education programs, saving valuable time for evaluation staff.
- Compare Reports: Analyzes and illustrates changes over time in focus group data, specifically designed for grant project leadership teams.
- AVA Social Epi Assistant: Addresses the need for historical, social, and political context in program evaluation.
- Logic Model More Like Large Language Model: Assists both evaluators and non-evaluators in developing comprehensive logic models from existing program information.
- Developmental Evaluation Partner: Guides evaluators and practitioners in understanding and applying developmental evaluation across different contexts.
- EvalTalk: Transforms standard reporting into conversational updates, facilitating cross-project learning and comprehensive portfolio analysis.
- Plain Language and Inclusive Surveys: Helps evaluators improve survey questions to align with plain language standards and inclusive practices.
- Another Evaluation is Possible: Challenges traditional evaluation approaches by promoting transformational Monitoring, Evaluation, Accountability, and Learning (MEAL) practices.
What’s Next?
- Community Rating and Expert Panel Review Period: November 5-15. Take the survey to rate each solution.
- Hackathon Debrief, Show-and-Tell, and Winner Announcement: Join us on November 19th at 12pm ET during the monthly Sandbox Webinar for the announcement of winners. Register here!
This hackathon demonstrates the evaluation community’s creativity and commitment to leveraging AI tools to enhance evaluation practice. We’re excited to see which solutions resonate most with the community and our expert panel, and to debrief with you during our webinar on the 19th.
Stay tuned for more updates, and don’t forget to cast your votes!