No-Code AI Hackathon: Share Your Voice in Community Rating


The submission phase of the AEA and MERL Tech Initiative’s No-Code AI Hackathon has officially closed, and we’re excited to announce that the community rating phase is now open! We received an impressive array of innovative solutions from evaluation practitioners looking to leverage AI to address various challenges in evaluation practice.

Your Vote Matters

From November 5-15, we invite the evaluation community to participate in rating these solutions. Your votes, along with expert panel assessments, will help determine the winners based on innovation, usability, relevance to evaluation practice, ethical considerations, and scalability. Read on to see the hackathon submissions, then be sure to cast your votes. 

Meet the Submissions

We’re thrilled to showcase the diverse range of solutions submitted to the hackathon. Here are the innovative tools created by our participants:

  1. Carrie’s Inductive Code Creator (CICC): Streamlines qualitative analysis by generating inductive codes from interview or focus group transcripts, separating themes by interviewer and participant suggestions.
  2. M&E Boost Coach: Assists emerging evaluators with competency assessment and career development planning, connecting them with relevant M&E capacity-building resources.
  3. Participatory AI-powered Social Network Analysis (PASNA): Focuses on mitigating bias in family planning program evaluation by engaging urban male youth in Uganda.
  4. Internal Evaluators Rock: Provides guidelines for building inclusive and equitable internal evaluation plans that maximize the utility of results.
  5. Culturally Responsive Evaluation for Latines: Offers specialized guidance for Latine evaluators and emerging professionals working on Latine-focused program evaluations.
  6. RedReflexionar: A multilingual chatbot helping grantee organizations assess their organizational capacity and receive targeted improvement recommendations.
  7. STEM Education Program Logic Model Generator: Accelerates the development of logic models for STEM education programs, saving valuable time for evaluation staff.
  8. Compare Reports: Analyzes and illustrates changes over time in focus group data, specifically designed for grant project leadership teams.
  9. AVA Social Epi Assistant: Addresses the need for historical, social, and political context in program evaluation.
  10. Logic Model More Like Large Language Model: Assists both evaluators and non-evaluators in developing comprehensive logic models from existing program information.
  11. Developmental Evaluation Partner: Guides evaluators and practitioners in understanding and applying developmental evaluation across different contexts.
  12. EvalTalk: Transforms standard reporting into conversational updates, facilitating cross-project learning and comprehensive portfolio analysis.
  13. Plain Language and Inclusive Surveys: Helps evaluators improve survey questions to align with plain language standards and inclusive practices.
  14. Another Evaluation is Possible: Challenges traditional evaluation approaches by promoting transformational Monitoring, Evaluation, Accountability, and Learning (MEAL) practices.

What’s Next?

  • Community Rating and Expert Panel Review Period: November 5-15. Take the survey to rate each solution. 
  • Hackathon Debrief, Show-and-Tell, and Winner Announcement: Join us on November 19th at 12pm ET during the monthly Sandbox Webinar for the announcement of winners. Register here!

This hackathon demonstrates the evaluation community’s creativity and commitment to leveraging AI tools to enhance evaluation practice. We’re excited to see which solutions resonate most with the community and our expert panel, and to debrief with you during our webinar on the 19th.

Stay tuned for more updates, and don’t forget to cast your votes!
