MERL Tech Session Picks for AEA 2022


It’s American Evaluation Association (AEA) conference week again. This year’s conference is in New Orleans, and many people involved with the MERL Tech community will be attending. All three conference themes touch on topics and issues the MERL Tech community has been grappling with since its inception, and the third theme (digital data and technology) is especially close to MERL Tech’s focus.

The AEA Presidential Strand committee explains that “as evaluators, we must ensure we remain relevant in this new, data-rich and tech-oriented world…. New data sources, data types, and data tools have material influence on the landscape for evaluation. How do these trends affect our evaluation practice? What new issues emerge that must be considered for evaluation theory and practice? What becomes easier? What becomes harder?”

While the digital data and technology theme is closest to MERL Tech, it’s critical to think about how it interrelates with the other two conference themes (equity, social justice, and decolonization; and new actors and social finance), and how any use of technology can be done responsibly.

Veronica Olazabal, current AEA president, has outlined why these themes matter for 2022.

The conference program offers a huge number of sessions across these themes, and navigating the full list can be a little overwhelming. Paul Jasper, Principal Consultant and Data Innovation Lead at Oxford Policy Management, got in touch on Friday to share his list of MERL Tech-themed sessions, and I’ve added a few picks of my own. We thought it would be useful to share the combined list (see below).

What about you? Have you selected your sessions? Have we missed anything? Are there non-tech-related sessions that would be vital for MERL Tech enthusiasts to attend? We’d love to know what you’re planning to attend this year; please drop any sessions we should add to the list in the comments. Hope to see some of you this week at AEA. It’s thrilling to see such thoughtful engagement with technology and digital data in MERL, and we’re excited to learn from you all!

Wednesday, November 9

Wednesday: 4.15-5.15pm

  • Building Global Capacity for Monitoring & Evaluation on Multi-purpose Cash Assistance in Humanitarian Programs through Digitalization and Cross-organizational Collaboration. This session will bring together evaluators and technical advisors from Save the Children (SC), the International Rescue Committee (IRC), and Mercy Corps (MC) to discuss the reasons for, process behind, and results of developing a digital toolkit to monitor and evaluate multi-purpose cash assistance in humanitarian programs. The panel will describe how input was collected from humanitarian implementers around the world, how the tools were coordinated and aligned with the development of the Grand Bargain humanitarian indicators, and the results of piloting and using the tools in complex humanitarian crises. The session will also highlight a successful example of cross-organizational collaboration to create tools that can serve the wider humanitarian community in a modular, digital format that can be easily adapted to local contexts and requirements and used in low-resource settings. Results and data from initial uses of the toolkit will also be presented.
  • Speakers:  Qundeel Khattak, Nick Anderson, Alex Tran, Will Ratcliffe
  • Room: Celestin F
  • Creating a Big Data Analytics Platform Using National Datasets to Evaluate the Impact of Community Investments. Programmatic, organizational, and policy/advocacy investments directly or indirectly attempt to improve communities. Evaluating the impact of these investments has been challenging because the field lacked the technological capacity to gather and analyze the data required. The confluence of the following advances means this is no longer true: (1) open access to reliable, valid national community-level data (e.g., the US Census Bureau’s American Community Survey (ACS); local nonprofit and philanthropic financial revenue and program expenditure data from the IRS; mapping datasets like Google Maps); (2) powerful machine learning algorithms that can be trained to conduct complex geospatial, longitudinal, quasi-experimental observational studies of the impact of investments on community well-being; and (3) the ability to use community identity metrics to find and mitigate bias. This session shares how an automated big data analytics platform has been developed and used to evaluate the impact of community investments for achieving equitable well-being. (For a small illustration of pulling ACS data programmatically, see the sketch after this list.)
  • Speakers: Pete York and Miriam Sarwana
  • Room: Celestin A
  • Decolonising data collection: tips from practice. There is growing recognition that common research methodologies are rooted in colonial practices that are extractive and dehumanizing and that obscure the contributions of subjects, especially those based in the global South. Individuals and communities are repeatedly robbed of agency, their knowledge and experiences serving the agendas of others. Evaluation can contribute to this exploitative model, or it can be conducted in a way that challenges these systems and turns subjects into partners. Working with survivors of conflict in Uganda, we explored alternative methods of data collection that move beyond the tokenistic. This panel focuses on practical skills and shares insights for participatory data collection, including taking cognizance of cultural, gender, and social barriers to participation; building transferrable skills of participants as data collectors alongside financial remuneration; transparency; and effective feedback. Deliberate effort is required for real inclusion when people’s lived experiences are the subject of data collection.
  • Speakers: Marianne Akemu, Sarah Kasande
  • Room: Strand 11B
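
For readers curious about the kind of open data the big-data analytics session above draws on: the Census Bureau publishes the American Community Survey through a public JSON API. Below is a minimal Python sketch of pulling a single ACS table with the requests library; the table code, year, and geography are illustrative choices, not details from the session.

```python
import requests

# Hypothetical illustration: pull total population (ACS table B01003) for every
# Louisiana parish from the 2021 ACS 5-year estimates.
BASE_URL = "https://api.census.gov/data/2021/acs/acs5"
params = {
    "get": "NAME,B01003_001E",  # geography name + total population estimate
    "for": "county:*",          # all counties (parishes in Louisiana) ...
    "in": "state:22",           # ... within Louisiana (FIPS code 22)
    # "key": "YOUR_CENSUS_API_KEY",  # optional for small request volumes
}

resp = requests.get(BASE_URL, params=params, timeout=30)
resp.raise_for_status()
rows = resp.json()              # a list of lists; the first row is the header
header, data = rows[0], rows[1:]

for name, population, state_fips, county_fips in data[:5]:
    print(f"{name}: {population} residents")
```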

Wednesday: 5.30-6.30pm

  • Amplifying the role of community researchers in participatory evaluation from design to data collection and analysis. Funders and stakeholders have increasingly turned to participatory methods as a way to advance equity in evaluation practice. This panel draws on our experience (re)shaping participatory methods at the Institute for Community Health (ICH) by building the capacity of different communities to participate in every phase of a project: from design to data collection, analysis, and meaning-making. We share strategies, opportunities, and challenges involved in making the tools of evaluation more accessible to audiences outside of our industry, as well as lessons learned from engaging participants in hybrid, virtual, and in-person settings. Throughout our session, we also comment on how traditional administrative constructs on evaluation projects are often at odds with making these efforts truly participatory and community-driven. Together, our learnings point to ways forward for evaluators and stakeholders aiming to deepen community participation, leadership, and equity in their work.
  • Speakers: Julia Curbera, Jeffrey Desmarais, Danielle Chun, Sharon Touw, Laura McElhern, Sarah Jalbert
  • Room: Strand 11B
  • Low-Resource, High-Impact Digital Storytelling for Inclusive and Contextualized Evaluation of Community-Based Coalitions. Evaluations of community-based coalitions can be challenging, requiring the right resources and skills to produce reports that are relevant and responsive to all stakeholder needs and perspectives. This session will share a toolkit and lessons learned from a low-resource, high-impact digital storytelling pilot evaluating programs led by community-based coalitions. We will demonstrate a process to identify and include the multiple perspectives, unique needs, and changing contexts of coalition stakeholders and the potential uses for the evaluation findings. We will also share how participants can use readily accessible technology to produce storytelling videos and digital reports that can be used to: Communicate inclusive evaluation findings; Re-energize communities towards their shared purpose; Increase capacity of community members to articulate the impacts of their collective efforts; and Inspire further contributions to the coalition-led efforts.
  • Speakers: Courtney Barnard, Jammie Josephson, Lauren Purvis
  • Room: Celestin B
  • Participatory Evaluation using PhotoVoice. Are you interested in participatory evaluation and needs assessment? Are you interested in diversity, equity, and inclusion? Are you interested in evaluating youth programs? Are you interested in PhotoVoice and in treating images as data? If the answer is yes to one or more of these questions, then my workshop will help you build skills in participatory evaluation. You will learn how using images, developing a digital narrative of the project and its impact, and using SHOWeD for sense-making can help increase evaluation participation while increasing participant diversity, equity, and inclusion. You will learn how participatory photography can help you identify needs as well as assets and understand nuanced stories of programmatic impact.
  • Speaker: Madhawa Palihapitiya
  • Room: Celestin G
  • Using Photovoice to capture and reflect youth voice and experiences and influence strategic and programmatic directions. In 2021, Georgia Campaign for Adolescent Power and Potential (GCAPP), a statewide organization dedicated to empowering youth to make healthy choices and develop into productive citizens, leveraged the power of digital images and used Photovoice, an innovative tool that asks people to “represent their lives, point of view and experience using photographs and narratives” (Wang & Burris, 1996). GCAPP’s forty Youth Advisory Council (YAC) members identified mental health as the top issue facing youth. Using prompts and the Photovoice SHOWeD method, YAC members captured images that reflected barriers to and facilitators of positive mental health and developed narratives to describe the images. Through several group discussions, YAC members identified key themes, coded narratives, and produced an exhibition depicting the key issues and recommendations for strategic and programmatic direction. In this demonstration, you will learn how to use Photovoice to identify needs and strategies, evaluate programs, and empower your community.
  • Speakers: Jennifer Balentine, Kiara Shoulders
  • Room: Strand 12B
  • Using Vignettes in Experimental Evaluations. Vignettes are commonly used in experimental research to examine attitudes, perceptions, and decision-making in social contexts. However, they may also be used as part of program evaluations to assess the need for or use of a program. This demonstration will guide attendees through the method and development of experiments that use vignettes to test factors that may affect the use of social programs. It will specifically illustrate how evaluators selected the factors, created vignettes, and sampled and assigned participants to experimental conditions in a study about how problem-solving courts decide whether to allow someone to use medications for opioid use disorder (MOUD). Alternative methods, and the advantages and disadvantages of the specific methods in the example, will also be discussed. The demonstration will focus on methods for developing and using vignettes, rather than evaluation outcomes, so that it is useful to evaluators in a variety of fields. (A simple sketch of assigning participants to vignette conditions appears after this list.)
  • Speakers: M.H. Clark, Barbara Andraka-Christou, Danielle Atkins, Jill Viglione, Brandon del Pozo, Rachel Totaram, Fatem Ahmed
  • Room: Strand 1
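
As a small illustration of the vignette-experiment mechanics described above, here is a hedged Python sketch of crossing factor levels into a full-factorial set of vignette conditions and randomly assigning participants to them. The factors, levels, and sample size are invented for illustration and are not taken from the MOUD study.

```python
import itertools
import random

# Hypothetical factors and levels -- the study described in the session defined
# its own factors around MOUD decisions in problem-solving courts.
factors = {
    "medication": ["methadone", "buprenorphine"],
    "treatment_history": ["first episode", "prior treatment"],
}

# Cross the levels to enumerate every vignette condition (a full factorial design).
conditions = [dict(zip(factors, combo)) for combo in itertools.product(*factors.values())]

# Forty hypothetical participants, shuffled and dealt evenly across conditions.
participants = [f"P{i:03d}" for i in range(1, 41)]
random.seed(42)  # fixed seed so the assignment is reproducible
random.shuffle(participants)
assignment = {pid: conditions[i % len(conditions)] for i, pid in enumerate(participants)}

for pid in sorted(assignment)[:4]:
    print(pid, assignment[pid])
```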

Thursday, November 10

Thursday: 10.15-11.15am

  • Digital Storytelling for Social Change: Amplifying the Voices of Parents of Children with Autism through Action Research. This presentation will focus on community-based participatory research approaches examining how parents of children with disabilities—specifically with autism spectrum disorder—gain access to federal programs, social services, and support. Empowering these parents to attain access to services can dramatically improve the quality of life for a child. Utilizing action research can help parents share their experiences through digital storytelling, revealing a cycle of feeling heard, connected, and motivated to leverage their stories for advocacy initiatives. As people share and connect their stories, the process may ultimately lead to action in addressing important challenges and barriers expressed through their stories. Evaluating digital platforms is important to supporting new learning and exploring solutions. This research is supported by the University of the Incarnate Word’s Graduate Studies Action Research Working Group in the Dreeben School of Education and funded by the Social Security Administration’s ARDRAW Small Grant Program and Robert Wood Johnson Foundation.
  • Speaker: Michelle Vasquez
  • Room: Strand 13A
  • Technology Enabled Girl Ambassadors: Designing a Tech-Based, Girl-Powered Research Methodology. In 2015, Girl Effect launched their Technology Enabled Girl Ambassadors methodology. Co-created with girls and young women aged 18-24, TEGA (Technology Enabled Girl Ambassadors) is a mobile-based research methodology deployed by a network of adolescent girls to collect real-time insights into the lives of their peers and community stakeholders. This unique approach unlocks the open and honest conversations that may otherwise be lost or excluded from traditional research. TEGA networks were developed with a feminist framework to empower in- and out-of-school adolescent girls to gather contextually resonant research that brings to light the needs of young girls. Over 450 TEGAs have been trained, conducting over 25,000 interviews across seven countries (India, Rwanda, Tanzania, Bangladesh, Malawi, Nigeria, and the USA) on topics such as nutrition, economic empowerment, health and wellbeing, vaccines, mobile access, social media, and education.
  • Speaker: Ntasha Bhardwaj
  • Room: Strand 10A
  • The Data Revolution in Low- and Middle-Income Countries: Opportunities and Challenges for Evaluations. The World Development Report 2021 argues that ‘innovations resulting from the creative new uses of data could prove to be one of the most life-changing events of this era.’ The same holds for evaluations in Low- and Middle-Income Countries. In this session, we argue that evaluations are being affected across the data-value chain: data production, management, analysis, and dissemination are all changing. We exemplify this by presenting three case studies of evaluations implemented by Oxford Policy Management that made use of new types of data (social media data, satellite imagery), new analytical methods (natural language processing, machine learning), and interactive ways of disseminating results. These case studies show how the skillset needed to implement evaluations in this new data ecosystem is changing, and increasingly so. We end by discussing the challenges that this poses to the profession across the world, including the risk of increasing inequalities and north-south power imbalances.
  • Speakers: David Yamron and Paul Jasper
  • Room: Bolden 5
  • Transforming the Tedious into Timely Systems: Using Google Workspace to Automate Reporting. In this day and age, we have so much technology at our fingertips, but we may not know how to meaningfully integrate it into our evaluation processes. This demonstration session will tap into the free system of Google Workspace (formerly G Suite) and show you a way to automate your reports so that you can focus your mental energy on the more important aspects of synthesis and engaging in dialogue with invested groups around how to use the data. This process is especially useful when you are using the same survey multiple times (e.g., post-PD surveys, multi-site evaluations). Additionally, while evaluations often focus on end users like funders and CEOs, this process centers equity by getting actionable data back to on-the-ground practitioners (e.g., PD facilitators, site-level staff) so that they can use that information to inform and improve their work. (A rough Python analogue of this kind of spreadsheet automation appears after this list.)
  • Speaker: Brittany Hilte
  • Room: Empire C
  • When ML gets it right: The use of machine learning and administrative data to cost-effectively assess major initiatives targeting inequity. This panel highlights how valuable program administrative data, combined with external datasets, can be leveraged to conduct complex evaluations rigorously and cost-effectively, drawing on the case of a successful 10-year evaluation of a National Science Foundation initiative that promotes underrepresented 2-year college STEM students’ academic persistence. This evaluation consisted of connecting longitudinal program administrative data (LSAMP 2-year college activities data) with external student (National Student Clearinghouse) and community-level (American Community Survey) data, then applying quasi-experimental modeling techniques and machine learning algorithms to generate evidence of whether students are equitably getting what works. Panelists will discuss the utility and potential of the program administrative data system for real-time monitoring and evaluation, the funding agency community’s response to the findings, and the importance of context and equity in creating programmatic change, and will conclude with the implications of the approach as an automated, time- and cost-effective, and repeatable process for future evaluations.
  • Speakers: Francis Carter-Johnson, Miriam Sarwana, Peter York
  • Room: Strand 3
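
The Google Workspace session above doesn’t spell out its exact automation recipe (Apps Script is a likely candidate). Purely as a rough analogue, here is a minimal Python sketch using the third-party gspread library to pull survey responses from a Google Sheet and compute per-site summaries; the spreadsheet and column names are invented.

```python
from collections import defaultdict

import gspread

# Authenticate with a Google service-account key file (the path is a placeholder).
gc = gspread.service_account(filename="service_account.json")
sheet = gc.open("Post-PD Survey Responses").sheet1   # hypothetical spreadsheet name
records = sheet.get_all_records()                    # list of dicts keyed by the header row

# Aggregate one invented rating column by site -- the kind of quick per-site
# summary a facilitator might want back right after a PD session.
totals, counts = defaultdict(float), defaultdict(int)
for row in records:
    totals[row["Site"]] += float(row["Overall rating (1-5)"])
    counts[row["Site"]] += 1

for site in sorted(totals):
    print(f"{site}: mean rating {totals[site] / counts[site]:.2f} (n={counts[site]})")
```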

Thursday: 2.15-3.15pm

  • Can Artificial Intelligence Identify Evaluation Findings That Advance Equity: USAID’s Mixed Experience Applying Machine Learning to Ten Years of Evaluation Reports. In response to an Executive Order on Advancing Racial Equity and Support for Underserved Communities, USAID assessed its programs to identify barriers for underserved groups. This Roundtable Session aims to share USAID’s experience commissioning a study that used Natural Language Processing to identify barriers or outcomes related to racial and ethnic equity across ten years of more than 2,600 USAID program evaluation reports. The study is one of the first instances of USAID producing a large quantity of machine-readable evaluation text data, along with code from a natural language processing algorithm, that USAID aims to make available to others. Discussion questions will include: How can evaluators best use AI to identify new insights from existing information, especially to dismantle barriers to equity? What challenges have organizations or evaluators faced in using AI to gain insights to inform programs and actions aimed at advancing equity? What ethical considerations apply? (A toy illustration of screening text for equity-related language appears after this list.)
  • Speakers: Elizabeth Roen, Jerome Gallagher
  • Room: Bolden 3
  • Shadow Banning & Evaluation: How Evaluators Should Push Back on Social Media Expansion Initiatives to (Re)Shape the Field. Many evaluation projects require reporting of social media engagement metrics. However, because social media platforms are corporate entities that prioritize shareholder/partner profits, their algorithms remain elusive. In this Think Tank session, the presenters will discuss findings that, in 2021, 66% of all Facebook posts about trans issues came from sources such as Breitbart Media and related sites. These findings will be explored as an illustrative challenge for evaluators’ need to prioritize equity, social justice, and decolonization in their work, as such topics are evidently shadow banned on social media. Led by chairperson Imara Jones, an Emmy and Peabody award-winning Black Trans journalist, and co-facilitated by evaluators who specialize in leading community building among BIPOC LGBTQIA+ initiatives, this session will focus on (re)shaping evaluation by addressing shadow banning in social media reporting and advocating for transformative data metrics among funders and collaborators.
  • Speaker: Marcel Foster
  • Room: Bolden 2
  • Zooming in for context: Applications of geospatial mapping tools to visualize data, evaluate, and provide capacity building for two public health initiatives. A geographic information system (GIS) is a powerful system and tool, integrating maps and data so that patterns and relationships can be analyzed in geographical context, thereby improving public health planning, decision-making, communication, and program evaluation. This demonstration session will describe GIS concepts and how this interactive and engaging tool can add new dimensions to evaluating and understanding the health and needs of a community. We will describe our approach to visualizing data using two software tools, Tableau and ArcGIS. We will present two case studies (a CDC-funded research study investigating tobacco social norms in Jackson, MS, and a project with 21 community-based organizations working to improve maternal and infant health inequities in NYC) to show how we collaboratively identified and collated data measures and designed visually appealing maps to support capacity building and education efforts, engage stakeholders, and inform program improvements. Successes, challenges, and lessons learned will also be discussed.
  • Speakers: Hannah Yazdi, Emily Leung, Lindsay Senter, Michelle Gerka
  • Room: Celestin G
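
The USAID roundtable above doesn’t describe its NLP pipeline in detail. Purely to make the idea concrete, here is a deliberately simplified Python sketch that flags report sentences mentioning equity-related terms; a real system would rely on trained language models rather than a keyword list, and the terms below are illustrative, not USAID’s.

```python
import re

# A deliberately simplified stand-in for NLP screening: flag sentences that
# mention equity-related terms. The term list is illustrative only.
EQUITY_TERMS = re.compile(
    r"\b(equit\w*|marginaliz\w*|underserved|discriminat\w*|barrier\w*)\b",
    re.IGNORECASE,
)

def flag_sentences(report_text: str) -> list[str]:
    """Return the sentences in a report that mention an equity-related term."""
    sentences = re.split(r"(?<=[.!?])\s+", report_text)
    return [s for s in sentences if EQUITY_TERMS.search(s)]

sample = (
    "The activity trained 400 farmers. Women faced persistent barriers "
    "to accessing credit. Yields increased by 12 percent."
)
print(flag_sentences(sample))
# -> ['Women faced persistent barriers to accessing credit.']
```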

Thursday: 3.30-4.30pm

  • African perspectives on responsible data governance in M&E. The COVID-19 pandemic accelerated the adoption of digital platforms and tools. This has offered huge benefits for monitoring and evaluation, including the emergence of new and diverse data sources and the possibility of conducting remote M&E. At the same time, the abundance of data being collected (with and without the knowledge of individuals), the public’s increasing awareness of the dangers of expanding tracking and automated decision-making, and the trend towards adoption of national data privacy regulations have all put a spotlight on the importance of responsible data governance. What does all this mean for M&E practitioners? In 2021, a guide on responsible data governance in M&E was developed to help answer questions related to data governance in the African context and to provide guidance to M&E practitioners and stakeholders. In the session we will discuss its findings and the role of evaluators as responsible data stewards and users.
  • Speakers: John Folly Akwetey, Jerusha Govender, Linda Raftree, Dugan Fraser
  • Room: Empire C
  • Anything your dashboard software can do mine can do better: exploring the relative affordances and challenges of Tableau and R Shiny. What data visualization tools are best suited to creating interactive dashboards for engaging and informing stakeholders? What are the affordances and challenges of tools that are available to us?  During this session, staff at the University of Michigan’s Center for Education Design, Evaluation, and Research (CEDER) will chart our experience developing parallel pilot dashboards for the same brief in both Tableau and R Shiny. We will provide some context, including the scope of our dashboard design pilot, the specifics of the evaluation project that we designed the dashboards for, and some design features we hoped to operationalize.  We will focus on these specific design features, and demonstrate how we built these using Tableau and R Shiny. We will look across these tools to compare the ease of use, customizability, and required time investment. Attendees will leave equipped with a better sense of the feasibility and advantages of using each tool.
  • Speaker: Cathy Hearn
  • Room: Celestin E
  • Blending Orthodox Mixed Methods with Machine Learning to Evaluate Programs Combating Cross-Border Trafficking in Persons (TIP). Evaluation is crucial in identifying who needs protection from TIP, understanding the scale of TIP, and seeking accountability from anti-trafficking intervention stakeholders. A wide variety of methods to evaluate TIP have been implemented, but their success rests on a critical appraisal of their suitability to different trafficking contexts and of the assumptions they rest on. Each evaluation methodology is built on several key assumptions, ranging across several domains: from how data is collected, to how stigmatized TIP survivors are, to appropriateness to the local context. Selecting the appropriate evaluation methodological mixture, and decolonizing that mixture to best suit the contextual nature of interventions, together constitute the secret recipe for methodologically sound anti-trafficking program evaluations. This roundtable presents International Justice Mission’s latest innovations in evaluating programs combating cross-border TIP in Eastern Europe and Southeast Asia, poses methodological questions, and seeks to learn from participants’ best practices.
  • Speakers: Michael Joseph, Nana Dagadu, Peter Williams
  • Room: Strand 8
  • How can the transition to a digital platform reshape traditional program evaluation? In this think tank, we engage in rich discussion around the integration of digital platforms to reshape—and even transform—program evaluation. We offer our own experience as a launching point: From 2015-20, we assessed early care and education (ECE) programs’ wellness-related policies, systems, and environments using a hardcopy tool with limited scope. We saw modest uptake by ECEs and little improvement in scores. We then transitioned to a comprehensive online platform designed to support ECEs’ self-assessment, action planning, and implementation. Within a year, completed assessments nearly tripled, and scores increased significantly. Based on this and attendees’ experiences, we will explore: What types of digital platforms are relevant to program evaluation? How can digital platforms enhance evaluative capacity and use of findings? How can digital platforms reshape our roles, approaches, and methodological designs? What criteria can help evaluators decide whether and how to go digital?
  • Speakers: Theresa le Gros, Laurel Jacobs, Aviva Starr, Bonnie Williams
  • Room: Bolden 5
  • Use Cases of Data Science for Consultancy. Methodologies aided by data science approaches have gained momentum and attention in policymaking as promising new ways of capturing evidence of economic and societal phenomena. In this expert lecture, we present various data science techniques that have proven insightful and advantageous for sourcing, analysing, and presenting evidence for public policy in research and innovation. A series of projects commissioned by public European institutions – such as the European Commission, national ministries, and research funding agencies, among others – and implemented by Technopolis Group will be presented, showcasing different methodological approaches (analysis of unstructured information on websites, analysis of text answers in large consultations, data linking, network analyses, interactive reports and data visualisations), their strengths and limitations, and the resources made available for the implementation.
  • Speaker: Maria del Carmen Calatrava Moreno
  • Room: Celestin H

Thursday: 4.45-5.45pm

  • The Challenges of Applying Technological and Digital Data to Evaluation: Lessons from the BEWERS Project Evaluation Study in Southern Kaduna, Nigeria. The evaluation of Christian Aid’s Building Early Warning and Early Response Systems for conflict management (BEWERS) project faced a host of challenges in deploying digital solutions to evaluate the project’s impact in target communities. BEWERS was a peace initiative implemented in Southern Kaduna, in Kaduna State, Nigeria. It targeted poor and hard-to-reach communities in Kaura LGA, where there have been frequent violent clashes between nomadic herdsmen and host communities. The monitoring component of BEWERS was driven by community members who were trained and supported to monitor and collect early warning data using digital solutions and to engage stakeholders to keep situations from escalating into violent clashes. This presentation discusses the challenges encountered in implementing this approach from a digital perspective, hoping to share lessons and trigger discussion on how projects in poor and isolated communities can benefit from the value addition of engaging digital solutions in evaluations.
  • Speaker: Blessing Christopher
  • Room: Strand 4
  • Trends in African MERL Tech: Insights from a Landscape Scan. In 2022, we conducted a landscape study, funded by the Mastercard Foundation Impact Labs, on how existing and emerging technologies are used for Monitoring, Evaluation, Research and Learning (MERL) in African contexts. We identified partners, initiatives, and solutions that support MERL with a focus on equity and inclusion, youth and community empowerment, indigenous knowledge, real-time decisions, and future-focused scenarios. We will share our findings, including broad continental trends and country-specific case studies, and what these suggest for the wider field of evaluation. This will be a learning and sharing session exploring how technological innovations are influencing and enriching MERL practice and how these approaches, if designed and implemented responsibly, can support equity, inclusion, participation, and evaluative processes rooted in context. We’ll invite participants to share their own experiences and to consider ways that decolonization, new players, and emerging technologies are influencing evaluation on the African continent and globally.
  • Speakers: Dugan Fraser, John Folly Akwetey, Jerusha Govender, Linda Raftree
  • Room: Celestin C

Friday, November 11

Friday: 8.00-9.00am

  • Democratizing Evaluation using Participatory Video. Participatory video evaluation (PVE) promotes democratization of the evaluation process, putting decision-making power in the hands of project participants, promoting peer-to-peer exchange, producing meaningful learning products, and creating space for participants to use their own voices to share their experience. This approach is complementary to community accountability mechanisms, as both place the participant voice at the center of the process. War Child Canada has worked with participants in our education and livelihoods projects in the Democratic Republic of Congo, South Sudan, and Uganda to apply PVE to identify results and learning opportunities from our programming. These experiences, and findings from an external evaluation, will form the basis for discussion of the benefits and limitations of PVE, its applications in complex environments, and opportunities to further embed this methodology as a common evaluation practice globally.
  • Speakers: Morganne King-Wale, Dylan Diggs
  • Room: Bolden 5
  • The Violence of Data: The role of evaluators in reducing harm in our work. Many evaluators aspire to conduct evaluations that reduce inequities in a population, community, or system of interest. However, we too often overlook the potential violence that our evaluation processes and results may cause. How data is generated, when data is generated, where data is generated, who generates data, and for whom it is generated are all dictated by systemic power dynamics rooted in colonization, bias, ownership, and politics. If we do not address these inherent power dynamics at the onset of our evaluative work, we risk generating half-truths and may cause harm to the people and spaces we intend to benefit. This roundtable will share and generate new ideas for how we, as evaluators, can more thoroughly explore the power dynamics present throughout the evaluation process.
  • Speakers: Morganne King-Wale, Dylan Diggs
  • Room: Bolden 5

Friday: 9.15-10.15am

  • “Help! I’m lost in a virtual breakout room!”: navigating stakeholder engagement in virtual environments. In recent years, the role of some external evaluators for state agencies has expanded to include facilitating strategic planning. In this demonstration, we will walk through our process of facilitating stakeholder engagement for three state health plans in a virtual setting. Each process was tailored to the context of existing state infrastructure, stakeholder interest, and timeline constraints. Facilitation methods ranged from asynchronous guidebooks and feedback forms to synchronous virtual workshops utilizing an online, interactive whiteboard tool to capture stakeholder input. We’ll discuss what worked, what didn’t, and how we are working to address equity and inclusivity in stakeholder engagement for state planning. Participants will leave this session with a flow chart of considerations in selecting facilitation methods and a toolbox of virtual facilitation methods to draw from. While our examples are specific to state health planning, these tools can be used for a broader range of contexts.
  • Speakers: Melissa Haynes, Anne Schwalbe, Gabriel Anderson, Ellen Squires, Kate LaVelle
  • Room: Bolden 6
  • 25 Ways to Make More Inclusive Data Visualization. Let’s call it like it is: Data visualization is overwhelmingly white, cis-gendered, and able-bodied. This shows up in the constitution of data visualizers (especially leadership) and in the practices carried out every day by all evaluators who visualize. The chart types we choose, the way we think about color and text, the decision to create viz in a software program – these are choices that we’ll rethink in this session. You’ll leave with at least 25 immediately implementable strategies to be more inclusive and a list of the Black, Brown, queer, and disabled visualizers to follow as you work to become ever better.
  • Speaker: Stephanie Evergreen
  • Room: Celestin E
  • ITE2 – New technology tools for evaluation: new applications and new challenges. (4 papers) Integrating client based electronic systems to improve tracking and evaluation of targeted community led DREAMS initiatives: lessons from Eswatini; Data-Driven Engagement and Evaluation to Increase Diversity; Two Years with No Pants: What We Have Learned about Technology in Post-Pandemic Evaluation; Why you shouldn’t blindly trust AI.
  • Speakers: John Baek (Chair), Sandile Ginindza, Hacer Karamese, Brittney Thomas, Sharon Han, Stefanie Acost-Ramirez, Laura Bartlett, Ashi Asikin-Germager, Francesco Mazzeo Rinaldi
  • Room:  Celestin B

Friday: 10.30am-12.00pm

  • Enhancing Humanitarian Cash & Voucher Assistance Programming through interoperable technologies – An evaluation of integrating mobile data collection tools, biometrics, and electronic voucher payment platforms. Humanitarian cash and voucher assistance (CVA) programs must operate at unprecedented scale to meet the growing needs of displaced and conflict-affected persons. To keep up with growing humanitarian needs, humanitarian actors such as Mercy Corps have integrated various technologies into CVA programs to enhance their efficiency and effectiveness, as well as to provide greater insight into how CVA program activities can be adjusted to better meet the needs of displaced and conflict-affected persons. Join us for an interactive roundtable discussion where we will present and discuss the methods used to evaluate the implementation of an interoperable technology stack (mobile data collection, electronic voucher distribution systems, biometric fingerprinting) and the results of those technologies in enhancing the efficiency and effectiveness of key phases, and the overall reach, of Mercy Corps’ humanitarian CVA programs in northeast Nigeria. We hope to further discussion of methods for evaluating technologies in humanitarian programs.
  • Speaker: Alex Tran
  • Room: Strand 13A
  • Incorporating Geospatial data tools into evaluations. This cross-sector session will provide (1) a primer on how geospatial data can supplement evaluations to uncover new and interesting conclusions and (2) a toolkit of online geographic resources available for use by evaluators with varying levels of geographic analysis knowledge. Attendees will receive the toolkit, which includes descriptions of accessible tools and their uses, benefits and drawbacks, links to the tools, and recommendations for training and getting started with each tool. Examples include basic options accessible to those with minimal training, like Google Maps, where any user can create a map locating points of interest; moderate options like the Data Basin project, a free science-based mapping and analysis platform that can be used to compile ready-made map layers to tell a story; and advanced tools like QGIS and ArcGIS, which require training but offer powerful analytical capabilities for computing spatial statistics. (An open-source sketch of a simple point map appears after this list.)
  • Speakers: Katie Butler, Katherine Haverstock Tanouem Tracey Bain, Marcel Foster, Amanda Aragon
  • Room: Empire C
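
The “basic option” tier in the geospatial session above (a simple point map anyone can make) also has an open-source analogue in Python. Here is a minimal sketch using the folium library; the points of interest and coordinates are invented.

```python
import folium

# Invented points of interest; coordinates are in downtown New Orleans.
points = [
    {"name": "Community clinic", "lat": 29.9511, "lon": -90.0715},
    {"name": "Food pantry",      "lat": 29.9580, "lon": -90.0650},
]

m = folium.Map(location=[29.9511, -90.0715], zoom_start=13)
for p in points:
    folium.Marker([p["lat"], p["lon"]], popup=p["name"]).add_to(m)

m.save("points_of_interest.html")  # open in any browser; no GIS software required
```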

Friday: 3.15-4.15pm

  • Machine Learning Evidence Generation for Equity: How Program Experts, Administrative Data, and Machine Learning Algorithms Can Collaborate to Produce Timely, Rigorous Evaluations that Reduce Bias. Program administrative data are proliferating, allowing direct service organizations to use these data to evaluate their programs. Data science has advanced rapidly, including the use of machine learning (ML) algorithms for prediction, prescription, and evaluation. However, we all know that data capturing the actions, transactions, and decisions of humans are biased, resulting in algorithms that perpetuate and even amplify those biases. So how do we use the technological advances offered by machine learning algorithms to evaluate programs without perpetuating biases? This session will share how direct service organizations are conducting rigorous machine learning evaluations by intentionally partnering evaluators, practice/program experts, and data scientists, with program experts in the lead, to train algorithms to quasi-experimentally evaluate what works, for whom, and in what contexts, while minimizing all types of selection and identity biases inherent to the data. The panel will include two real-world case examples. (A rough sketch of the matching idea behind such quasi-experimental evaluations appears after this list.)
  • Speakers: Peter York, Stephen Shimshock, Kelly Fitzsimmons, Sarah Di Trola, Erika Van Buren, Kate Ryan
  • Room: Empire C
  • Designing Interactive Dashboards in Excel and Google Sheets using the PARCS Method. Learn how to create an interactive dashboard in Excel and Google Sheets using the PARCS Method! This skill-building workshop illuminates new ways to use two widely accessible programs to build professional, interactive dashboards that (re)shape the way clients and stakeholders engage with their data. It will display best practices in dashboard design and demonstrate a five-step process to build an interactive dashboard: Pivot, Analyze, Rename, Chart, and Slice. Using sample survey data, attendees will learn how to create a professional-looking dashboard that allows stakeholders to slice and dice their data in seconds. The workshop offers participants a cost-effective way to (re)shape a static table into a dynamic dashboard that allows for quick and easy filtering of the data. Attendees will walk away with the basic building blocks to design their own interactive dashboards using low-cost (or no-cost) tools. Free sample dashboards are provided. (See the pivot-table sketch after this list for a non-spreadsheet analogue of the Pivot and Slice steps.)
  • Speakers:  Shelly Engelman and Tom Withee
  • Room: Celestin G
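
The machine-learning evaluation session above centers on quasi-experimental designs built from administrative data. As a minimal, self-contained sketch of one such technique (propensity-score matching) on simulated data, assuming scikit-learn and NumPy are available, here is a rough illustration; it is not the panelists’ method or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Simulated "administrative data": 500 cases, 3 covariates, a treatment flag
# whose probability depends on the covariates, and an outcome.
X = rng.normal(size=(500, 3))
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
outcome = X[:, 0] + 0.5 * treated + rng.normal(size=500)

# Step 1: estimate propensity scores (probability of treatment given covariates).
propensity = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated case to the untreated case with the closest score.
t_idx, c_idx = np.flatnonzero(treated == 1), np.flatnonzero(treated == 0)
nn = NearestNeighbors(n_neighbors=1).fit(propensity[c_idx].reshape(-1, 1))
_, match = nn.kneighbors(propensity[t_idx].reshape(-1, 1))
matched_controls = c_idx[match.ravel()]

# Step 3: compare mean outcomes across the matched pairs (effect on the treated).
att = outcome[t_idx].mean() - outcome[matched_controls].mean()
print(f"Estimated effect on the treated: {att:.2f} (true simulated effect: 0.50)")
```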
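
For the PARCS workshop above, the pivot and slice steps have a direct analogue outside spreadsheets. Here is a short pandas sketch of the same aggregation on invented survey data; the column names and values are purely illustrative.

```python
import pandas as pd

# Invented survey export; column names are illustrative only.
df = pd.DataFrame({
    "site":         ["North", "North", "South", "South", "South"],
    "quarter":      ["Q1", "Q2", "Q1", "Q2", "Q2"],
    "satisfaction": [4, 5, 3, 4, 5],
})

# "Pivot" + "Analyze": mean satisfaction by site and quarter -- the same
# aggregation a spreadsheet pivot table produces.
pivot = pd.pivot_table(df, values="satisfaction", index="site",
                       columns="quarter", aggfunc="mean")

# "Rename": friendlier labels before charting.
pivot = pivot.rename_axis(index="Program site", columns="Quarter")
print(pivot)

# "Slice": filter to a single site, analogous to a dashboard slicer.
print(pivot.loc[["South"]])
```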

Saturday, November 12

Saturday: 8.00-9.00am

  • Beyond the Buzzword: Visualizing Collaboration Using Interactive Mapping Software. Collaboration has become a buzzword. While it is a goal of many projects and a common grant objective, it can be challenging to operationalize, measure, and evaluate the strength and patterns of collaborative efforts in an accessible and actionable way. In this roundtable discussion, we will explore the use of mapping software in two program evaluations that centered collaboration by empowering communities to integrate real-time data into their collaborative processes. We utilized the Levels of Collaboration Scale to map the strength of relationships in an anti-human trafficking project and used interactive software to map inter-agency referrals in a county-wide family strengthening initiative. We will infuse collaboration into this discussion by tackling important questions: How can collaboration maps increase engagement, mutual learning, and power-sharing? What are the potential risks of this type of visualization? Where are our gaps in understanding related to collaboration? (A small network-analysis sketch of referral mapping appears after this list.)
  • Speakers: Lauren Alessi, Marc Winokur, Sunil Butler
  • Room: Strand 13A
  • So many platforms, so little time: Making the most of your options for data collection, management, and reporting. Evaluators, especially those in university settings, often have a range of digital data collection platforms to choose from. This demonstration will compare and contrast features of two commonly available tools, Qualtrics and REDCap, and share how our university-based evaluation team decides when to use which platform. We’ll also highlight some of our favorite features in each of these platforms. Favorite time-saving tricks include: using your digital data collection tool in an area with no Wi-Fi; automating reminders to folks who haven’t completed your survey; linking multiple instruments on a longitudinal study; letting participants select which language they’d like to take the survey in from a single link; automating summaries of qualitative data from your open-ended questions; and building an instantaneously updating report when you need a rapid turnaround on data (without needing to know anything about APIs or other dashboard software!).
  • Speakers: Madeleine deBlois, Rachel Leih, Kara Tanoue, Rachel Gildersleeve
  • Room: Strand 3
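
The collaboration-mapping roundtable above treats inter-agency referrals as a network. As a small, hedged illustration of that idea, here is a Python sketch using networkx to compute a rough centrality score from invented referral counts; the agencies and numbers are not from the projects described.

```python
import networkx as nx

# Invented inter-agency referral counts; agencies and numbers are illustrative.
referrals = [
    ("Family services", "Housing agency", 14),
    ("Family services", "Health clinic",   9),
    ("Housing agency",  "Health clinic",   3),
    ("Health clinic",   "Legal aid",       6),
]

G = nx.Graph()
for a, b, count in referrals:
    G.add_edge(a, b, weight=count)

# Weighted degree as a rough proxy for how central each agency is in the network.
centrality = dict(G.degree(weight="weight"))
for agency, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{agency}: {score}")
```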

Saturday: 9.15-10.15am

  • Considerations and Implications for Modernizing Data Collection and Management Systems. Digitization of data and modern technologies can transform evaluation practice by making data more accessible and data management systems more efficient. As evaluators and data scientists in global public health move towards modernized platforms and processes for data collection, management, and visualization, we must consider the needs of our partners and anticipate the implications of having more modern systems for all partners at all levels. Ensuring data management systems increase efficiency and support our partners in reporting, accessing, and using their data is a critical consideration for ethical evaluation practice.
  • Speakers: Samantha Cruise, Anjum Mandani, Samantha Gross, Blessings Chisunkha, Manon Billaud, Meredith Pinto, Danique Gigger, Marian Creasy, Anja Minnick
  • Room: Empire C

Saturday: 10.30am-12.00pm

  • Big data and evaluation: addressing potential sources of bias affecting the understanding of equity and social justice. The data collection capacity and analytical power of big data make it inevitable that these tools and techniques will become a standard part of the evaluator’s toolbox. An issue of concern, however, is how the use of big data creates new sources of bias that evaluators must understand and address. Importantly, these biases can result in a lack of attention to, or a misunderstanding of, issues affecting equity and racial justice. Data scientists work with administrative and other secondary data that were originally collected for a different purpose, and the data may be used without fully understanding how and why they were collected and how appropriate they are for addressing a particular policy issue. The session will provide a framework for identifying sources of bias at each stage of the evaluation cycle, particularly as they affect issues of equity, and will propose guidelines on how to address these biases.
  • Speakers: Jerusha Govender, Peter York, Oscar Garcia, Miriam Sarwana
  • Room: Empire A
