Tag Archives: monitoring

Let’s discuss Responsible Data in M&E at the Africa Evaluation Indaba!

Indaba is an isiZulu and isiXhosa word for an important meeting held by leaders in South Africa to discuss critical matters. This past week, I’ve been listening in at the Africa Evaluation Indaba, organized by The University of the Witwatersrand and CLEAR Anglophone Africa. Critical matters have indeed been discussed!

We will delve into one such topic on Tuesday, 24 November, 12:00-13:30 Central Africa Time, during the launch of the Responsible Data in Monitoring and Evaluation (RDiME) Alliance.

The RDiME Alliance is a community of practice that will work on data governance in the African context, with a focus on Monitoring and Evaluation (M&E). It is part of a CLEAR-AA and MERL Tech initiative to convene a group of interested M&E professionals and data governance experts in order to dig deeper into this topic and to work together on guidance for evaluators related to responsible data governance and management. It builds on discussions that CLEAR-AA and MERL Tech hosted in June about responsible data, remote monitoring, and use of administrative data during COVID.

At the upcoming session, we will discuss ways that M&E practitioners can improve data management and how they can play a role in improving data governance practices at the institutional and national levels. We will open the floor for discussion and consultation on priority areas and gaps in the practical aspects of responsible data management as well as in data governance processes that improve accountability.

Following the Indaba, we will draft a plan that lays out how we can best offer training, guidance and support to the M&E community in relation to responsible data management and data governance. We also plan to develop a set of guidance documents on responsible data governance and M&E together with the RDiME Working Group, which is made up of experts with data governance, data protection, and evaluation-related expertise and experience. We hope the RDiME Alliance’s work will support government evaluation efforts as well as civil society organizations and evaluation firms.

Register here to attend the RDiME Launch and Discussion at the Indaba!

What makes the Africa Evaluation Indaba conversations so exciting (for me, at least!) is that they are framed within a lens of decolonization and transformation. This past week, topics included:

  • “Transforming Evaluation: The Race, Power, Gender and Class Struggle,” with speakers covering questions like: how do we locate evaluation within the historical context of asymmetrical global power relations and aid dependency? What needs to be done to dismantle systems and structures so that evaluation does not become complicit in entrenching existing inequalities? (Monday 16 November)
  • The Made in Africa Evaluation (MAE) approach, which arose out of the quest for contextually relevant methods that emphasize the centrality of context and place importance on indigenous knowledge systems. (Tuesday 17 November)
  • The launch of the Global Evaluation Initiative (GEI), which aims to offer better coordination of evaluation resources and to support local and international organizations working in the area of evaluation. (Wednesday 18 November)

(Recordings of these sessions will be available soon).

Join in this coming week (November 23-26, 2020) for more sessions!

  • Monday you can find out more about AfrED, an increasingly comprehensive database on evaluation projects, studies, agencies and actors in Africa.
  • Tuesday, as noted, is our RDiME Alliance Launch and discussion
  • Wednesday will cover Adaptive Management and Climate Change
  • Thursday will be a wrap-up session to discuss the long-term road ahead for evaluation in the African context.

Register for the RDiME Alliance discussion and launch on Tuesday, November 24! (This same link will allow you to join in any of the sessions)

See the full Africa Evaluation Indaba agenda for sessions, timings and speaker names.

Join the Responsible Data in MERL Initiative in Anglophone Africa

MERL Tech and CLEAR Anglophone Africa hosted three “gLocal evaluation” events in late June under the theme “How to conduct Digital MERL during COVID-19.” These covered the topics of responsible use of data, remote monitoring, and administrative data use.

Key needs emerging from the events were 1) guidance on data governance and 2) orientation on responsible data practices. Both policy and practice need to be contextualized for the African context and aimed at supporting African monitoring, evaluation, research and learning (MERL) practitioners in their work.

As a follow-on activity, CLEAR Anglophone Africa and MERL Tech are collaborating on a responsible data initiative for African MERL practitioners, and CLEAR Anglophone Africa is calling on M&E practitioners to join and be part of the project.

Click here to register your interest in the Responsible Data project!

For more information, watch the video from CLEAR Anglophone Africa, and sign up to participate!

 

Emerging Technologies: How Can We Use Them for MERL?

Guest post from Kerry Bruce, Clear Outcomes

A new wave of technologies and approaches has the potential to influence how monitoring, evaluation, research and learning (MERL) practitioners do their work. The growth in use of smartphones and the internet, digitization of existing data sets, and collection of digital data make data increasingly available for MERL activities. This changes how MERL is conducted and, in some cases, who conducts it.

We recently completed research on emerging technologies for use in MERL as part of a wider research project on The State of the Field of MERL Tech.

We hypothesized that emerging technology is revolutionizing the types of data that can be collected and accessed and the ways that data can be processed and used for better MERL. However, improved research on and documentation of how these technologies are being used are needed so the sector can better understand where, when, why, how, and for which populations and types of MERL these emerging technologies are appropriate.

The team reviewed the state of the field and found there were three key new areas of data that MERL practitioners should consider:

  • New kinds of data sources, such as application data, sensor data, data from drones and biometrics. These types of data are providing more access to information and larger volumes of data than ever before.
  • New types of systems for data storage. The most prominent of these were distributed ledger technologies (also known as blockchain), along with increasing use of cloud and edge computing. We discuss the implications of these technologies for MERL.
  • New ways of processing data, mainly from the field of machine learning, specifically supervised and unsupervised learning techniques that could help MERL practitioners manage large volumes of both quantitative and qualitative data.
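The supervised and unsupervised learning techniques mentioned above are easiest to grasp with a small example. The sketch below is a minimal, pure-Python k-means loop run on invented site-level indicator pairs (all data, names, and the choice of two clusters are assumptions for illustration; a real MERL analysis would more likely use an established library such as scikit-learn):

```python
import random

def kmeans(points, k, iterations=20, seed=42):
    """Minimal k-means on 2-D points: returns cluster centers and a label per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # pick k distinct points as starting centers
    labels = [0] * len(points)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest center.
        for i, (x, y) in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2,
            )
        # Update step: move each center to the mean of its assigned points.
        for c in range(k):
            members = [p for p, label in zip(points, labels) if label == c]
            if members:
                centers[c] = (
                    sum(x for x, _ in members) / len(members),
                    sum(y for _, y in members) / len(members),
                )
    return centers, labels

# Hypothetical indicator pairs (say, attendance rate vs. test score) for six sites.
data = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25), (0.9, 0.8), (0.85, 0.9), (0.95, 0.85)]
centers, labels = kmeans(data, k=2)
```

Because no outcome labels are provided, the grouping is unsupervised: the algorithm discovers that the first three sites and the last three sites behave differently without being told so.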

These new technologies hold great promise for making MERL practices more precise, automated and timely. However, some challenges include:

  • A need to clearly define problems so the choice of data, tool, or technique is appropriate
  • Selection bias arising from non-representative sampling
  • Reduced MERL practitioner or evaluator control
  • Change management needs as organizations adapt how they manage data
  • Rapid platform changes and difficulty with assessing the costs
  • A need for systems thinking which may involve stitching different technologies together

To address emerging challenges and make best use of the new data, tools, and approaches, we found a need for capacity strengthening for MERL practitioners, greater collaboration between social scientists and technologists, increased documentation, and more systems thinking among MERL practitioners.

Finally, there remains a need for greater attention to justice, ethics and privacy in emerging technology.

Download the paper here!

Read the other papers in the series here!

New Research! The State of the Field of MERL Tech, 2014-2019

The year 2020 is a compelling time to look back and pull together lessons from five years of convening hundreds of monitoring, evaluation, research, and learning (MERL) and technology practitioners who have joined us as part of the MERL Tech community. The world is in the midst of the global COVID-19 pandemic, and there is an urgent need to know what is happening, where, and to what extent. Data is a critical piece of the COVID-19 response — it can mean the difference between life and death. And technology use is growing due to stay-at-home orders and a push for “remote monitoring” and data collection from a distance.

At the same time, we’re witnessing (and I hope, also joining in with) a global call for justice — perhaps a tipping point — in the wake of decades of racist and colonialist systems that operate at the level of nations, institutions, organizations, the global aid and development systems, and the tech sector. There is no denying that these power dynamics and systems have shaped the MERL space as a whole, and the MERL Tech space as well.

Moments of crisis tend to test a field, and we live in extreme times. The coming decade will demand a nimble, adaptive, fair, and just use of data for managing complexity and for gaining longer-term understanding of change and impact. Perhaps most importantly, in 2020 and beyond, we need meaningful involvement of stakeholders at every level and openness to a re-shaping of our sector and its relationships and power dynamics.

It is in this time of upheaval and change that we are releasing a set of four papers that aim to take stock of the field from 2014-2019 as a launchpad for shaping the future of MERL Tech. In September 2018, the papers’ authors began reviewing the past five years of MERL Tech events to identify lessons, trends, and issues in this rapidly changing field. They also reviewed the literature base in an effort to determine what we know, what we still need to understand about technology in MERL, and where the gaps in the formal literature lie. This is no longer a nascent field, yet it is one that is hard to keep up with, given that it is fast paced and constantly shifting with the advent of new technologies. We have learned many lessons over the past five years, but complex political, technical, and ethical questions remain.

The State of the Field series includes four papers:

MERL Tech State of the Field: The Evolution of MERL Tech: Linda Raftree, independent consultant and MERL Tech Conference organizer.

 

What We Know About Traditional MERL Tech: Insights from a Scoping Review: Zach Tilton, Michael Harnar, and Michele Behr, Western Michigan University; Soham Banerji and Manon McGuigan, independent consultants; Paul Perrin, Gretchen Bruening, John Gordley and Hannah Foster, University of Notre Dame; and Linda Raftree, independent consultant and MERL Tech Conference organizer.

Big Data to Data Science: Moving from “What” to “How” in the MERL Tech Space: Kecia Bertermann, Luminate; Alexandra Robinson, Threshold.World; Michael Bamberger, independent consultant; Grace Lyn Higdon, Institute of Development Studies; and Linda Raftree, independent consultant and MERL Tech Conference organizer.

Emerging Technologies and Approaches in Monitoring, Evaluation, Research, and Learning for International Development Programs: Kerry Bruce and Joris Vandelanotte, Clear Outcomes; and Valentine Gandhi, The Development CAFE and Social Impact.

Through these papers, we aim to describe the State of the Field up to 2019 and to offer a baseline point in time from which the wider MERL Tech community can take action to make the next phase of MERL Tech development effective, responsible, ethical, just, and equitable. We share these papers as conversation pieces and hope they will generate more discussion in the MERL Tech space about where to go from here.

We’d like to start or collaborate on a second round of research to delve into areas that were under-researched or less developed. Your thoughts are most welcome on topics that need more research. And if you are conducting research about MERL Tech, please get in touch; we’re happy to share it here on MERL Tech News or to chat about how we could work together!

Big Data to Data Science: Moving from ‘What’ to ‘How’ in MERL

Guest post by Grace Higdon

Big data is a big topic in other sectors, but its application within monitoring and evaluation (M&E) is limited, with most reports focusing on its potential rather than its actual use. Our paper, “Big Data to Data Science: Moving from ‘What’ to ‘How’ in the MERL Tech Space,” probes trends in the use of big data between 2014 and 2019 by a community of early adopters working in monitoring, evaluation, research, and learning (MERL) in the development and humanitarian sectors. We focus on how MERL practitioners actually use big data and what encourages or deters adoption.

First, we collated administrative and publicly available MERL Tech conference data from the 281 sessions accepted for presentation between 2015 and 2019. Of these, we identified 54 sessions that mentioned big data and compared trends between sessions that did and did not mention this topic. In any given year from 2015 to 2019, 16 percent to 26 percent of sessions at MERL Tech conferences were related to the topic of big data. (Conferences were held in Washington DC, London, and Johannesburg).

Our quantitative analysis was complemented by 11 qualitative key informant interviews. We selected interviewees representing diverse viewpoints (implementers, donors, MERL specialists) and a range of subject matter expertise and backgrounds. During interviews, we explored why an interviewee chose to use big data, the benefits and challenges of using big data, reflections on the use of big data in the wider MERL tech community, and opportunities for the future.

Findings

Our findings indicate that MERL practitioners are in a fragmented, experimental phase, with use and application of big data varying widely, accompanied by shifting terminologies. One interviewee noted that “big data is sort of an outmoded buzzword” with practitioners now using terms such as ‘artificial intelligence’ and ‘machine learning.’ Our analysis attempted to expand the umbrella of terminologies under which big data and related technologies might fall. Key informant interviews and conference session analysis identified four main types of technologies used to collect big data: satellites, remote sensors, mobile technology, and M&E platforms, as well as a number of other tools and methods. Additionally, our analysis surfaced six main types of tools used to analyze big data: artificial intelligence and machine learning, geospatial analysis, data mining, data visualization, data analysis software packages, and social network analysis.

Barriers to adoption

We also took an in-depth look at barriers to and enablers of use of big data within MERL, as well as benefits and drawbacks. Our analysis found that perceived benefits of big data included enhanced analytical possibilities, increased efficiency, scale, data quality, accuracy, and cost-effectiveness. Big data is contributing to improved targeting and better value for money. It is also enabling remote monitoring in areas that are difficult to access for reasons such as distance, poor infrastructure, or conflict.

Concerns about bias, privacy, and the potential for big data to magnify existing inequalities arose frequently. MERL practitioners cited a number of drawbacks and limitations that make them cautious about using big data. These include lack of trust in the data (including mistrust from members of local communities); misalignment of objectives, capacity, and resources when partnering with big data firms and the corporate sector; and ethical concerns related to privacy, bias, and magnification of inequalities. Barriers to adoption include insufficient resources, absence of relevant use cases, lack of skills for big data, difficulty in determining return on investment, and challenges in pinpointing the tangible value of using big data in MERL.

Our paper includes a series of short case studies of big data applications in MERL. Our research surfaced a need for more systematic and broader sharing of big data use cases and case studies in the development sector.

The field of Big Data is rapidly evolving, so we expect that shifts have already happened in the field since our research began in 2018. We recommend several steps for advancing with Big Data / Data Science in the MERL space, including:

  1. Consider. MERL Tech practitioners should examine relevant learning questions before deciding whether big data is the best tool for the MERL job at hand or whether another source or method could answer them just as well.
  2. Pilot. Various big data approaches need pilot testing to assess their utility and the value they add. Pilot testing should be collaborative; for example, an organization with strong roots at the field level might work with an agency that has technical expertise in relevant areas.
  3. Documenting. The current body of documentation is insufficient to highlight relevant use cases and identify frameworks for determining return on investment in big data for MERL work. The community should do more to document efforts, experiences, successes, and failures in academic and gray literature.
  4. Sharing. There is a hum of activity around big data in the vibrant MERL Tech community. We encourage the MERL Tech community to engage in fora such as communities of practice, salons, events, and other convenings, and to seek less typical avenues for sharing information and learning and to avoid knowledge silos.
  5. Learning. The MERL Tech space is not static; indeed, the terminology and applications of big data have shifted rapidly in the past 5 years and will continue to change over time. The MERL Tech community should participate in new training related to big data, continuing to apply critical thinking to new applications.
  6. Guiding. Big data practitioners are crossing exciting frontiers as they apply new methods to research and learning questions. These new opportunities bring significant responsibility. MERL Tech programs serve people who are often vulnerable — but whose rights and dignity deserve respect. As we move forward with using big data, we must carefully consider, implement, and share guidance for responsible use of these new applications, always honoring the people at the heart of our interventions.

Download the full paper here.

Read the other papers in the State of the Field of MERL Tech series.

Use of Administrative Data for the COVID-19 Response

Administrative data is data collected as part of the regular activities of program implementation. It has not been tapped sufficiently for learning and research. As the COVID-19 pandemic advances, how might administrative data be used to help with the response to COVID-19 and to other national or global health crises?

At the final event in the MERL Tech and CLEAR-Anglophone Africa series for  gLOCAL Evaluation Week, we were joined by Kwabena Boakye, Ministry of Monitoring and Evaluation, Ghana; Bosco Okumu, National Treasury and Planning, Kenya; Stephen Taylor, Department of Basic Education, South Africa; and Andrea Fletcher, Cooper-Smith.

The four panelists described the kinds of administrative or “routine” data they are using in their work. For example, in Kenya, educational records, client information from financial institutions, hospital records of patients, and health outcomes are being used to plan and implement actions related to COVID-19 and to evaluate the impact of different COVID-related policies that governments have put in place or are considering. In Malawi, administrative data is combined with other sources such as Google mobility data to understand how migration might be affecting the virus’ spread. COVID-19 is putting a spotlight on weaknesses and gaps in existing administrative data systems.

Watch the video here:

Listen to just the audio from the event here:

Summary:

Benefits of administrative data include that:

  • Data is generated through normal operations and does not require an additional survey to create it
  • It can be more relevant than a survey because it covers a large swath of the population
  • It is an existing data source during COVID when it’s difficult to collect new data
  • It can be used to create dashboards for decision-makers at various levels

Challenges include:

  • Data sits in silos and the systems are not designed to be interoperable
  • Administrative data may leave out those who are not participating in a government program
  • Data sets are time-bound to the life of the program
  • Some administrative data systems are outdated and have poor quality data that is not useful for decision-making or analysis
  • There is demand for beautiful dashboards and maps, but insufficient attention to the underlying data processes needed to produce information that can actually be used
  • Real-time data is not possible when there is no Internet connectivity
  • There is insufficient attention to data privacy and protection, especially for sensitive data
  • Institutions may resist providing data if weaknesses are highlighted by the data or if they think it will make them look bad

Recommendations for better use of administrative data in the public sector:

  • Understand the data needs of decision-makers and build capacity to understand and use data systems
  • Map the data that exists, assess its quality, and identify gaps
  • Design and enact policies and institutional arrangements, tools, and processes to make sure that data is organized and interoperable.
  • Automate processes with digital tools to make them more seamless.
  • Focus on enhancing underlying data collection processes to improve the quality of administrative data; this includes making it useful for those who provide the data so that it is not yet another administrative burden with no local value.
  • Assign accountability for data quality across the entire system.
  • Learn from the private sector, but remember that the public sector has different incentives and goals.
  • Rather than fund more research on administrative data, donors should put funds into training on data quality, data visualization, and other skills related to data use and data literacy at different levels of government.
  • Determine how to improve data quality and use of existing administrative data systems rather than building new ones.
  • Make administrative data useful to those who are inputting it to improve data quality.

Download the event reports:

See other gLOCAL Evaluation 2020 events from CLEAR-AA and MERL Tech:

Remote Monitoring in the Time of Coronavirus

On June 3,  MERL Tech and CLEAR-Anglophone Africa hosted the second of three virtual events for gLOCAL Evaluation Week. At this event, we heard from Ignacio Del Busto, IDInsight, Janna Rous, Humanitarian Data, and Ayanda Mtanyana, New Leaders, on the topic of remote monitoring.

Data is not always available, and it can be costly to produce. One challenge is generating data cheaply and quickly to meet the needs of decision-makers within the operational constraints that enumerators face. Another is ensuring that the process is high quality and human-centered, so that we are not simply extracting data. This can be a challenge where there is low connectivity and reach, poor network capacity and access, and low smartphone access. Enumerator training is also difficult when it must be done remotely, especially if enumerators are new to technology and more accustomed to paper-based surveys.

Watch the video below.

Listen to just the audio from the session here.

Some recommendations arising from the session included:

  • Learn and experiment as you try new things. For example, track when and why people are dropping off a survey and find ways to improve the design and approach. Drop-off might be related to the time of the call or the length of the survey.
  • It’s not only about phone surveys. There are other tools. For example, WhatsApp has been used successfully during COVID-19 for collecting health data.
  • Don’t just put your paper processes onto a digital device. Instead, consider how to take greater advantage of digital devices and tools to find better ways of monitoring. For example, could we incorporate sensors into the monitoring from the start? At the same time, be careful not to introduce technologies that are overly complex.
  • Think about exclusion and access. Who are we excluding when we move to remote monitoring? Children? Women? Elderly people? We might be introducing bias if we are going remote. We also cannot observe if vulnerable people are in a safe place to talk if we are doing remote monitoring. So, we might be exposing people to harm or they could be slipping through the cracks. Also, people self-select for phone surveys. Who is not answering the phone and thus left out of the survey?
  • Consider providing airtime but make sure this doesn’t create perverse incentives.
  • Ethics and doing no harm are key principles. If we are forced to deliver programs remotely, this involves experimentation. And we are experimenting with people’s lives during a health crisis. Consider including a complaints channel where people can report any issues.
  • Ensure data is providing value at the local level, and help teams see what the whole data process is and how their data feeds into it. That will help improve data quality and reduce the tendency to ‘tick the box’ for data collection or find workarounds.
  • Design systems for interoperability so that data can be integrated with other data for better insights or can be automatically updated. Data standards need to be established so that different systems capture data in the same way or the same format.
  • Create a well-designed change management program to bring people on board and support them. Role modeling by leaders can help to promote new behaviors.
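The drop-off tracking suggested in the first bullet can start very simply: count, question by question, how many respondents were still answering when the call ended. A sketch with invented response data (question names and answers are assumptions for the example):

```python
def dropoff_by_question(responses, questions):
    """For each question, count respondents who answered it and how many
    were lost relative to the previous question."""
    counts = {q: sum(1 for r in responses if q in r) for q in questions}
    report = []
    previous = len(responses)
    for q in questions:
        answered = counts[q]
        report.append((q, answered, previous - answered))
        previous = answered
    return report

# Hypothetical phone-survey responses; a missing key means the call ended early.
questions = ["q1", "q2", "q3"]
responses = [
    {"q1": "yes", "q2": "no", "q3": "yes"},
    {"q1": "no", "q2": "yes"},
    {"q1": "yes"},
    {"q1": "no", "q2": "no", "q3": "no"},
]
report = dropoff_by_question(responses, questions)
```

Cross-tabulating the same counts by time of call or interview length would show whether those factors drive the drop-off.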

Further questions to explore:

  • How can we design monitoring to be remote from the very start? What new gaps could we fill and what kinds of mixed methods could we use?
  • What two-way platforms are most useful and how can they be used effectively and ethically?
  • Can we create a simple overview of opportunities and threats of remote monitoring?
  • How can we collect qualitative data, e.g., focus groups and in-depth interviews?
  • How can we keep respondents safe? What are the repercussions of asking sensitive questions?
  • How can we create data continuity plans during the pandemic?


Download the event reports:

See other gLOCAL Evaluation 2020 events from CLEAR-AA and MERL Tech:

Using Data Responsibly During the COVID-19 Crisis

Over the past decade, monitoring, evaluation, research and learning (MERL) practices have become increasingly digitalized. The COVID-19 pandemic has caused digitalization to happen with even greater speed and urgency, due to travel restrictions, quarantines, and social distancing orders from governments who are desperate to slow the spread of the virus and lessen its impact.

MERL Tech and CLEAR-Anglophone Africa are working together to develop a framework and guidance on responsible data management for MERL in the Anglophone African context. As part of this effort, we held three virtual events in early June during CLEAR’s gLOCAL Evaluation Week.

At our June 2 event, Korstiaan Wapenaar, Genesis Analytics, Jerusha Govender, Data Innovator, and Teki Akkueteh, Africa Digital Rights Hub, shared tips on how to be more responsible with data.

Data is a necessary and critical part of COVID-19 prevention and response efforts: to understand where the virus might appear next, who is most at risk, and where resources should be directed. However, we need to be sure that we are not putting people at risk of privacy violations or misuse of personal data, and we need to manage that data responsibly so that we don’t unnecessarily create fear or panic.

Watch the video below:

Listen to the audio from the session here:

Session summary:

  • MERL Practitioners have clear responsibilities when sharing, presenting, consuming and interpreting data. Individuals and institutions may use data to gain prestige or to justify government decisions, and this can allow bias to creep in. Data quality is critical for informing decisions, and information gaps create the risk of misinformation and flawed understanding. We need to embrace uncertainty and the limitations of the science, provide context and definitions so that our sources are clear, and ensure transparency around the numbers and the assumptions that underpin our work.
  • MERL Practitioners should provide contextual information and guidance on how to interpret the data so that people can make sense of it in the right way. We should avoid cherry picking data to prove a point, and we should be aware that data visualization carries power to sway opinions and decisions. It can also influence behavior change in individuals, so we need to take responsibility for that. We also need to find ways to visualize data for lay people and non-technical sectors.
  • Critical data is needed, yet it might be used in negative or harmful ways, for example, COVID-related stigmatization that can affect human dignity. We must not override ethical and legal principles in our rush to collect data. Transparency around data collection processes and use is also needed, as well as data minimization. Some might be taking advantage of the situation to amass large amounts of data for alternative purposes, which is unethical. Large amounts of data also bring increased risk of data breaches. When people are scared, as in COVID times, they will be willing to hand over data. We need to ensure that we are providing oversight and keeping watch over government entities, health facilities, and third-party data processors to ensure data is protected and not misused.
  • MERL Practitioners are seeking more guidance and support on: aspects of consent and confidentiality; bias and interference in data collection by governments and community leaders; overcollection of data leading to fatigue; misuse of sensitive data such as location data; potential for re-identification of individuals; data integrity issues; lack of encryption; and some capacity issues.
  • Good practices and recommendations include ethical clearance of data and data assurance structures; rigorous methods to reduce bias; third party audits of data and data protection processes; localization and contextualization of data processes and interpretation; and “do no harm” framing.

Download reports:

Read about the other gLOCAL Evaluation 2020 events from CLEAR-AA and MERL Tech:

Research Opportunity: Harm and the M&E Cycle

We are looking for a researcher to undertake desk-based research into how harm has been defined and integrated into monitoring and evaluation cycles. Please see the Terms of Reference and submit your short proposal by July 5, 2020, or read more about this initiative below.

Monitoring and evaluation practitioners are in a privileged position where they have the opportunity to listen and hear the voices and stories of the people that aid and development agencies work with. These professionals often determine what gets counted and what counts. Yet, practical guidance for commissioners, managers, and evaluators on managing harm is limited. The above graphic shows just some of the areas where the monitoring and evaluation process could contribute to harm.

Our privileged position as M&E practitioners brings with it the responsibility to do no harm. We need to be aware of how we might create or exacerbate harm, and also of how we might overlook harm due to our positions of power. Evaluators need to play a strong role in identifying areas where M&E can cause harm and in developing mitigation strategies to prevent or reduce that potential harm. There has been only patchy recognition of the variety of potential harms that can arise from both the action and inaction of an evaluator and others involved in monitoring and evaluation processes. There is also a wider discussion to be had around evaluation as a whole and its inherent power dynamics, which can lead to, enable, or obfuscate different types of harm and which play a role in determining what is considered harmful.

Over the past two years, a group of senior M&E practitioners* has been reflecting on harm in M&E. In the course of this work we’ve organized conversations and collective reflection workshops and produced think pieces, reports on priority areas, and presentations at M&E conferences. The group now aims to build on these activities with a practitioner-oriented publication. The research being commissioned aims to further map harms that arise within monitoring and evaluation practice.

As part of this publication, we are looking for a researcher to take a deeper look at how harm has been defined and if and how “do no harm” approaches have been integrated into M&E cycles.

Potential questions for this research include:

  1. What definitions, associations, or conceptions of harm emerge from M&E literature and practice?
  2. Who are the key social actors who interact in M&E cycles?
  3. What strategies for addressing, preventing or reducing these harms have emerged and how successful have these been?

Please see the full Terms of Reference and instructions for submitting your application if you are interested in conducting this research. The deadline for submissions is Sunday, July 5.

*The group of M&E practitioners who are working together on this topic includes: Stephen Porter, Evaluation Strategy Advisor – Independent Evaluation Group, World Bank; Veronica Olazabal, Senior Adviser and Director, Measurement, Evaluation and Organizational Performance – The Rockefeller Foundation; Prof. Rodney Hopson, Department of Educational Psychology – University of Illinois; Linda Raftree, Convener of MERL Tech; Adj. Prof Dugan Fraser, Director of the Centre for Learning on Evaluation and Results Anglophone Africa – University of the Witwatersrand.

8 Ways to Adapt Your M&E During the COVID-19 Pandemic

Guest post from Janna Rous. Originally published here.

So, all of a sudden you’re stuck at home because of the new coronavirus.  You’re looking at your M&E commitments and your program commitments.  Do you put them all on hold and postpone them until the coronavirus threat has passed and everything goes back to normal?  Or is there a way to still get things done?  This article reviews 8 ways you can adapt your M&E during the pandemic.

Here are a few ideas that you and your team might consider to make sure you stay on track (and maybe even IMPROVE your MEAL practices), even if you’re currently in the middle of a lockdown or think you might be going into one soon:

1. Phone Call Interviews instead of In-Person Interviews

Do you have any household assessments or baseline surveys or post-distribution monitoring that you had planned in the next 1 to 3 months? Is there a way that you can carry out these interviews by phone or WhatsApp calls?  This is the easiest and most direct way to carry on with your current M&E plan.  Instead of doing these interviews face-to-face, just get them on a call.  I’ve created a checklist to help you prepare for doing phone call interviews – click here to get the “Humanitarian’s Phone Call Interview Checklist”.  Here are a few things you need to think through to transition to a phone-call methodology:

  • You need phone numbers and names of people that need to be surveyed. Do you have these?  Or is there a community leader who might be able to help you get these?
  • You also need to expect that a LOT of people may not answer their phone. So instead of “sampling” people for a survey, you might want to just plan on calling almost everyone on that list.
  • Just like for a face-to-face interview, you need to know what you’re going to say. So you need to have a script ready for how you introduce yourself and ask for consent to do a phone questionnaire.  It’s best to have a structured interview questionnaire that you follow for every phone call, just like you would in a face-to-face assessment.
  • You also need to have a way to enter data as you ask the questions. This usually depends on what you’re most comfortable with – but I recommend preparing an ODK or KoboToolbox questionnaire, just like you would for an in-person survey, and filling it out as you do the interview over the phone.  I find it easiest to enter the data into KoboToolbox “Webform” instead of the mobile app, because I can type information faster into my laptop rather than thumb-type it into a mobile device.  But use what you have!
  • If you’re not comfortable in KoboToolbox, you could also prepare an Excel sheet for directly entering answers – but this will probably require a lot more data cleaning later on.
  • When you’re interviewing, it’s usually faster to type the answers in the language you’re interviewing in. If you need your final dataset to be in English, go back and do the translation after you’ve hung up the phone.
  • If you want a record of the interview, ask if you can record the phone call. If the person says yes, record it so you can go back and double-check an answer if you need to.
  • Very practically – if you’re doing lots of phone calls in a day, it is easier on your arm and your neck if you use a headset instead of holding your phone to your ear all day!
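If you do end up entering answers into a spreadsheet, a small script can handle the extra cleaning that phone-based surveys tend to need. The sketch below is a minimal, hypothetical Python example (the field names and numbers are invented, not a real KoboToolbox export format): it normalizes phone numbers so repeat calls to the same respondent are caught as duplicates, and excludes records where consent wasn’t recorded.

```python
import re

# Hypothetical phone-survey records, as they might look after export to
# CSV or a spreadsheet (field names are illustrative only).
records = [
    {"phone": "+254 700 111 222", "consented": "yes", "answer": "A"},
    {"phone": "0700111222",       "consented": "yes", "answer": "A"},  # same person, number formatted differently
    {"phone": "+254 700 333 444", "consented": "no",  "answer": "B"},
]

def normalize_phone(raw, country_code="254"):
    """Strip spacing/punctuation and apply one country code so duplicates match."""
    digits = re.sub(r"\D", "", raw)
    if digits.startswith("0"):
        digits = country_code + digits[1:]
    return "+" + digits

def clean(rows):
    """Keep only consenting respondents, one record per normalized phone number."""
    seen, out = set(), []
    for row in rows:
        if row["consented"].strip().lower() != "yes":
            continue  # no consent recorded -> exclude from the dataset
        key = normalize_phone(row["phone"])
        if key in seen:
            continue  # duplicate call to the same respondent
        seen.add(key)
        out.append({**row, "phone": key})
    return out

cleaned = clean(records)
```

The same checks (consent recorded, duplicates removed) apply whether you clean in a script, in KoboToolbox itself, or by hand in Excel; a script just makes them repeatable across survey rounds.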

2. Collect Videos & Photos Directly from Households and Communities

When you’re doing any in-person MEAL activities, you’re always able to observe evidence. You can look around and SEE impact; you don’t just hear about it through an interview or group discussion.  But when you’re doing M&E remotely, you can’t double-check to see what impact really looks like.  So I recommend:

  • Connect with as many beneficiaries and team members as possible through WhatsApp or another communication app and collect photos and videos of evidence directly from them.
  • Video – Maybe someone has a story of impact they can share with you through video. Or if you’re overseeing a Primary Health Care clinic, perhaps you can have a staff member walk you through the clinic with a video so you can do a remote assessment.
  • Pictures – Maybe you can ask everyone to send you a picture of (for example) their “hand washing station with soap and water” (if you’re monitoring a WASH program). Or perhaps you want evidence that the local water point is functioning.

3. Program Final Evaluation

It’s a good practice to do a final evaluation review when you reach the end of a program.  If you have a program finishing in the next 1-3 months, and you want to do a final review to assess lessons learned overall, then you can also do this remotely!

  • Make a list of all the stakeholders that would be great to talk to: staff members, a few beneficiaries, government authorities (local and/or national), other NGOs, coordination groups, partner organizations, local community leaders.
  • Then go in search of either their phone numbers, their email addresses, their Skype accounts, or their WhatsApp numbers and get in touch.
  • It’s best if you can get on a video chat with as many of them as possible – because it’s much more personal and easy to communicate if you can see one another’s faces! But if you can just talk with audio – that’s okay too.
  • Prepare a semi-structured interview: a list of questions you want to talk through about the impact, what went well, and what could have gone better. And if anything interesting comes up, feel free to ask new questions on the spot or skip questions that don’t make sense in the context.
  • You can also gather together any monitoring reports/analysis that was done on the project throughout its implementation period, plus pictures of the interventions.
  • Use all this information to create a final “lessons learned” evaluation document. This is a fantastic way to continually improve the way you do humanitarian programming.

4. Adapt Your Focus Group Discussion Plan

If everyone is at home because your country has imposed a lockdown, it will be very difficult to do a focus group discussion because… you can’t be in groups!  So, decide with your team whether it might be better to switch your monitoring activity from collecting qualitative data in group discussions to one-on-one phone interviews with several people to collect the same information.

  • There are some dynamics that you will miss in one-to-one interviews, information that may only come out during group discussions. (Especially where you’re collecting sensitive or “taboo” data.) Identify what that type of information might be – and either skip those types of questions for now, or brainstorm how else you could collect the information through phone-calls.

5. Adapt Your Key Informant Interviews

If you normally carry out Key Informant Interviews, it would be a great idea to think about what “extra” questions you need to ask this month in the midst of the coronavirus pandemic.

  • If you normally ask questions around your program sector areas, think about collecting a few extra data points about feelings, needs, fears, and challenges that are a reality in light of COVID-19. Are people facing any additional pressures due to the pandemic? Are there any new humanitarian needs right now? Are there any upcoming needs that people are anticipating?
  • It goes without saying that if your Key Informant Interviews are normally in person, you’ll want to carry these out by phone for the foreseeable future!

6. What To Do About Third Party Monitoring

Some programs and donors use Third Party Monitors to assess their program results independently.  If you normally hire third party monitors, and you’ve got some third party monitoring planned for the next 1-3 months, you need to get on the phone with this team and make a new plan. Here are a few things you might want to think through with your third party monitors:

  • Can the third party carry out their monitoring by phone, in the same ways I’ve outlined above?
  • But also think through whether it’s worth having a third party monitor assess results remotely. Is it better to postpone their monitoring, or to carry on regardless?
  • What are the budget implications? If vehicles won’t be used, are there any cost savings?  Is there any additional budget they’ll need for airtime costs for their phones?
  • Make sure there is a plan to gather as much photo and video evidence as possible (see point 2 above!)
  • If they’re carrying out phone call interviews it would also be a good recommendation to record phone calls if possible and with consent, so you have the records if needed.

7. Manage Expectations – The Coronavirus Pandemic May Impact Your Program Results.

You probably didn’t predict that a global pandemic would occur in the middle of your project cycle and throw your entire plan off.  Go easy on yourself and your team!  The results you’d planned for might not be achieved this year.  Your donors know this (because they’re probably also on lockdown).  You can’t control the pandemic, but you can control your response.  So proactively manage your own expectations, your manager’s expectations, and your donor’s expectations.

  • Get on a Skype or Zoom call with the project managers and review each indicator of your M&E plan. In light of the pandemic, what indicator targets will most likely change?
  • Look through the baseline numbers in your M&E plan – is it possible that the results at the END of your project might be worse than even your baseline numbers? For example, if you have a livelihoods project, it is possible that income and livelihoods will be drastically reduced by a country-wide lockdown.  Or are you running an education program?  If schools have been closed, then will a comparison to the baseline be possible?
  • Once you’ve done a review of your M&E plan, create a very simple revised plan that can be talked through with your program donor.
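The review above can be sketched in a few lines of code if your M&E plan lives in a spreadsheet. The example below is a hypothetical Python sketch (the indicator names and numbers are invented) that labels each indicator by comparing a pandemic-adjusted projection against its baseline and target, which can help structure the conversation with your program managers and donor.

```python
# Illustrative M&E plan rows: replace these made-up indicators and
# values with the ones from your own plan.
indicators = [
    {"name": "household income (USD/month)", "baseline": 120, "target": 180, "projected": 95},
    {"name": "children enrolled in school",  "baseline": 300, "target": 350, "projected": 310},
    {"name": "water points functioning",     "baseline": 40,  "target": 60,  "projected": 62},
]

def review(plan):
    """Label each indicator: below baseline, below target, or on track."""
    labelled = []
    for ind in plan:
        if ind["projected"] < ind["baseline"]:
            status = "below baseline"   # may end up worse than the starting point
        elif ind["projected"] < ind["target"]:
            status = "below target"     # target likely needs revising with the donor
        else:
            status = "on track"
        labelled.append((ind["name"], status))
    return labelled

summary = review(indicators)
```

Indicators flagged “below baseline” are the ones worth discussing first, since an end-line worse than the baseline changes how the final evaluation should be read.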

8. Talk To Your Donors About What You Can Do Remotely

When you’re on the phone with your donors, don’t only talk about revised program indicators.

  • Also talk about a revised timeframe – is there any flexibility on the program timeframe, or deadlines for interim reporting on indicators? What are their expectations?
  • Also talk about what you CAN do remotely. Discuss with them the plan you have for carrying on everything possible that can be done remotely.
  • And don’t forget to discuss the financial implications of changes to the timeframe.