Tag Archives: learning

Use of Administrative Data for the COVID-19 Response

Administrative data is data collected as part of the regular activities of program implementation. It has not been tapped sufficiently for learning and research. As the COVID-19 pandemic advances, how might administrative data be used to help with the COVID-19 response and with other national or global health emergencies?

At the final event in the MERL Tech and CLEAR-Anglophone Africa series for gLOCAL Evaluation Week, we were joined by Kwabena Boakye, Ministry of Monitoring and Evaluation, Ghana; Bosco Okumu, National Treasury and Planning, Kenya; Stephen Taylor, Department of Basic Education, South Africa; and Andrea Fletcher, Cooper-Smith.

The four panelists described the kinds of administrative or “routine” data they are using in their work. For example, in Kenya, educational records, client information from financial institutions, hospital records of patients, and health outcomes are being used to plan and implement actions related to COVID-19 and to evaluate the impact of different COVID-related policies that governments have put in place or are considering. In Malawi, administrative data is combined with other sources such as Google mobility data to understand how migration might be affecting the virus’ spread. COVID-19 is putting a spotlight on weaknesses and gaps in existing administrative data systems.


Summary:

Benefits of administrative data include:

  • Data is generated through normal operations and does not require an additional survey to create it
  • It can be more representative than a sample survey because it covers a large swath of the population
  • It is an existing data source during COVID when it’s difficult to collect new data
  • It can be used to create dashboards for decision-makers at various levels

Challenges include:

  • Data sits in silos and the systems are not designed to be interoperable
  • Administrative data may leave out those who are not participating in a government program
  • Data sets are time-bound to the life of the program
  • Some administrative data systems are outdated and have poor quality data that is not useful for decision-making or analysis
  • There is demand for beautiful dashboards and maps, but insufficient attention to the underlying data processes needed to produce information that can actually be used
  • Real-time data is not possible when there is no Internet connectivity
  • There is insufficient attention to data privacy and protection, especially for sensitive data
  • Institutions may resist providing data if weaknesses are highlighted through the data or they think it will make them look bad

Recommendations for better use of administrative data in the public sector:

  • Understand the data needs of decision-makers and build capacity to understand and use data systems
  • Map the data that exists, assess its quality, and identify gaps
  • Design and enact policies, institutional arrangements, tools, and processes to make sure that data is organized and interoperable
  • Automate processes with digital tools to make them more seamless
  • Focus on enhancing underlying data collection processes to improve the quality of administrative data; this includes making it useful for those who provide the data so that it is not yet another administrative burden with no local value
  • Assign accountability for data quality across the entire system
  • Learn from the private sector, but remember that the public sector has different incentives and goals
  • Rather than fund more research on administrative data, donors should put funds into training on data quality, data visualization, and other skills related to data use and data literacy at different levels of government
  • Determine how to improve data quality and use of existing administrative data systems rather than building new ones
  • Make administrative data useful to those who are inputting it to improve data quality


Remote Monitoring in the Time of Coronavirus

On June 3,  MERL Tech and CLEAR-Anglophone Africa hosted the second of three virtual events for gLOCAL Evaluation Week. At this event, we heard from Ignacio Del Busto, IDInsight, Janna Rous, Humanitarian Data, and Ayanda Mtanyana, New Leaders, on the topic of remote monitoring.

Data is not always available, and it can be costly to produce. One challenge is generating data cheaply and quickly to meet the needs of decision-makers within the operational constraints that enumerators face. Another is ensuring that the process is high quality and also human-centered, so that we are not simply extracting data. This can be a challenge when there is low connectivity and reach, poor network capacity and access, and low smartphone access. Enumerator training is also difficult when it must be done remotely, especially if enumerators are new to technology and more accustomed to doing paper-based surveys.


Some recommendations arising from the session included:

  • Learn and experiment as you try new things. For example, track when and why people drop off a survey and find ways to improve the design and approach. Drop-off might be related to the time of the call or the length of the survey.
  • It’s not only about phone surveys. There are other tools. For example, WhatsApp has been used successfully during COVID-19 for collecting health data.
  • Don’t just put your paper processes onto a digital device. Instead, consider how to take greater advantage of digital devices and tools to find better ways of monitoring. For example, could we incorporate sensors into the monitoring from the start? At the same time, be careful not to introduce technologies that are overly complex.
  • Think about exclusion and access. Who are we excluding when we move to remote monitoring? Children? Women? Elderly people? We might be introducing bias if we are going remote. We also cannot observe if vulnerable people are in a safe place to talk if we are doing remote monitoring. So, we might be exposing people to harm or they could be slipping through the cracks. Also, people self-select for phone surveys. Who is not answering the phone and thus left out of the survey?
  • Consider providing airtime but make sure this doesn’t create perverse incentives.
  • Ethics and doing no harm are key principles. If we are forced to deliver programs remotely, this involves experimentation. And we are experimenting with people’s lives during a health crisis. Consider including a complaints channel where people can report any issues.
  • Ensure data is providing value at the local level, and help teams see what the whole data process is and how their data feeds into it. That will help improve data quality and reduce the tendency to ‘tick the box’ for data collection or find workarounds.
  • Design systems for interoperability so that data can be integrated with other data for better insights or updated automatically. Data standards need to be established so that different systems capture data in the same way or the same format.
  • Create a well-designed change management program to bring people on board and support them. Role modeling by leaders can help to promote new behaviors.
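One of the recommendations above is to track when and why people drop off a phone survey. As a minimal sketch of what that analysis could look like, the snippet below counts abandonment by question from a hypothetical set of call records (the field names and data are illustrative, not from any real survey tool):

```python
from collections import Counter

# Hypothetical call records: each entry notes the last question a
# respondent answered before the call ended (None = completed survey).
call_records = [
    {"respondent": "r1", "dropped_at": None},
    {"respondent": "r2", "dropped_at": "q7_income"},
    {"respondent": "r3", "dropped_at": "q7_income"},
    {"respondent": "r4", "dropped_at": "q12_health"},
    {"respondent": "r5", "dropped_at": None},
]

def drop_off_summary(records):
    """Count abandonment per question and the overall drop-off rate."""
    drops = Counter(r["dropped_at"] for r in records if r["dropped_at"])
    completed = sum(1 for r in records if r["dropped_at"] is None)
    rate = 1 - completed / len(records)
    return drops, rate

drops, rate = drop_off_summary(call_records)
# Questions with the highest counts are candidates for rewording,
# shortening, or moving earlier in the questionnaire.
```

If a pattern emerges (for example, many people hanging up at a long or sensitive question), that is a signal to redesign the instrument or shift the time of the call.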

Further questions to explore:

  • How can we design monitoring to be remote from the very start? What new gaps could we fill and what kinds of mixed methods could we use?
  • What two-way platforms are most useful and how can they be used effectively and ethically?
  • Can we create a simple overview of opportunities and threats of remote monitoring?
  • How can we collect qualitative data, e.g., focus groups and in-depth interviews?
  • How can we keep respondents safe? What are the repercussions of asking sensitive questions?
  • How can we create data continuity plans during the pandemic?



Using Data Responsibly During the COVID-19 Crisis

Over the past decade, monitoring, evaluation, research and learning (MERL) practices have become increasingly digitalized. The COVID-19 pandemic has caused the process of digitalization to happen with even greater speed and urgency, due to travel restrictions, quarantines, and social distancing orders from governments desperate to slow the spread of the virus and lessen its impact.

MERL Tech and CLEAR-Anglophone Africa are working together to develop a framework and guidance on responsible data management for MERL in the Anglophone African context. As part of this effort, we held three virtual events in early June during CLEAR’s gLOCAL Evaluation Week.

At our June 2 event, Korstiaan Wapenaar, Genesis Analytics, Jerusha Govender, Data Innovator, and Teki Akkueteh, Africa Digital Rights Hub, shared tips on how to be more responsible with data.

Data is a necessary and critical part of COVID-19 prevention and response efforts to understand where the virus might appear next, who is most at risk, and where resources should be directed for prevention and response. However, we need to be sure that we are not putting people at risk of privacy violations or misuse of personal data, and that we are managing data responsibly so that we don’t unnecessarily create fear or panic.


Session summary:

  • MERL Practitioners have clear responsibilities when sharing, presenting, consuming and interpreting data. Individuals and institutions may use data to gain prestige, and this can allow bias to creep in or data to be used to justify government decisions. Data quality is critical for informing decisions, and information gaps create the risk of misinformation and flawed understanding. We need to embrace uncertainty and the limitations of the science, provide context and definitions so that our sources are clear, and ensure transparency around the numbers and the assumptions that underpin our work.
  • MERL Practitioners should provide contextual information and guidance on how to interpret the data so that people can make sense of it in the right way. We should avoid cherry picking data to prove a point, and we should be aware that data visualization carries power to sway opinions and decisions. It can also influence behavior change in individuals, so we need to take responsibility for that. We also need to find ways to visualize data for lay people and non-technical sectors.
  • Critical data is needed, yet it might be used in negative or harmful ways, for example, COVID-related stigmatization that can affect human dignity. We must not override ethical and legal principles in our rush to collect data. Transparency around data collection processes and use are also needed, as well as data minimization. Some might be taking advantage of the situation to amass large amounts of data for alternative purposes, which is unethical. Large amounts of data also bring increased risk of data breaches. When people are scared, such as in COVID times, they will be willing to hand over data. We need to ensure that we are providing oversight and keeping watch over government entities, health facilities, and third-party data processors to ensure data is protected and not misused.
  • MERL Practitioners are seeking more guidance and support on: aspects of consent and confidentiality; bias and interference in data collection by governments and community leaders; overcollection of data leading to fatigue; misuse of sensitive data such as location data; potential for re-identification of individuals; data integrity issues; lack of encryption; and some capacity issues.
  • Good practices and recommendations include ethical clearance of data and data assurance structures; rigorous methods to reduce bias; third party audits of data and data protection processes; localization and contextualization of data processes and interpretation; and “do no harm” framing.


8 Ways to Adapt Your M&E During the COVID-19 Pandemic

Guest post from Janna Rous. Originally published here.

So, all of a sudden you’re stuck at home because of the new coronavirus.  You’re looking at your M&E commitments and your program commitments.  Do you put them all on hold and postpone them until the coronavirus threat has passed and everything goes back to normal?  Or is there a way to still get things done?  This article reviews 8 ways you can adapt your M&E during the pandemic.

Here are a few ideas that you and your team might consider doing to make sure you can stay on track (and maybe even IMPROVE your MEAL practices) even if you might currently be in the middle of a lockdown, or if you think you might be going into a lockdown soon:

1. Phone Call Interviews instead of In-Person Interviews

Do you have any household assessments or baseline surveys or post-distribution monitoring that you had planned in the next 1 to 3 months? Is there a way that you can carry out these interviews by phone or WhatsApp calls?  This is the easiest and most direct way to carry on with your current M&E plan.  Instead of doing these interviews face-to-face, just get them on a call.  I’ve created a checklist to help you prepare for doing phone call interviews – click here to get the “Humanitarian’s Phone Call Interview Checklist”.  Here are a few things you need to think through to transition to a phone-call methodology:

  • You need phone numbers and names of people that need to be surveyed. Do you have these?  Or is there a community leader who might be able to help you get these?
  • You also need to expect that a LOT of people may not answer their phone. So instead of “sampling” people for a survey, you might want to just plan on calling almost everyone on that list.
  • Just like for a face-to-face interview, you need to know what you’re going to say. So you need to have a script ready for how you introduce yourself and ask for consent to do a phone questionnaire.  It’s best to have a structured interview questionnaire that you follow for every phone call, just like you would in a face-to-face assessment.
  • You also need to have a way to enter data as you ask the questions. This usually depends on what you’re most comfortable with – but I recommend preparing an ODK or KoboToolbox questionnaire, just like you would for an in-person survey, and filling it out as you do the interview over the phone.  I find it easiest to enter the data into KoboToolbox “Webform” instead of the mobile app, because I can type information faster into my laptop rather than thumb-type it into a mobile device.  But use what you have!
  • If you’re not comfortable in KoboToolbox, you could also prepare an Excel sheet for directly entering answers – but this will probably require a lot more data cleaning later on.
  • When you’re interviewing, it’s usually faster to type down the answers in the language you’re interviewing in. If you need your final data collection to be in English, go back and do the translation after you’ve hung up the phone.
  • If you want a record of the interview, ask if you can record the phone call. When the person says yes, then just record it so you can go back and double check an answer if you need to.
  • Very practically – if you’re doing lots of phone calls in a day, it is easier on your arm and your neck if you use a headset instead of holding your phone to your ear all day!
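The checklist above recommends preparing an ODK or KoboToolbox questionnaire just as you would for an in-person survey. Those tools define forms using the XLSForm format, whose “survey” sheet has `type`, `name`, and `label` columns. Below is a minimal sketch that writes such a sheet as CSV; the question names and labels are illustrative only, and a real form would also need a “choices” sheet for any select-type questions:

```python
import csv
import io

# Minimal XLSForm "survey" sheet for a phone interview, written as CSV.
# Question names and labels here are hypothetical examples.
questions = [
    ("note",    "intro",           "Introduce yourself and ask for consent."),
    ("text",    "respondent_name", "What is your name?"),
    ("integer", "household_size",  "How many people live in your household?"),
    ("text",    "main_concern",    "What is your biggest concern right now?"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["type", "name", "label"])  # required XLSForm columns
writer.writerows(questions)
csv_text = buf.getvalue()
# Paste or convert this into the "survey" sheet of an XLSForm workbook,
# then upload it to KoboToolbox or ODK to generate a web or mobile form.
```

Keeping the form definition in a structured file like this (rather than a Word document) makes it easy to reuse the same questionnaire for in-person and phone-based rounds.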

2. Collect Videos & Photos Directly from Households and Communities

When you’re doing any in-person MEAL activities, you’re always able to observe evidence. You can look around and SEE impact; you don’t just hear about it through an interview or group discussion.  But when you’re doing M&E remotely, you can’t double-check to see what impact really looks like.  So I recommend:

  • Connect with as many beneficiaries and team members as possible through WhatsApp or another communication app and collect photos and videos of evidence directly from them.
  • Video – Maybe someone has a story of impact they can share with you through video. Or if you’re overseeing a Primary Health Care clinic, perhaps you can have a staff member walk you through the clinic with a video so you can do a remote assessment.
  • Pictures – Maybe you can ask everyone to send you a picture of (for example) their “hand washing station with soap and water” (if you’re monitoring a WASH program). Or perhaps you want evidence that the local water point is functioning.

3. Programme Final Evaluation

It’s a good practice to do a final evaluation review when you reach the end of a program.  If you have a program finishing in the next 1-3 months, and you want to do a final review to assess lessons learned overall, then you can also do this remotely!

  • Make a list of all the stakeholders that would be great to talk to: staff members, a few beneficiaries, government authorities (local and/or national), other NGOs, coordination groups, partner organizations, local community leaders.
  • Then go in search of either their phone numbers, their email addresses, their Skype accounts, or their WhatsApp numbers and get in touch.
  • It’s best if you can get on a video chat with as many of them as possible – because it’s much more personal and easy to communicate if you can see one another’s faces! But if you can just talk with audio – that’s okay too.
  • Prepare a semi-structured interview: a list of questions you want to talk through about the impact, what went well, and what could have gone better. And if anything interesting comes up, don’t be afraid to ask new questions on the spot or to skip questions that don’t make sense in the context.
  • You can also gather together any monitoring reports and analysis produced during the project’s implementation period, plus pictures of the interventions.
  • Use all this information to create a final “lessons learned” evaluation document. This is a fantastic way to continually improve the way you do humanitarian programming.

4. Adapt Your Focus Group Discussion Plan

If everyone is at home because your country has imposed a lockdown, it will be very difficult to do a focus group discussion because… you can’t be in groups!  So, decide with your team whether it might be better to switch your monitoring activity from collecting qualitative data in group discussions to one-on-one phone interviews with several people to collect the same information.

  • There are some dynamics that you will miss in one-to-one interviews, information that may only come out during group discussions. (Especially where you’re collecting sensitive or “taboo” data.) Identify what that type of information might be – and either skip those types of questions for now, or brainstorm how else you could collect the information through phone-calls.

5. Adapt Your Key Informant Interviews

If you normally carry out Key Informant Interviews, it would be a great idea to think what “extra” questions you need to ask this month in the midst of the coronavirus pandemic.

  • If you normally ask questions around your program sector areas, think about collecting a few extra data points about feelings, needs, fears, and challenges that are a reality in light of COVID-19. Are people facing any additional pressures due to the pandemic? Or are there any new humanitarian needs right now? Are there any upcoming needs that people are anticipating?
  • It goes without saying that if your Key Informant Interviews are normally in person, you’ll want to carry these out by phone for the foreseeable future!

6. What To Do About Third Party Monitoring

Some programs and donors use Third Party Monitors to assess their program results independently.  If you normally hire third party monitors, and you’ve got some third party monitoring planned for the next 1-3 months, you need to get on the phone with this team and make a new plan. Here are a few things you might want to think through with your third party monitors:

  • Can the third party carry out their monitoring by phone, in the same ways I’ve outlined above?
  • But also think through – is it worth it to get a third party monitor to assess results remotely? Is it better to postpone their monitoring?  Or is it worth it to carry on regardless?
  • What is the budget implication? If cars won’t be used, is there any cost-savings?  Is there any additional budget they’ll need for air-time costs for their phones?
  • Make sure there is a plan to gather as much photo and video evidence as possible (see point 2 above!)
  • If they’re carrying out phone call interviews it would also be a good recommendation to record phone calls if possible and with consent, so you have the records if needed.

7. Manage Expectations – The Coronavirus Pandemic May Impact Your Program Results

You probably didn’t predict that a global pandemic would occur in the middle of your project cycle and throw your entire plan off.  Go easy on yourself and your team!  It is most likely that the results you’d planned for might not end up being achieved this year.  Your donors know this (because they’re probably also on lockdown).  You can’t control the pandemic, but you can control your response.  So proactively manage your own expectations, your manager’s expectations and your donor’s expectations.

  • Get on a Skype or Zoom call with the project managers and review each indicator of your M&E plan. In light of the pandemic, what indicator targets will most likely change?
  • Look through the baseline numbers in your M&E plan – is it possible that the results at the END of your project might be worse than even your baseline numbers? For example, if you have a livelihoods project, it is possible that income and livelihoods will be drastically reduced by a country-wide lockdown.  Or are you running an education program?  If schools have been closed, then will a comparison to the baseline be possible?
  • Once you’ve done a review of your M&E plan, create a very simple revised plan that can be talked through with your program donor.
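The indicator-by-indicator review described above can be kept as a simple structured list that you walk through with the program team and donor. As a hedged sketch (indicator names, percentages, and decision labels are all hypothetical), the snippet below compares each indicator’s baseline with a revised target and flags what to discuss:

```python
# Hypothetical M&E indicator review: compare each indicator's
# pre-pandemic baseline with a revised target agreed with the team.
# Values are illustrative percentages; None means the indicator
# cannot currently be measured (e.g., schools closed).
indicators = [
    {"name": "households with stable income", "baseline": 55, "target": 70, "revised": 45},
    {"name": "children attending school",     "baseline": 80, "target": 90, "revised": None},
]

def classify(indicator):
    """Flag how a revised target compares with the baseline."""
    revised = indicator["revised"]
    if revised is None:
        return "suspend: cannot be measured during lockdown"
    if revised < indicator["baseline"]:
        return "warn donor: endline may fall below baseline"
    return "reduced but achievable"

# Build the talking points for the donor call.
review = {ind["name"]: classify(ind) for ind in indicators}
```

Even a rough table like this makes the donor conversation concrete: for each indicator you can say whether the target is merely reduced, likely to end below baseline, or unmeasurable for now.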

8. Talk To Your Donors About What You Can Do Remotely

When you’re on the phone with your donors, don’t only talk about revised program indicators.

  • Also talk about a revised timeframe – is there any flexibility on the program timeframe, or deadlines for interim reporting on indicators? What are their expectations?
  • Also talk about what you CAN do remotely. Discuss with them the plan you have for carrying on everything possible that can be done remotely.
  • And don’t forget to discuss financial implications of changes to timeframe.

 

What’s Happening with Tech and MERL?

by Linda Raftree, Independent Consultant and MERL Tech organizer

Back in 2014, the humanitarian and development sectors were in the heyday of excitement over innovation and Information and Communication Technologies for Development (ICT4D). The role of ICTs specifically for monitoring, evaluation, research and learning (aka “MERL Tech“) had not been systematized (as far as I know), and it was unclear whether there actually was “a field.” I had the privilege of writing a discussion paper with Michael Bamberger to explore how and why new technologies were being tested and used in the different steps of a traditional planning, monitoring and evaluation cycle. (See graphic 1 below, from our paper).

The approaches highlighted in 2014 focused on mobile phones, for example: text messages (SMS), mobile data gathering, use of mobiles for photos and recording, and mapping with specific handheld global positioning systems (GPS) devices or GPS installed in mobile phones. Promising technologies included tablets, which were only beginning to be used for M&E; “the cloud,” which enabled easier updating of software and applications; remote sensing and satellite imagery; dashboards; and online software that helped evaluators do their work more easily. Social media was also really taking off in 2014. It was seen as a potential way to monitor discussions among program participants, gather feedback from program participants, and considered an underutilized tool for greater dissemination of evaluation results and learning. Real-time data and big data and feedback loops were emerging as ways that program monitoring could be improved, and quicker adaptation could happen.

In our paper, we outlined five main challenges for the use of ICTs for M&E: selectivity bias; technology- or tool-driven M&E processes; over-reliance on digital data and remotely collected data; low institutional capacity and resistance to change; and privacy and protection. We also suggested key areas to consider when integrating ICTs into M&E: quality M&E planning; design validity; value-add (or not) of ICTs; using the right combination of tools; adapting and testing new processes before roll-out; technology access and inclusion; motivation to use ICTs; privacy and protection; unintended consequences; local capacity; measuring what matters (not just what the tech allows you to measure); and effectively using and sharing M&E information and learning.

We concluded that:

  • The field of ICTs in M&E is emerging and activity is happening at multiple levels and with a wide range of tools and approaches and actors. 
  • The field needs more documentation on the utility and impact of ICTs for M&E. 
  • Pressure to show impact may open up space for testing new M&E approaches. 
  • A number of pitfalls need to be avoided when designing an evaluation plan that involves ICTs. 
  • Investment in the development, application and evaluation of new M&E methods could help evaluators and organizations adapt their approaches throughout the entire program cycle, making them more flexible and adjusted to the complex environments in which development initiatives and M&E take place.

Where are we now: MERL Tech in 2019

Much has happened globally over the past five years in the wider field of technology, communications, infrastructure, and society, and these changes have influenced the MERL Tech space. Our 2014 focus on basic mobile phones, SMS, mobile surveys, mapping, and crowdsourcing might now appear quaint, considering that worldwide access to smartphones and the Internet has expanded beyond the expectations of many. We know that access is not evenly distributed, but the fact that more and more people are getting online cannot be disputed. Some MERL practitioners are using advanced artificial intelligence, machine learning, biometrics, and sentiment analysis in their work. And as smartphone and Internet use continue to grow, more data will be produced by people around the world. The way that MERL practitioners access and use data will likely continue to shift, and the composition of MERL teams and their required skillsets will also change.

The excitement over innovation and new technologies seen in 2014 could also be seen as naive, however, considering some of the negative consequences that have emerged, for example violence inspired by social media (such as that in Myanmar), election and political interference through the Internet, misinformation and disinformation, and the race to the bottom through the online “gig economy.”

In this changing context, a team of MERL Tech practitioners (both enthusiasts and skeptics) embarked on a second round of research in order to try to provide an updated “State of the Field” for MERL Tech that looks at changes in the space between 2014 and 2019.

Based on MERL Tech conferences and wider conversations in the MERL Tech space, we identified three general waves of technology emergence in MERL:

  • First wave: Tech for Traditional MERL: Use of technology (including mobile phones, satellites, and increasingly sophisticated databases) to do ‘what we’ve always done,’ with a focus on digital data collection and management. For these uses of “MERL Tech,” there is a growing evidence base. 
  • Second wave:  Big Data. Exploration of big data and data science for MERL purposes. While plenty has been written about big data for other sectors, the literature on the use of big data and data science for MERL is somewhat limited, and it is more focused on potential than actual use. 
  • Third wave:  Emerging approaches. Technologies and approaches that generate new sources and forms of data; offer different modalities of data collection; provide ways to store and organize data, and provide new techniques for data processing and analysis. The potential of these has been explored, but there seems to be little evidence base to be found on their actual use for MERL. 

We’ll be doing a few sessions at the American Evaluation Association conference this week to share what we’ve been finding in our research. Please join us if you’ll be attending the conference!

Session Details:

Thursday, Nov 14, 2.45-3.30pm: Room CC101D

Friday, Nov 15, 3.30-4.15pm: Room CC101D

Saturday, Nov 16, 10.15-11am. Room CC200DE

Join us for MERL Tech DC, Sept 5-6th!

MERL Tech DC: Taking Stock

September 5-6, 2019

FHI 360 Academy Hall, 8th Floor
1825 Connecticut Avenue NW
Washington, DC 20009

We gathered at the first MERL Tech Conference in 2014 to discuss how technology was enabling the field of monitoring, evaluation, research and learning (MERL). Since then, rapid advances in technology and data have altered how most MERL practitioners conceive of and carry out their work. New media and ICTs have permeated the field to the point where most of us can’t imagine conducting MERL without the aid of digital devices and digital data.

The rosy picture of the digital data revolution and an expanded capacity for decision-making based on digital data and ICTs has been clouded, however, with legitimate questions about how new technologies, devices, and platforms — and the data they generate — can lead to unintended negative consequences or be used to harm individuals, groups and societies.

Join us in Washington, DC, on September 5-6 for this year’s MERL Tech Conference where we’ll be taking stock of changes in the space since 2014; showcasing promising technologies, ideas and case studies; sharing learning and challenges; debating ideas and approaches; and sketching out a vision for an ideal MERL future and the steps we need to take to get there.

Conference strands:

Tech and traditional MERL:  How is digital technology enabling us to do what we’ve always done, but better (consultation, design, community engagement, data collection and analysis, databases, feedback, knowledge management)? What case studies can be shared to help the wider sector learn and grow? What kinks do we still need to work out? What evidence base exists that can support us to identify good practices? What lessons have we learned? How can we share these lessons and/or skills with the wider community?

Data, data, and more data: How are new forms and sources of data allowing MERL practitioners to enhance their work? How are MERL Practitioners using online platforms, big data, digitized administrative data, artificial intelligence, machine learning, sensors, drones? What does that mean for the ways that we conduct MERL and for who conducts MERL? What concerns are there about how these new forms and sources of data are being used and how can we address them? What evidence shows that these new forms and sources of data are improving MERL (or not improving MERL)? What good practices can inform how we use new forms and sources of data? What skills can be strengthened and shared with the wider MERL community to achieve more with data?

Emerging tools and approaches: What can we do now that we’ve never done before? What new tools and approaches are enabling MERL practitioners to go the extra mile? Is there a use case for blockchain? What about facial recognition and sentiment analysis in MERL? What are the capabilities of these tools and approaches? What early cases or evidence is there to indicate their promise? What ideas are taking shape that should be tried and tested in the sector? What skills can be shared to enable others to explore these tools and approaches? What are the ethical implications of some of these emerging technological capabilities?

The Future of MERL: Where should we be going and what should the future of MERL look like? What does the state of the sector, of digital data, of technology, and of the world in which we live mean for an ideal future for the MERL sector? Where do we need to build stronger bridges for improved MERL? How should we partner and with whom? Where should investments be taking place to enhance MERL practices, skills and capacities? How will we continue to improve local ownership, diversity, inclusion and ethics in technology-enabled MERL? What wider changes need to happen in the sector to enable responsible, effective, inclusive and modern MERL?

Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines.

Submit your session ideas, register to attend the conference, or reserve a demo table for MERL Tech DC now!

You’ll join some of the brightest minds working on MERL across a wide range of disciplines – evaluators, development and humanitarian MERL practitioners, small and large non-profit organizations, government and foundations, data scientists and analysts, consulting firms and contractors, technology developers, and data ethicists – for 2 days of in-depth sharing and exploration of what’s been happening across this multidisciplinary field and where we should be heading.

Report back on MERL Tech DC

Day 1, MERL Tech DC 2018. Photo by Christopher Neu.

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of “MERL Tech” as an initiative are to:

  • Transform and modernize MERL in an intentionally responsible and inclusive way
  • Promote ethical and appropriate use of tech (for MERL and more broadly)
  • Encourage diversity & inclusion in the sector & its approaches
  • Improve development, tech, data & MERL literacy
  • Build/strengthen community, convene, help people talk to each other
  • Help people find and use evidence & good practices
  • Provide a platform for hard and honest talks about MERL and tech and the wider sector
  • Spot trends and future-scope for the sector

Our fifth MERL Tech DC conference took place on September 6-7, 2018, with a day of pre-workshops on September 5th. Some 300 people from 160 organizations joined us for the two days, and another 70 people attended the pre-workshops.

Attendees came from a wide diversity of professions and disciplines:

What professional backgrounds did we see at MERL Tech DC in 2018?

An unofficial estimate of speaker racial and gender diversity is here.

Gender balance on panels

At this year’s conference, we focused on 5 themes (See the full agenda here):

  1. Building bridges, connections, community, and capacity
  2. Sharing experiences, examples, challenges, and good practice
  3. Strengthening the evidence base on MERL Tech and ICT4D approaches
  4. Facing our challenges and shortcomings
  5. Exploring the future of MERL

As always, sessions related to: technology for MERL, MERL of ICT4D and Digital Development programs, MERL of MERL Tech, digital data for adaptive decisions/management, ethical and responsible data approaches, and cross-disciplinary community building.

Big Data and Evaluation Session. Photo by Christopher Neu.

Sessions included plenaries, lightning talks and breakout sessions. You can find a list of sessions here, including any presentations that have been shared by speakers and session leads. (Go to the agenda and click on the session of interest. If we have received a copy of the presentation, there will be a link to it in the session description).

One topic that we explored more in-depth over the two days was the need to get better at measuring ourselves and understanding both the impact of technology on MERL (the MERL of MERL Tech) and the impact of technology overall on development and societies.

As Anahi Ayala Iacucci said in her opening talk — “let’s think less about what technology can do for development, and more about what technology does to development.” As another person put it, “We assume that access to tech is a good thing and immediately helps development outcomes — but do we have evidence of that?”

Feedback from participants

Some 17.5% of participants filled out our post-conference feedback survey, and 70% of them rated their experience either “awesome” or “good”. Another 7% of participants rated individual sessions through the “Sched” app, with an average session satisfaction rating of 8.8 out of 10.

Topics that survey respondents suggested for next time include: more basic and more advanced tracks, more sessions relating to ethics and responsible data, and a greater focus on accountability in the sector. Read the full Feedback Report here!

What’s next? State of the Field Research!

In order to arrive at an updated sense of where the field of technology-enabled MERL is, a small team of us is planning to conduct some research over the next year. At our opening session, we did a little crowdsourcing to gather input and ideas about what the most pressing questions are for the “MERL Tech” sector.

We’ll be keeping you informed here on the blog about this research and welcome any further input or support! We’ll also be sharing more about individual sessions here.

MERL Tech London 2018 Agenda is out!

We’ve been working hard over the past several weeks to finish up the agenda for MERL Tech London 2018, and it’s now ready!

We’ve got workshops, panels, discussions, case studies, lightning talks, demos, community building, socializing, and an evening reception with a Fail Fest!

Topics range from mobile data collection, to organizational capacity, to learning and good practice for information systems, to data science approaches, to qualitative methods using mobile ethnography and video, to biometrics and blockchain, to data ethics and privacy and more.

You can search the agenda to find the topics, themes and tools that are most interesting, identify sessions that are most relevant to your organization’s size and approach, pick the session methodologies that you prefer (some of us like participatory and some of us like listening), and to learn more about the different speakers and facilitators and their work.

Tickets are going fast, so be sure to snap yours up before it’s too late! (Register here!)

View the MERL Tech London schedule & directory.


MERL Tech 101: Google forms

by Daniel Ramirez-Raftree, MERL Tech volunteer

In his MERL Tech DC session on Google Forms, Samhir Vesdev from IREX led a hands-on workshop on Google Forms and laid out some of the software’s capabilities and limitations. Much of the session focused on Google Forms’ central concepts and the practicality of building a form.

At its most fundamental level, a form is made up of several sections, each designed to contain a question or prompt. The centerpiece of a section is the question cell, which holds the question text. Next to the question cell is a drop-down menu for selecting the question format, which ranges from multiple choice to short answer.


At the bottom right-hand corner of the section you will find three dots arranged vertically. Clicking this toggle opens a drop-down menu whose options vary depending on the question format. One common option is to include a few lines of description, which is useful when the question needs further elaboration or instruction. Another is the data validation option, which restricts the kinds of text a respondent can input. This is useful when, for example, a question uses the short-answer format but the form administrators need responses limited to numerals for the sake of analysis.

The session also covered functions available in the “response” tab, which sits at the top of the page. Here one can find a toggle labeled “accepting responses” that can be turned on or off depending on whether the form should be open for submissions.

Additionally, in the top right corner of this tab, three dots arranged vertically open the tab’s options menu. Here you will find options such as enabling email notifications for each new response, useful if you want to be alerted whenever someone responds to the form. Also in this drop-down, you can click “select response destination” to link the Google Form with Google Sheets, which simplifies later analysis. The green Sheets icon next to the options drop-down will take you to the sheet that contains the collected data.

Other capabilities in Google Forms include changing the color scheme, which you can access by clicking the palette icon at the top of the screen. Also, by clicking the settings button at the top of the screen, you can limit respondents to one response each, preventing people from skewing the data with multiple submissions, or you can enable response editing after submission so respondents can go back and correct their answers.

Branching is another important tool in Google Forms. It lets a particular response to a question (say, a multiple-choice question) lead the respondent to a related follow-up question only if they answer in a certain way.

For example, suppose one section asks “did you like the workshop?” with the answer options “yes” and “no,” and you want to ask what respondents didn’t like only if they answer “no.” You can design the form to take respondents who answer “no” to a section with the question “what didn’t you like about the workshop?” and then bring them back to the main workflow once they’ve answered this additional question.

To do this, create at least two new sections (by clicking “add section” in the small menu to the right of the sections), one for each path a respondent’s answer can lead them down. Then, in the options menu on the lower right-hand side, select “go to section based on answer” and, using the menu that appears, set the path you desire.
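The same yes/no routing can also be built programmatically with Google Apps Script’s FormApp service, an approach not covered in the session. The sketch below is illustrative: the function and form names are invented for this example, and the script runs only inside Google’s Apps Script environment, not in a local JavaScript runtime.

```javascript
// Illustrative Apps Script sketch: recreate the branching workshop form.
// In Forms, a "section" corresponds to a page break item in FormApp.
function buildFeedbackForm() {
  var form = FormApp.create('Workshop Feedback');

  // The question whose answer determines the respondent's path.
  var liked = form.addMultipleChoiceItem()
      .setTitle('Did you like the workshop?');

  // Follow-up section shown only to respondents who answer "no".
  var followUp = form.addPageBreakItem().setTitle('Tell us more');
  form.addParagraphTextItem()
      .setTitle("What didn't you like about the workshop?");

  // Main workflow resumes here; the follow-up section flows into it
  // naturally, since it is the next page in the form.
  var mainFlow = form.addPageBreakItem().setTitle('Remaining questions');

  // Passing a page-break item as the second argument to createChoice
  // is the scripted equivalent of "go to section based on answer".
  liked.setChoices([
    liked.createChoice('Yes', mainFlow),  // skip the follow-up
    liked.createChoice('No', followUp)    // detour to the follow-up
  ]);
}
```

A choice created without a navigation argument simply continues to the next section, so only the answers that need special routing have to name a destination.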

These are just some of the tools Google Forms offers, but with these alone it is possible to build an effective form to collect the data you need. Samhir ended with a word of caution: Google has been known to shut down popular apps, so be wary of building an organizational strategy around Google Forms.

M&E Squared: Evaluating M&E Technologies

by Roger Nathanial Ashby, Co-Founder & Principal Consultant, OpenWise

The universe of MERL Tech solutions has grown exponentially. In 2008, monitoring and evaluation technology within global development was mostly confined to mobile data collection tools like Open Data Kit (ODK) and Excel spreadsheets for analyzing and visualizing survey data. In the intervening decade, a myriad of tools, companies and NGOs have been created to advance the efficiency and effectiveness of monitoring, evaluation, research and learning (MERL) through the use of technology. Whether it’s M&E platforms or suites, satellite imagery, remote sensors, or chatbots, new innovations are being deployed every day in the field.

However, how do we evaluate the impact when MERL Tech is the intervention itself? That was the question and task put to participants of the “M&E Squared” workshop at MERL Tech 2017.

Workshop participants were separated into three groups that were each given a case study to discuss and analyze. One group was given a case about improving the learning efficiency of health workers in Liberia through the mHero Health Information System (HIS). The system was deployed as a possible remedy to some of the information communication challenges identified during the 2014 West African Ebola outbreak. A second group was given a case about the use of RapidPro to remind women to attend antenatal care (ANC) for preventive malaria medicine in Guinea. The USAID StopPalu project goal was to improve the health of infants by increasing the percent of women attending ANC visits. The final group was given a case about using remote images to assist East African pastoralists. The Satellite Assisted Pastoral Resource Management System (SAPARM) informs pastoralists of vegetation through remote sensing imagery so they can make better decisions about migrating their livestock.

After familiarizing ourselves with the particulars of the case studies, each group pondered a series of questions and then presented its findings to all participants. The questions included:

  1. How would you assess your MERL Tech’s relevance?
  2. How would you evaluate the effectiveness of your MERL Tech?
  3. How would you measure efficiency?
  4. How would you assess sustainability?

Each group came up with some innovative answers to the questions posed and our facilitators and session leads (Alexandra Robinson & Sutyajeet Soneja from USAID and Molly Chen from RTI) will soon synthesize the workshop findings and notes into a concise written brief for the MERL Tech community.

Before the workshop closed, we were all introduced to the great work done by SIMLab (Social Impact Lab) in this area through their SIMLab Monitoring and Evaluation Framework. The framework identifies key criteria for evaluating M&E technologies, including:

  1. Relevance – The extent to which the technology choice is appropriately suited to the priorities and capacities of the context of the target group or organization.
  2. Effectiveness – A measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.
  3. Efficiency – Measure of the outputs (qualitative and quantitative) in relation to the inputs.
  4. Impact – The positive and negative changes produced by the introduction of a technology, or a change in a technology tool or platform, on the overall development intervention (directly or indirectly; intended or unintended).
  5. Sustainability – Measure of whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn.
  6. Coherence – The extent to which the technology relates to the broader policy context (development, market, communication networks, data standards & interoperability mandates, and national & international law) within which it was developed and implemented.

While it’s unfortunate that SIMLab stopped most operations in early September 2017, their exceptional work in this and other areas lives on and you can access the full framework here.

I learned a great deal in this session from the facilitators and my colleagues attending the workshop. I would encourage everyone in the MERL Tech community to bring the ideas generated during this workshop, and the great work done by SIMLab, into their development practice. We certainly intend to integrate many of these insights into our work at OpenWise. Read more about “The Evidence Agenda” here on SIMLab’s blog.