MERL Tech News

The future of development evaluation in the age of big data

By Michael Bamberger, Independent Evaluation Consultant. Michael has been involved in development evaluation for 50 years and recently wrote the report: “Integrating Big Data into the Monitoring and Evaluation of Development Programs” for UN Global Pulse.

We are living in an increasingly quantified world.

Multiple sources of data can now be generated and analyzed in real time. They can be synthesized to capture complex interactions among data streams and to identify previously unsuspected linkages among seemingly unrelated factors [such as the purchase of diapers and increased sales of beer]. We can now quantify and monitor ourselves, our houses (even the contents of our refrigerator!), our communities, our cities, our purchases and preferences, our ecosystem, and multiple dimensions of the state of the world.

These rich sources of data are becoming increasingly accessible to individuals, researchers and businesses through huge numbers of mobile phone and tablet apps and user-friendly data analysis programs.

The influence of digital technology on international development is growing.

Many of these apps and other big data/data analytics tools are now being adopted by international development agencies. Due to their relatively low cost, ease of application, and accessibility in remote rural areas, these approaches are proving particularly attractive to non-profit organizations, and the majority of NGOs probably now use some kind of mobile phone app.

Apps are widely used for early warning systems, emergency relief, dissemination of information (to farmers, mothers, fishermen and other groups with limited access to markets), identifying and collecting feedback from marginal and vulnerable groups, and permitting rapid analysis of poverty. Data analytics are also used to create integrated databases that synthesize all of the information on topics as diverse as national water resources, human trafficking, updates on conflict zones, climate change and many other development topics.

Table 1: Widely used big data/data analytics applications in international development. Each application below is followed by the big data/data analytics tools commonly used for it.

Early warning systems for natural and man-made disasters
  • Analysis of Twitter, Facebook and other social media
  • Analysis of radio call-in programs
  • Satellite images and remote sensors
  • Electronic transaction records [ATM, on-line purchases]
Emergency relief
  • GPS mapping and tracking
  • Crowd-sourcing
  • Satellite images
Dissemination of information to small farmers, mothers, fishermen and other traders
  • Mobile phones
  • Internet
Feedback from marginal and vulnerable groups and on sensitive topics
  • Crowd-sourcing
  • Secure hand-held devices [e.g. UNICEF’s “U-Report” device]
Rapid analysis of poverty and identification of low-income groups
  • Analysis of phone records
  • Social media analysis
  • Satellite images [e.g. using thatched roofs as a proxy indicator of low-income households]
  • Electronic transaction records
Creation of an integrated database synthesizing the multiple sources of data on a development topic, such as:
  • National water resources
  • Human trafficking
  • Agricultural conditions in a particular region


Evaluation is lagging behind.

Surprisingly, program evaluation is lagging behind in the adoption of big data/analytics. The few available studies report that a high proportion of evaluators are not very familiar with big data/analytics, and significantly fewer report having used big data in their professional evaluation work. Furthermore, while many international development agencies have created data development centers within the past few years, these centers are often staffed by data scientists with limited familiarity with conventional evaluation methods, and institutional links to agency evaluation offices remain weak.

A recent study on the current status of the integration of big data into the monitoring and evaluation of development programs identified a number of reasons for the slow adoption of big data/analytics by evaluation offices:

  • Weak institutional links between data development centers and evaluation offices
  • Differences of methodology and the approach to data generation and analysis
  • Issues concerning data quality
  • Concerns by evaluators about the commercial, political and ethical nature of how big data is generated, controlled and used.

(Linda Raftree talks about a number of other reasons why parts of the development sector may be slow to adopt big data.)

Key questions for the future of evaluation in international development…

The above gives rise to two sets of questions concerning the future role of evaluation in international development:

  • The future direction of development evaluation. Given the rapid expansion of big data in international development, it is likely there will be a move towards integrated program information systems. These will begin to generate, analyze and synthesize data for program selection, design, management, monitoring, evaluation and dissemination. A possible scenario is that program evaluation will no longer be considered a specialized function that is the responsibility of a separate evaluation office; rather, it will become one of the outputs generated from the program database. If this happens, evaluation may be designed and implemented not by evaluation specialists using conventional evaluation methods (experimental and quasi-experimental designs, theory-based evaluation) but by data analysts using methods such as predictive analytics and machine learning.

Key Question: Is this scenario credible? If so how widespread will it become and over what time horizon? Is it likely that evaluation will become one of the outputs of an integrated management information system? And if so is it likely that many of the evaluation functions will be taken over by big data analysts?

  • The changing role of development evaluators and the evaluation office. We argued above that currently many or perhaps most development evaluators are not very familiar with big data/analytics, and even fewer apply these approaches. There are both professional reasons (how evaluators and data scientists are trained) and organizational reasons (the limited formal links between evaluation offices and data centers in many organizations) that explain the limited adoption of big data approaches by evaluators. So, assuming the above scenario proves to be at least partially true, what will be required for evaluators to become sufficiently conversant with these new approaches to be able to contribute to how big data-focused evaluation approaches are designed and implemented? According to Pete York at Communityscience.com, the big challenge and opportunity for evaluators is to ensure that the scientific method becomes an essential part of the data analytics toolkit. Recent studies by the Global Environment Facility (GEF) illustrate some of the ways that big data from sources such as satellite images and remote sensors can be used to strengthen conventional quasi-experimental evaluation designs. In a number of evaluations, these data sources were combined with propensity score matching to select matched samples for pretest-posttest comparison group designs used to evaluate the effectiveness of programs to protect forest cover or reserves for mangrove swamps (a simplified sketch of this kind of analysis follows the key question below).

Key Question: Assuming there will be a significant change in how the evaluation function is organized and managed, what will be required to bridge the gap between evaluators and data analysts? How likely is it that the evaluators will be able to assume this new role and how likely is it that organizations will make the necessary adjustments to facilitate these transformations?
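
To make the GEF example more concrete, below is a minimal sketch of a propensity-score-matched comparison group built from satellite-derived covariates. The file name, column names and covariates are hypothetical placeholders, and the sketch illustrates the general technique rather than the GEF's actual analysis.

```python
# Sketch: propensity score matching with satellite-derived covariates.
# Hypothetical data: one row per village, with a 'protected' treatment flag,
# baseline covariates derived from satellite images, and an outcome
# (change in forest cover between two time points).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("villages.csv")  # hypothetical dataset
covariates = ["baseline_forest_cover", "slope", "distance_to_road", "population_density"]

# 1. Estimate propensity scores: the probability of being in a protected area,
#    given the remotely sensed baseline covariates.
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["protected"])
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

# 2. Match each protected village to the unprotected village with the closest score.
treated = df[df["protected"] == 1]
control = df[df["protected"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare pretest-posttest changes in forest cover across the matched groups.
effect = treated["forest_cover_change"].mean() - matched_control["forest_cover_change"].mean()
print(f"Estimated difference in forest cover change: {effect:.3f}")
```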

What do you think? How will these scenarios play out?

Note: Stay tuned for Michael’s next post focusing on how to build bridges between evaluators and big data analysts.

Below are some useful references if you’d like to read more on this topic:

Anderson, C. (2008). “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.” Wired Magazine, 6/23/08. The original article in the debate on whether big data analytics requires a theoretical framework.

Bamberger, M., Raftree, L. and Olazabal, V. (2016). “The role of new information and communication technologies in equity-focused evaluation: opportunities and challenges.” Evaluation, 22(2), 228–244. A discussion of the ethical issues and challenges of new information technology.

Bamberger, M. (2017). Integrating Big Data into the Monitoring and Evaluation of Development Programs. UN Global Pulse with support from the Rockefeller Foundation. A review of progress in the incorporation of new information technology into development programs and the opportunities and challenges of building bridges between evaluators and big data specialists.

Meier, P. (2015). Digital Humanitarians: How Big Data Is Changing the Face of Humanitarian Response. CRC Press. A review, with detailed case studies, of how digital technology is being used by NGOs and civil society.

O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Books. How widely used digital algorithms negatively affect the poor and marginalized sectors of society.

Petersson, G.K. and Breul, J.D. (eds.) (2017). Cyber Society, Big Data and Evaluation. Comparative Policy Evaluation, Volume 24. Transaction Publishers. The evolving role of evaluation in cyber society.

Wolf, G. “The Quantified Self” [TED Talk]. A quick overview of the multiple self-monitoring measurements that you can collect on yourself.

World Bank (2016). Digital Dividends. World Development Report. An overview of how the expansion of digital technology is affecting all areas of our lives.

Buckets of data for MERL

by Linda Raftree, Independent Consultant and MERL Tech Organizer

It can be overwhelming to get your head around all the different kinds of data and the various approaches to collecting or finding data for development and humanitarian monitoring, evaluation, research and learning (MERL).

Though there are many ways of categorizing data, lately I find myself conceptually organizing data streams into four general buckets when thinking about MERL in the aid and development space:

  1. ‘Traditional’ data. How we’ve been doing things for(pretty much)ever. Researchers, evaluators and/or enumerators are in relative control of the process. They design a specific questionnaire or a data gathering process and go out and collect qualitative or quantitative data; they send out a survey and request feedback; they do focus group discussions or interviews; or they collect data on paper and eventually digitize the data for analysis and decision-making. Increasingly, we’re using digital tools for all of these processes, but they are still quite traditional approaches (and there is nothing wrong with traditional!).
  2. ‘Found’ data. The Internet, digital data and open data have made it lots easier to find, share, and re-use datasets collected by others, whether this is internally in our own organizations, with partners or just in general. These tend to be datasets collected in traditional ways, such as government or agency data sets. In cases where the datasets are digitized and have proper descriptions, clear provenance, consent has been obtained for use/re-use, and care has been taken to de-identify them, they can eliminate the need to collect the same data over again. Data hubs are springing up that aim to collect and organize these data sets to make them easier to find and use.
  3. ‘Seamless’ data. Development and humanitarian agencies are increasingly using digital applications and platforms in their work — whether bespoke or commercially available ones. Data generated by users of these platforms can provide insights that help answer specific questions about their behaviors, and the data is not limited to quantitative data. This data is normally used to improve applications and platform experiences, interfaces, content, etc., but it can also provide clues into a host of other online and offline behaviors, including knowledge, attitudes, and practices. One cautionary note is that because this data is collected seamlessly, users of these tools and platforms may not realize that they are generating data or understand the degree to which their behaviors are being tracked and used for MERL purposes (even if they’ve checked “I agree” to the terms and conditions). This has big implications for privacy that organizations should think about, especially as new regulations are being developed such as the EU’s General Data Protection Regulation (GDPR). The commercial sector is great at this type of data analysis, but the development set are only just starting to get more sophisticated at it.
  4. ‘Big’ data. In addition to data generated ‘seamlessly’ by platforms and applications, there are also ‘big data’ and data that exist on the Internet that can be ‘harvested’ if one only knows how. The term ‘big data’ describes the application of analytical techniques to search, aggregate, and cross-reference large data sets in order to develop intelligence and insights. (See this post for a good overview of big data and some of the associated challenges and concerns). Data harvesting is a term used for the process of finding and turning ‘unstructured’ content (message boards, a webpage, a PDF file, Tweets, videos, comments) into ‘semi-structured’ data so that it can then be analyzed; a rough sketch of what this can look like follows this list. (Estimates are that 90 percent of the data on the Internet exists as unstructured content). Currently, big data seems to be more apt for predictive modeling than for looking backward at how well a program performed or what impact it had. Development and humanitarian organizations (self included) are only just starting to better understand concepts around big data and how it might be used for MERL. (This is a useful primer).
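
As a rough illustration of the ‘harvesting’ step described in the fourth bucket, the sketch below pulls the text out of a single web page and turns each paragraph into a simple record. The URL, keyword list and field names are hypothetical placeholders; real harvesting pipelines (for tweets, PDFs, message boards and so on) are considerably more involved.

```python
# Sketch: turning unstructured web content into semi-structured records.
# The URL and keyword list are placeholders for illustration only.
import requests
from bs4 import BeautifulSoup

KEYWORDS = ["flood", "drought", "displacement"]  # hypothetical topics of interest

response = requests.get("https://example.org/situation-report")  # placeholder URL
soup = BeautifulSoup(response.text, "html.parser")

records = []
for i, paragraph in enumerate(soup.find_all("p")):
    text = paragraph.get_text(strip=True)
    if not text:
        continue
    records.append({
        "paragraph_id": i,
        "text": text,
        "word_count": len(text.split()),
        "topics": [kw for kw in KEYWORDS if kw in text.lower()],
    })

# 'records' is now semi-structured: a list of dicts that can be loaded into a
# dataframe, filtered by topic, or counted over time.
print(f"Harvested {len(records)} paragraphs, "
      f"{sum(1 for r in records if r['topics'])} mentioning a topic of interest")
```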

Thinking about these four buckets of data can help MERL practitioners to identify data sources and how they might complement one another in a MERL plan. Categorizing them as such can also help to map out how the different kinds of data will be responsibly collected/found/harvested, stored, shared, used, and maintained/ retained/ destroyed. Each type of data also has certain implications in terms of privacy, consent and use/re-use and how it is stored and protected. Planning for the use of different data sources and types can also help organizations choose the data management systems needed and identify the resources, capacities and skill sets required (or needing to be acquired) for modern MERL.

Organizations and evaluators are increasingly comfortable using mobiles and/or tablets to do traditional data gathering, but they often are not using ‘found’ datasets. This may be because these datasets are not very ‘find-able,’ because organizations are not creating them, because re-using data is not a common practice for them, because the data are of questionable quality/integrity or lack descriptors, or for a variety of other reasons.

The use of ‘seamless’ data is something that development and humanitarian agencies might want to get better at. Even though large swaths of the populations that we work with are not yet online, this is changing. And if we are using digital tools and applications in our work, we shouldn’t let that data go to waste if it can help us improve our services or better understand the impact and value of the programs we are implementing. (At the very least, we had better understand what seamless data the tools, applications and platforms we’re using are collecting, so that we can manage our users’ data privacy and security and ensure these are not being violated by third parties!)

Big data is also new to the development sector, and there may be good reason it is not yet widely used. Many of the populations we are working with are not producing much data — though this is also changing as digital financial services and mobile phone use become almost universal and the use of smartphones is on the rise. Normally organizations require new knowledge, skills, partnerships and tools to access and use existing big data sets or to do any data harvesting. Some say that big data along with ‘seamless’ data will one day replace our current form of MERL. As artificial intelligence and machine learning advance, who knows… (and it’s not only MERL practitioners who will be out of a job – but that’s a conversation for another time!)

Not every organization needs to be using all four of these kinds of data, but we should at least be aware that they are out there and consider whether they are of use to our MERL efforts, depending on what our programs look like, who we are working with, and what kind of MERL we are tasked with.

I’m curious how other people conceptualize their buckets of data, and where I’ve missed something or defined these buckets erroneously…. Thoughts?

Better or different or both?

by Linda Raftree, Independent Consultant and MERL Tech Organizer

As we delve into why, when, where, if, and how to incorporate various types of technology and digital data tools and approaches into monitoring, evaluation, research and learning (MERL), it can be helpful to think about MERL technologies from two angles:

  1. Doing our work better:  How can new technologies and approaches help us do what we’ve always done — the things that we know are working and having an impact — but do them better? (E.g., faster, with higher quality, more efficiently, less expensively, with greater reach or more inclusion of different voices)
  2. Doing our work differently:  What brand new, previously unthinkable things can be done because of new technologies and approaches? How might these totally new ideas contribute positively to our work or push us to work in an entirely different way?

Sometimes these two things happen simultaneously and sometimes they do not. Some organizations are better at Thing 1, and others are set up well to explore Thing 2. Not all organizations need to feel pressured into doing Thing 2, however, and sometimes it can be a distraction from Thing 1. Some organizations may be better off letting early adopters focus on Thing 2 and investing their own budgets and energy in Thing 1 until innovations have been tried and tested by the early adopters. Organizations may also have staff members or teams working on both Thing 1 and Thing 2 separately. Others may conceptualize this as a process or pathway moving from Thing 2 to Thing 1, where Thing 2 (once tested and evaluated) is a pipeline into Thing 1.

Here are some potentially useful past discussions on the topic of innovations within development organizations that flesh out some of these thoughts:

Many of the new tools and approaches that were considered experimental 10 years ago have moved from being “brand new and innovative” to simply “helping us do what we’ve always done.” Some of these earlier “innovations” are related to digital data and data collection and processing, and they help us do better monitoring, evaluation and research.

On the flip side, monitoring, evaluation and research have played a key role in helping organizations and the sector overall learn more about how, where, when, why and in what contexts these different tools and approaches (including digital data for MERL) can be adopted. MERL on ICT4D and Digital Development approaches can help calibrate the “hype cycle” and weed out the shiny new tools and approaches that are actually not very effective or useful to the sector and highlight those that cause harm or put people at risk.

There are always going to be new tools and approaches that emerge. Humanitarian and development organizations, then, need to think strategically about what kind of organization they are (or want to be) and where they fit on the MERL Tech continuum between Thing 1 and Thing 2.

What capacities does an organization have for working on Thing 2 (brand new and different)? When and for how long should an organization focus on Thing 1, building on what it knows is working or could work, while keeping an eye on the early adopters who are working on Thing 2? When does an organization have enough “proof” to start adopting new tools and approaches that seem to add value? How are these new tools and approaches being monitored, evaluated and researched to improve our use of them?

It’s difficult for widespread adoption to happen in the development space, where there is normally limited time and capacity for failure or for experimentation, without solid MERL. And even with “solid MERL” it can be difficult for organizations to adapt and change due to a multitude of factors, both internal and external.

I’m looking forward to September’s MERL Tech Conference in DC where we have some sessions that explore “the MERL on ICT4MERL?” and others that examine aspects of organizational change related to adopting newer MERL Tech tools and approaches.

(Register here if you haven’t already!)

Discrete choice experiment (DCE) to generate weights for a multidimensional index

In his MERL Tech Lightning Talk, Simone Lombardini, Global Impact Evaluation Adviser, Oxfam, discussed his experience with an innovative method for applying tech to help determine appropriate metrics for measuring concepts that escape easy definition. To frame his talk, he referenced Oxfam’s recent experience with using discrete choice experiments (DCE) to establish a strategy for measuring women’s empowerment.

Two methods already exist, Simone points out, for transforming soft concepts into hard metrics. First, the evaluator could assume full authority and responsibility over defining the metrics. Alternatively, the evaluator could design the evaluation so that relevant stakeholders are incorporated into the process and use their input to help define the metrics.

Though both methods are common, they are missing (for practical reasons) the level of mass input that could make them truly accurate reflections of the social perception of whatever concept is being considered. Tech has a role to play in scaling the quantity of input that can be collected. If used correctly, this could lead to better evaluation metrics.

Simone described this approach as “context-specific” and “multi-dimensional.” The process starts by defining the relevant characteristics (such as those found in empowered women) in their social context, then translating these characteristics into indicators, and finally combining indicators into one empowerment index for evaluating the project.

After the characteristics are defined, a discrete choice experiment can be used to determine each characteristic’s “weight” in a particular social context. A discrete choice experiment (DCE) is a technique that has frequently been used in health economics and marketing, but not much in impact evaluation. To implement a DCE, researchers present different hypothetical scenarios to respondents and ask them to decide which one they consider best reflects the concept in question (in this case, women’s empowerment). The responses are used to assess the indicators covered by the DCE, and these can then be used to develop an empowerment index.
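
To make the mechanics concrete, here is a minimal sketch of how DCE responses might be turned into indicator weights. It assumes a hypothetical dataset of paired choices (each respondent picks the profile they consider more “empowered”) and relies on the standard result that a two-alternative conditional logit can be estimated as a logistic regression on the difference in attributes; it illustrates the general approach, not Oxfam’s actual analysis.

```python
# Sketch: estimating indicator weights from discrete choice experiment (DCE) data.
# Hypothetical data: each row is one choice task; columns give the attributes
# (0/1 indicators) of profile A and profile B, and 'chose_a' records the pick.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

ATTRIBUTES = ["earns_income", "controls_spending", "participates_in_groups", "freedom_of_movement"]

df = pd.read_csv("dce_responses.csv")  # hypothetical file name

# For a two-alternative choice, the conditional logit reduces to a logistic
# regression (with no intercept) on the difference between the two profiles.
X = df[[f"{a}_A" for a in ATTRIBUTES]].values - df[[f"{a}_B" for a in ATTRIBUTES]].values
y = df["chose_a"].values

model = LogisticRegression(fit_intercept=False).fit(X, y)

# Clip negative coefficients and normalise so the weights sum to 1; the weights
# can then be used to combine the indicators into a single empowerment index.
weights = np.maximum(model.coef_.ravel(), 0)
weights = weights / weights.sum()
for attr, w in zip(ATTRIBUTES, weights):
    print(f"{attr}: weight {w:.2f}")
```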

This process was integrated into the data collection process, adding about 10 minutes at the end of a one-hour survey, and was made practicable by the ubiquity of smartphones. The results from Oxfam’s trial run using this method are still being analyzed. For more on this, watch Lombardini’s video below!

Community-led mobile research: What could it look like?

Adam Groves, Head of Programs at On Our Radar, gave a presentation at MERL Tech London in February where he elaborated on a new method for collecting qualitative ethnographic data remotely.

The problem On Our Radar sought to confront, Adam explained, is the cold and impenetrable bureaucratic machinery of complex organizations. To many people, the unresponsiveness and inhumanity of the bureaucracies that provide them with services is dispiriting, and this is a challenge to overcome for anyone who wants to provide a quality service.

On Our Radar’s solution is to enable people to share their real-time experiences of services by recording audio and SMS diaries with their basic mobile phones. Because of the intimacy they capture, these first-person accounts have the capacity to grab the attention of the people behind services and make them listen to and experience the customer’s thoughts and feelings as they happened.

Responses obtained from audio and SMS diaries are different from those obtained from other qualitative data collection methods because, unlike solutions that crowdsource feedback, these diaries contain responses from a small group of trained citizen reporters that share their experiences in these diaries over a sustained period of time. The product is a rich and textured insight into the reporters’ emotions and priorities. One can track their journeys through services and across systems.

On Our Radar worked with British Telecom (BT) to implement this technique. The objective was to help BT understand how their customers with dementia experience their services. Over a few weeks, forty people living with dementia recorded audio diaries about their experiences dealing with big companies.

Adam explained how the audio diary method was effective for this project:

  • Because diaries and dialogues are in real time, they captured emotional highs and lows (such as the anxiety of picking up the phone and making a call) that would not be recalled in after-the-fact interviews.
  • Because diaries are focused on individuals and their journeys instead of on discrete interactions with specific services, they showed how encountering seemingly unrelated organizations or relationships impacted users’ experiences of BT. For example, cold calls became terrifying for people with dementia and made them reluctant to answer the phone for anyone.
  • Because this method follows people’s experiences over time, it allows researchers to place individual pain points and problems in the context of a broader experience.
  • Because the data is in first person and in the moment, it moved people emotionally. Data was shared with call center staff and managers, and they found it compelling. It was an emotional human story told in one’s own words. It invited decision makers to walk in other people’s shoes.

On Our Radar’s future projects include working in Sierra Leone with local researchers to understand how households are changing their practices post-Ebola and a major piece of research with the London School of Hygiene and Tropical Medicine in Malaysia and the Philippines to gain insight on people’s understanding of their health systems.

For more, find a video of Adam’s original presentation below!

New Report: Global Innovations in Measurement and Evaluation

On June 26th, New Philanthropy Capital (NPC) released its “Global Innovations in Measurement and Evaluation” report. In it, NPC outlines and elaborates on eight concepts that represent innovations in conducting effective measurement and evaluation of social impact programs. The list of concepts was distilled from conversations with leading evaluation experts about what is exciting in the field and what is most likely to make a long-lasting impact on the practice of evaluation. Below, we feature each of these eight concepts accompanied by brief descriptions of their meanings and implications.

User-Centric

The key to making an evaluation user-centric is to ensure that the service users are truly involved in every stage of the evaluation process. In this way, the power dynamic ceases to be unidirectional as more agency is given to the user. As a result, not only can findings become more compelling to decision makers because of more robust data collection, but also those responsible for the program now become accountable to the users in addition to the funders, a shift that is both ethically important and that is important for the trust it builds.

Shared Measurement & Evaluation

Shared measurement and evaluation requires multiple organizations with similar missions, programs or users to work together to measure their own and their combined impact. This involves using the same evaluation metrics and, at a more advanced stage, developing shared measurement tools and methodologies. Pooling data and comparing outcomes creates a bigger dataset that can support stronger conclusions and provide more insights.

Theory-Based Evaluation

The central idea behind theory-based evaluation is to not only measure the outcome of a program but to also get at the reason why it does or does not work. Typically, this approach begins with a theory of change that proposes an explanation for how activities lead to impact, and this theory is then tested and accepted, refuted or qualified. It is important to apply this concept because without an understanding of why programs work, there is a risk that mistakes will be repeated or that attempts to replicate a program will fail when attempted under different conditions.

Impact Management

Impact management is the integration of impact assessment into strategy and performance management by regularly collecting data and responding to it with course corrections designed to improve the outcomes of a program. This method contrasts with assessment strategies that only examine a program at the end of its life cycle. The objective here is to be flexible and adaptive in order to produce a more effective intervention rather than waiting to evaluate it until there is nothing that can be done to change it.

Data Linkage

Data linkage is the act of bringing together different but relevant data about a specified group of users from beyond a single organization or sub-sector dataset. One example could be a homelessness charity that supports its users in accessing social housing and links its data with the local council’s records to see whether those users ultimately remained in their homes. In essence, this method allows organizations to leverage the increasing quantities of data to create comparison groups and track the long-term impacts of their programs.
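
A minimal sketch of what such a linkage could look like in practice is below, assuming two hypothetical extracts (the charity’s case records and the council’s housing records) that share a pseudonymous identifier. Real linkage work also needs consent, data-sharing agreements and more careful matching.

```python
# Sketch: linking a charity's case records to a council's housing records
# on a shared pseudonymous ID, to see whether users remained in their homes.
import pandas as pd

charity = pd.read_csv("charity_cases.csv")    # hypothetical: person_id, support_end_date
council = pd.read_csv("council_housing.csv")  # hypothetical: person_id, still_in_tenancy_12m

linked = charity.merge(council, on="person_id", how="left")

# Users not found in the council data are treated as unknown rather than negative.
match_rate = linked["still_in_tenancy_12m"].notna().mean()
tenancy_rate = linked["still_in_tenancy_12m"].mean()
print(f"Matched {match_rate:.0%} of users; of those, "
      f"{tenancy_rate:.0%} were still in their homes after 12 months")
```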

Big Data

Big data is typically considered as the data generated as a by-product of digital transactions and interactions. It is a category that includes people’s social media activity, web searches and digital financial transaction trails. New technology has expanded the human ability to analyze large datasets, and consequently big data has become a powerful tool for helping identify trends and patterns, even if it does not provide explanations for them.

Remote Sensing

Remote sensing uses technology, such as mobile phones, to gather information from afar. This method is useful because it allows one to collect data that may not be typically accessible. Additionally, remote sensing data can be highly detailed, accurate, and available in real time. Finally, one of its great strengths is that it is generated passively, which reduces the possibility of introducing researcher bias through human input.

Data Visualization

Data visualization is the practice of presenting data in a graphic form. New technology has made it possible to create a broad range of useful visualizations. The result is that data is now more accessible to non-specialists, and the insights produced through analysis can now be better understood and communicated.

For more details and more examples of real-world applications of these concepts, check out the full “Global Innovations in Measurement and Evaluation” report here.

Cost-benefit comparisons of IVR, SMS, and phone survey methods

In his MERL Tech London Lightning Talk back in February, Jan Liebnitzky of Firetail provided a research-backed assessment of the costs and benefits of using interactive voice response surveys (IVR), SMS surveys, and phone surveys for MERL purposes.

First, he outlined the opportunities and challenges of using phones for survey research:

  • They are a good means for providing incentives. And research shows that incentives don’t have to be limited to airtime credits. The promise of useful information is sometimes the best motivator for respondents to participate in surveys.
  • They are less likely to reach certain subgroups. Though mobile phones are ubiquitous, one challenge is that groups like women, illiterate people and people in low-connectivity areas do not always have access to them. Thus, phones may not be as effective as one would hope for reaching the people most often targeted by aid programs.
  • They are scalable and have expansive reach. Scripting and outsourcing phone-based surveys to call centers takes time and capacity. Fixed costs are high, while the marginal cost of each new question or respondent is low. This means that they can be cost effective (compared to on-the-ground surveys) if implemented at a large scale or in remote and high-risk areas with problematic access.

Then, Jan shared some strategies for using phones for MERL purposes:

1. Interactive Voice Response Surveys

    • These are pre-recorded and automated surveys. Respondents can reply to them by voice or with the numerical keypad.
    • IVR has been used in interactive radio programs in Tanzania, where listening posts were established for the purpose of interacting with farmers. Listening posts are multi-channel, web-based platforms that gather and analyze feedback and questions from farmers who listen to particular radio shows. The radio station runs the IVR, and farmers can call in to the radio show to submit their questions or responses. These are effective because they are run through trusted radio shows. However, it is important that farmers receive answers to the questions they ask, as this incentivizes future participation.

2. SMS Surveys

    • These make use of mobile messaging capabilities to send questions and receive answers. Usually, the SMS survey respondent will either choose between fixed multiple choice answers or write a freeform response. Responses, however, are limited to 160 characters.
    • One example of this is U-Reporter, a free SMS social monitoring tool for community participation in Uganda. Polls are sent to U-Reporters who answer back in real time, and the results are then shared back with the community.

3. Phone Surveys

    • Phone surveys are run through call centers by enumerators. They function like face-to-face interviews, but over the phone.
    • As an example, phone surveys were used as a monitoring tool by an agriculture extension services provider. Farmers in the area subscribed to receive texts from the provider with tips about when and how to plant crops. From the list of subscribers, prospective respondents were sampled and in-country call centers were contracted to call up to 1,000 service users to inquire about quality of service, behaviour changes and adoption of new farming technologies.
    • The challenges here were that the data were only as good as the call staff’s training. Also, there was an 80% drop-off rate, partly due to the language limitations of the call staff.

Finally, Jan provided a rough cost and effectivity assessment for each method:

  • IVR survey: medium cost, high response
  • SMS survey: low cost, low response
  • Phone survey: high cost, medium response

Jan closed with a question: What is the value of these methods for MERL?

His answer: The surveys are quick and dirty and, to their merit, they produce timely data from remote areas at a reasonable cost. If the data is made use of, it can be effective for monitoring. However, these methods are not yet adequate for use in evaluation.

For more, watch Jan’s Lightning Talk below!

Focus on the right users to avoid an M&E apocalypse

In his MERL Tech London Lightning Talk, George Flatters from the Open Data Institute told us that M&E is extractive. “It takes data from poor communities, it refines it, and it sells it to rich communities,” he noted, and this process is unsustainable. The ease of deploying a survey means that there are more and more surveys being administered. This leads to survey fatigue, and when people stop wanting to take surveys, the data quality suffers, leading to an M&E apocalypse.

George outlined 4 ways to mitigate against doomsday:

1) Understand the problem: who is doing what, where?

At the moment, no one can be totally sure about which NGOs are doing what data collection and where. What is needed is the development equivalent of the Humanitarian Data Exchange: a way to centralize and share all collected development data. Besides the International Household Survey Catalog and NGO Aid Map (which serve a similar function, but to a limited degree), no such central location exists. With it, the industry could avoid duplication and maximize the use of its survey-administering resources.

2) Share more and use existing data

Additionally, with access to a large and comprehensive database such as this, the industry could greatly expand the scope of analysis done with the same set of data. This, of course, should be paired with the appropriate privacy considerations. For example, the data should be anonymized. Generally, a balance must be struck between accessibility and ethics. The Open Data Institute has a useful framework for thinking about how different data should be governed and shared.

3) Focus on the right users

One set of users is the data-collectors at the head office of an NGO. There are M&E solutions that will make their lives easier. However, attention must also be given to the people in communities providing the data. We need to think about how to make their lives easier as well.

4) Think like a multinational tech corporation (and/or get their data)

These corporations do not sit there and think about how to extract the maximum amount of data; they consider how they can provide quality services that will attract customers. Most of their data is obtained through the provision of services. Similarly, the question here should be, “what M&E services can we provide and receive data as a byproduct?” Examples include: cash transfers, health visits, app download and usage, and remote watch sensing.

These principles can help minimize the amount of effort spent on extracting data, alleviate the strain placed on those who provide the data, and stave off the end of days for a little longer.

Watch George’s Lightning Talk for some additional tips!

Exploring Causality in Complex Situations using EvalC3

By Hur Hassnain, Monitoring, Evaluation, Accountability and Learning Adviser, War Child UK

At the 2017 MERL Tech London conference, my team and I gave a presentation that addressed the possibilities for and limitations of evaluating complex situations using simple Excel-based tools. The question we explored was: can Excel help us manipulate data to create predictive models and suggest promising avenues to project success? Our basic answer was “not yet,” at least not to its full extent. However, there are people working with accessible software like Excel to make analysis simpler for evaluators with less technical expertise.

In our presentation, Rick Davies, Mark Skipper and I showcased EvalC3, an Excel-based evaluation tool that enables users to easily identify sets of attributes in a project dataset and to then compare and evaluate the relevance of these attributes to achieving the desired outcome. In other words, it helps answer the question ‘what combination of factors helped bring about the results we observed?’ In the presentation, after we explained what EvalC3 is and gave a live demonstration of how it works, we spoke about our experience using it to analyze real data from a UNICEF-funded War Child UK project in Afghanistan that helps children who have been deported back to Afghanistan from Iran.

Our team first learned of EvalC3 when, upon returning from a trip to our Afghanistan country programme, we discussed how our M&E team in Afghanistan uses Excel for storing and analysing data but is not able to use the software to explore or evaluate complex causal configurations. We reached out to Rick with this issue, and he introduced us to EvalC3. It sounded like the solution to our problem, and our M&E officer in Afghanistan decided to test it by using it to dig deeper into an Excel database he’d created to store data on one thousand children who were registered when they were deported to Afghanistan.  

Rick, Hosain Hashmi (our M&E Officer in Afghanistan) and I formed a working group on Skype to test-drive EvalC3. First, we needed to clean the data. To do this, we asked our social workers to contact the children and their caretakers to collect important missing data. Missing data is a common problem when collecting data in fragile and conflict-affected contexts like those where War Child works. Fortunately, we found that EvalC3 algorithms can work with some missing data, the tradeoff being slightly less accurate measures of model performance. Compare this to other algorithms (like Quine-McCluskey, used in QCA), which do not work at all if data is missing for some variables. We also had to reduce the number of dimensions we used. If we did not, there could be millions of attribute combinations that were possible outcome predictors, and an algorithm could not search all of these possibilities in a reasonable span of time. This exercise spoke to M. A. Munson’s observation that “model building only consumes 14% of the time spent on a typical [data mining] project; the remaining time is spent on the pre and post processing steps”.
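
For readers unfamiliar with this style of analysis, the sketch below shows the general idea of searching combinations of attributes for good outcome predictors. It is a conceptual illustration in Python, not EvalC3 itself (which runs in Excel), and the file and attribute names are hypothetical stand-ins for the kind of binary attributes described above.

```python
# Sketch: exhaustive search for attribute combinations that predict an outcome,
# in the spirit of configurational tools like EvalC3 (which is Excel-based).
from itertools import combinations
import pandas as pd

df = pd.read_csv("children_registered.csv")  # hypothetical: binary attributes + outcome
ATTRIBUTES = ["living_with_friends", "did_farm_work_in_iran",
              "completed_vocational_training", "child_headed_household"]
OUTCOME = "returned_to_iran"

results = []
for size in range(1, len(ATTRIBUTES) + 1):
    for combo in combinations(ATTRIBUTES, size):
        # A case is 'covered' by a candidate model if it has every attribute in the
        # combination (in practice, negated conditions such as 'did NOT do farm work'
        # would be included as attributes in their own right).
        covered = df[list(combo)].all(axis=1)
        if covered.sum() == 0:
            continue
        precision = df.loc[covered, OUTCOME].mean()  # how often covered cases show the outcome
        recall = covered[df[OUTCOME] == 1].mean()    # share of outcome cases that are covered
        results.append({"model": combo, "precision": precision, "recall": recall})

# Rank candidate models; the best ones are then examined case by case, as the
# War Child team did, to probe the causal mechanisms behind them.
top = sorted(results, key=lambda r: (r["precision"], r["recall"]), reverse=True)[:5]
for r in top:
    print(r)
```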

With a few weeks of work on the available dataset of children deported from Iran, we found that the children who are most likely to go back to Iran for economic purposes are mainly the children who:

  • Are living with friends (instead of with relatives/caretakers)
  • Had not been doing farming work when they were in Iran
  • Had not completed three months of vocational training
  • Are from adult-headed households (instead of from child-headed households).

As the project is still ongoing, we will continue to investigate the cases covered by the model described here in order to better understand the causal mechanisms at work.

This experience of using EvalC3 encouraged War Child to refine the data it routinely collects, with a view to developing a better understanding of where War Child interventions help or don’t help. The in-depth data-mining process and analysis conducted by the national M&E Officer and programmes team resulted in an improved understanding of the results we can achieve by analyzing quality data. EvalC3 is a user-friendly evaluation tool that is not only useful for improving current programmes but also for designing new, evidence-based programmes.

Using an online job platform to understand gender dynamics in the Mozambican informal labour market

Oxford Policy Management Consultant Paul Jasper’s self-professed professional passion is exploring new ways to use data to improve policy making in the Global South. At MERL Tech London in 2016, he presented on two tech-driven initiatives in Mozambique: Muva and Biscate. In his talk, he explained why these are two great examples of how programming and social policy can benefit from innovations and data coming from the tech sector.

Muva is a program that aims at testing new ways of improving women’s access to economic opportunities. It works primarily with women and girls in urban areas of Mozambique and across all sectors of the economy. Its beneficiaries include employees, self-employed people, and micro-entrepreneurs. While conducting its work, the program recognized that one key characteristic of the Mozambican economy is that the informal sector is pervasive: over 90% of new jobs in sub-Saharan Africa were produced in the informal sector. The challenge for organizations like Muva is that, because the sector is informal, there is very little quantitative data about it, and analysts are not quite sure how it works, what dynamics are in operation, or what role gender plays.

This is where UX, a startup based in Maputo, was able to step in. The startup noted that the majority of jobs in the informal sector were assigned in an old-fashioned way: people put up signs with a telephone number advertising their service. They came up with a USSD-based solution called Biscate. Biscate is a service that allows workers to register on the platform using normal mobile phones (few people have access to smartphones) and set up advertising profiles with their educational status and skills. Clients can then check the platform to find people offering a service they need and can leave reviews and ratings about the service they received.

UX has soft-launched the platform and registered 30,000 workers across the country. Since its launch, Biscate has provided unique data and insights into the labor market and helped close the data and information gap that plagued the informal sector. Muva can use this information to improve women’s access to opportunities. The organizations have partnered and started a pilot with three objectives:

  1. To understand dynamics in the informal labor market.
  2. To test whether Muva’s approaches can be used to help women become more successful in the informal sector.
  3. To influence policy makers that want to develop similar programs by producing valuable and up to date lessons.

See Paul’s presentation below if you’d like to learn more!