The future of development evaluation in the age of big data


By Michael Bamberger, Independent Evaluation Consultant. Michael has been involved in development evaluation for 50 years and recently wrote the report “Integrating Big Data into the Monitoring and Evaluation of Development Programs” for UN Global Pulse.

We are living in an increasingly quantified world.

Multiple sources of data can now be generated and analyzed in real time. They can be synthesized to capture complex interactions among data streams and to identify previously unsuspected linkages among seemingly unrelated factors [such as the purchase of diapers and increased sales of beer]. We can now quantify and monitor ourselves, our houses (even the contents of our refrigerator!), our communities, our cities, our purchases and preferences, our ecosystem, and multiple dimensions of the state of the world.
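As a minimal illustration of how such linkages can be surfaced, the sketch below computes the "lift" between two purchase items across a handful of invented transactions. The data, item names and the interpretation threshold are illustrative assumptions, not a real analysis.

```python
# Minimal sketch of the co-occurrence ("market basket") analysis behind the
# diapers-and-beer example. The transactions and item names are invented.
from itertools import combinations
from collections import Counter

transactions = [
    {"diapers", "beer", "milk"},
    {"diapers", "beer"},
    {"bread", "milk"},
    {"diapers", "milk"},
    {"beer", "chips"},
]

n = len(transactions)
item_counts = Counter(item for basket in transactions for item in basket)
pair_counts = Counter(
    pair for basket in transactions for pair in combinations(sorted(basket), 2)
)

def lift(a, b):
    """Ratio of observed co-occurrence to what independence would predict."""
    support_ab = pair_counts[tuple(sorted((a, b)))] / n
    return support_ab / ((item_counts[a] / n) * (item_counts[b] / n))

print(f"lift(diapers, beer) = {lift('diapers', 'beer'):.2f}")  # > 1 suggests a linkage
```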

These rich sources of data are becoming increasingly accessible to individuals, researchers and businesses through huge numbers of mobile phone and tablet apps and user-friendly data analysis programs.

The influence of digital technology on international development is growing.

Many of these apps and other big data/data analytics tools are now being adopted by international development agencies. Due to their relatively low cost, ease of application, and accessibility in remote rural areas, these approaches are proving particularly attractive to non-profit organizations, and the majority of NGOs probably now use some kind of mobile phone app.

Apps are widely used for early warning systems, emergency relief, dissemination of information (to farmers, mothers, fishermen and other groups with limited access to markets), identifying and collecting feedback from marginal and vulnerable groups, and permitting rapid analysis of poverty. Data analytics are also used to create integrated databases that synthesize all of the information on topics as diverse as national water resources, human trafficking, updates on conflict zones, climate change and many other development topics.

Table 1: Widely used big data/data analytics applications in international development

Each application below is followed by the big data/data analytics tools commonly used for it.

Early warning systems for natural and man-made disasters
  • Analysis of Twitter, Facebook and other social media
  • Analysis of radio call-in programs
  • Satellite images and remote sensors
  • Electronic transaction records [ATM, on-line purchases]
Emergency relief
  • GPS mapping and tracking
  • Crowd-sourcing
  • Satellite images
Dissemination of information to small farmers, mothers, fishermen and other traders
  • Mobile phones
  • Internet
Feedback from marginal and vulnerable groups and on sensitive topics
  • Crowd-sourcing
  • Secure hand-held devices [e.g. UNICEF’s “U-Report” device]
Rapid analysis of poverty and identification of low-income groups
  • Analysis of phone records
  • Social media analysis
  • Satellite images [e.g. using thatched roofs as a proxy indicator of low-income households]
  • Electronic transaction records
Creation of an integrated database synthesizing the multiple sources of data on a development topic, for example:
  • National water resources
  • Human trafficking
  • Agricultural conditions in a particular region
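As a concrete illustration of the "early warning" row in Table 1, the sketch below flags a day when the count of disaster-related social media posts rises well above its recent baseline. The daily counts, window and threshold are invented for illustration; operational systems rely on far richer streaming and language analysis.

```python
# Minimal sketch of the early-warning idea in Table 1: flag a day when the count
# of disaster-related social media posts spikes well above its recent baseline.
# The daily counts, window and threshold below are illustrative assumptions.
from statistics import mean, stdev

daily_flood_mentions = [12, 9, 15, 11, 10, 13, 14, 95]  # last value is a spike
window = 7           # days used to estimate the baseline
threshold_sigma = 3  # how far above the baseline counts as an alert

baseline = daily_flood_mentions[-(window + 1):-1]
mu, sigma = mean(baseline), stdev(baseline)
today = daily_flood_mentions[-1]

if today > mu + threshold_sigma * sigma:
    print(f"ALERT: {today} mentions today vs. baseline {mu:.1f} +/- {sigma:.1f}")
else:
    print("No anomaly detected")
```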


Evaluation is lagging behind.

Surprisingly, program evaluation is lagging behind in the adoption of big data/analytics. The few available studies report that a high proportion of evaluators are not very familiar with big data/analytics, and significantly fewer report having used big data in their professional evaluation work. Furthermore, while many international development agencies have created data development centers within the past few years, many of these are staffed by data scientists (many of whom have limited familiarity with conventional evaluation methods), and institutional links to agency evaluation offices are weak.

A recent study on the current status of the integration of big data into the monitoring and evaluation of development programs identified a number of reasons for the slow adoption of big data/analytics by evaluation offices:

  • Weak institutional links between data development centers and evaluation offices
  • Differences of methodology and the approach to data generation and analysis
  • Issues concerning data quality
  • Evaluators’ concerns about the commercial, political and ethical dimensions of how big data is generated, controlled and used.

(Linda Raftree talks about a number of other reasons why parts of the development sector may be slow to adopt big data.)

Key questions for the future of evaluation in international development…

The above gives rise to two sets of questions concerning the future role of evaluation in international development:

  • The future direction of development evaluation. Given the rapid expansion of big data in international development, it is likely there will be a move towards integrated program information systems. These will begin to generate, analyze and synthesize data for program selection, design, management, monitoring, evaluation and dissemination. A possible scenario is that program evaluation will no longer be considered a specialized function that is the responsibility of a separate evaluation office; rather, it will become one of the outputs generated from the program database. If this happens, evaluation may be designed and implemented not by evaluation specialists using conventional evaluation methods (experimental and quasi-experimental designs, theory-based evaluation) but by data analysts using methods such as predictive analytics and machine learning (a hedged sketch of what this might look like follows the key question below).

Key Question: Is this scenario credible? If so, how widespread will it become, and over what time horizon? Is it likely that evaluation will become one of the outputs of an integrated management information system? And if so, is it likely that many of the evaluation functions will be taken over by big data analysts?
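As a purely illustrative sketch of the scenario above, the snippet below trains a machine-learning model directly on hypothetical program monitoring indicators to predict which sites reach their outcome targets. The indicator names, data and model choice are assumptions, not a description of any existing agency system.

```python
# A hedged sketch of the scenario above: "evaluation" generated directly from a
# program database by a machine-learning model rather than by a separate study.
# The indicator names and data are hypothetical; real systems would draw on far
# richer monitoring records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical monitoring indicators for 200 program sites:
# [attendance rate, staff per beneficiary, months of operation]
X = rng.uniform([0.3, 0.01, 6], [1.0, 0.10, 36], size=(200, 3))
# Hypothetical outcome: did the site reach its target? (here loosely tied to attendance)
y = (X[:, 0] + rng.normal(0, 0.1, 200) > 0.7).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

# Feature importances hint at which indicators drive predicted success --
# suggestive, but not the causal evidence a conventional evaluation design targets.
model.fit(X, y)
print(dict(zip(["attendance", "staff_ratio", "months"], model.feature_importances_.round(2))))
```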

  • The changing role of development evaluators and the evaluation office. We argued above that currently many, perhaps most, development evaluators are not very familiar with big data/analytics, and even fewer apply these approaches. There are both professional reasons (how evaluators and data scientists are trained) and organizational reasons (the limited formal links between evaluation offices and data centers in many organizations) that explain the limited adoption of big data approaches by evaluators. So, assuming the above scenario proves to be at least partially true, what will be required for evaluators to become sufficiently conversant with these new approaches to be able to contribute to how big data-focused evaluation approaches are designed and implemented? According to Pete York at Communityscience.com, the big challenge and opportunity for evaluators is to ensure that the scientific method becomes an essential part of the data analytics toolkit. Recent studies by the Global Environment Facility (GEF) illustrate some of the ways that big data from sources such as satellite images and remote sensors can be used to strengthen conventional quasi-experimental evaluation designs. In a number of evaluations, these data sources were combined with propensity score matching to select matched comparison samples for pretest-posttest designs evaluating the effectiveness of programs to protect forest cover or mangrove reserves (a simplified sketch follows below).
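The sketch below illustrates the general logic of that approach under purely hypothetical assumptions: the covariates (slope, distance to road, baseline forest cover), the simulated forest-cover changes and the sample size are all invented, and actual evaluations work with real satellite-derived measurements and more careful matching diagnostics.

```python
# A minimal sketch of the quasi-experimental approach described above: estimate
# propensity scores from (hypothetical) satellite-derived covariates, match each
# protected area to its nearest unprotected neighbour, then compare changes in
# forest cover. Data, covariates and sample size are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 300

# Hypothetical covariates per land unit: slope, distance to road (km), baseline forest cover
X = np.column_stack([
    rng.uniform(0, 30, n),     # slope (degrees)
    rng.uniform(0, 50, n),     # distance to nearest road (km)
    rng.uniform(0.2, 1.0, n),  # baseline forest cover share
])
treated = rng.integers(0, 2, n).astype(bool)      # True = inside a protected area
change = rng.normal(0.05 * treated - 0.02, 0.05)  # simulated change in forest cover share

# 1. Propensity score: probability of being protected, given the covariates
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2. Nearest-neighbour matching on the propensity score
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = change[~treated][idx.ravel()]

# 3. Difference in forest-cover change between protected units and their matches
att = change[treated].mean() - matched_controls.mean()
print(f"Estimated effect of protection on forest-cover change: {att:.3f}")
```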

Key Question: Assuming there will be a significant change in how the evaluation function is organized and managed, what will be required to bridge the gap between evaluators and data analysts? How likely is it that evaluators will be able to assume this new role, and how likely is it that organizations will make the necessary adjustments to facilitate these transformations?

What do you think? How will these scenarios play out?

Note: Stay tuned for Michael’s next post focusing on how to build bridges between evaluators and big data analysts.

Below are some useful references if you’d like to read more on this topic:

Anderson, C. (2008) “The end of theory: The data deluge makes the scientific method obsolete.” Wired Magazine, 23 June 2008. The original article in the debate on whether big data analytics requires a theoretical framework.

Bamberger, M., Raftree, L. and Olazabal, V. (2016) “The role of new information and communication technologies in equity-focused evaluation: opportunities and challenges.” Evaluation, 22(2), 228–244. A discussion of the ethical issues and challenges of new information technology.

Bamberger, M. (2017) Integrating Big Data into the Monitoring and Evaluation of Development Programs. UN Global Pulse, with support from the Rockefeller Foundation. A review of progress in the incorporation of new information technology into development programs and of the opportunities and challenges of building bridges between evaluators and big data specialists.

Meier, P. (2015) Digital Humanitarians: How Big Data Is Changing the Face of Humanitarian Response. CRC Press. A review, with detailed case studies, of how digital technology is being used by NGOs and civil society.

O’Neil, C. (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown. How widely used digital algorithms negatively affect the poor and marginalized sectors of society.

Petersson, G.K. and Breul, J.D. (eds.) (2017) Cyber Society, Big Data and Evaluation. Comparative Policy Evaluation, Volume 24. Transaction Publishers. The evolving role of evaluation in cyber society.

Wolf, G. “The quantified self” [TED Talk]. A quick overview of the multiple self-monitoring measurements that you can collect on yourself.

World Bank (2016) World Development Report 2016: Digital Dividends. An overview of how the expansion of digital technology is affecting all areas of our lives.
