
Emerging Technologies: How Can We Use Them for MERL?

Guest post from Kerry Bruce, Clear Outcomes

A new wave of technologies and approaches has the potential to influence how monitoring, evaluation, research and learning (MERL) practitioners do their work. The growth in use of smartphones and the internet, digitization of existing data sets, and collection of digital data make data increasingly available for MERL activities. This changes how MERL is conducted and, in some cases, who conducts it.

We recently completed research on emerging technologies for use in MERL as part of a wider research project on The State of the Field of MERL Tech.

We hypothesized that emerging technology is revolutionizing the types of data that can be collected and accessed and the ways they can be processed and used for better MERL. However, improved research on and documentation of how these technologies are being used is required so the sector can better understand where, when, why, how, and for which populations and which types of MERL these emerging technologies are appropriate.

The team reviewed the state of the field and found there were three key new areas of data that MERL practitioners should consider:

  • New kinds of data sources, such as application data, sensor data, data from drones and biometrics. These types of data are providing more access to information and larger volumes of data than ever before.
  • New types of systems for data storage. The most prominent of these were distributed ledger technologies (also known as blockchain), along with increasing use of cloud and edge computing. We discuss the implications of these technologies for MERL.
  • New ways of processing data, mainly from the field of machine learning, specifically supervised and unsupervised learning techniques that could help MERL practitioners manage large volumes of both quantitative and qualitative data.

These new technologies hold great promise for making MERL practices more precise, automated and timely. However, some challenges include:

  • A need to clearly define problems so the choice of data, tool, or technique is appropriate
  • Non-representative selection bias when sampling
  • Reduced MERL practitioner or evaluator control
  • Change management needs as organizations adapt how they manage data
  • Rapid platform changes and difficulty with assessing the costs
  • A need for systems thinking which may involve stitching different technologies together

To address emerging challenges and make the best use of new data, tools, and approaches, we found a need for capacity strengthening among MERL practitioners, greater collaboration between social scientists and technologists, increased documentation, and the incorporation of more systems thinking.

Finally, there remains a need for greater attention to justice, ethics, and privacy in emerging technology.

Download the paper here!

Read the other papers in the series here!

The Hype Cycle of MERL Tech Knowledge Synthesis

Guest Post by Zach Tilton, Doctoral Research Associate, Interdisciplinary Ph.D. in Evaluation (IDPE), Western Michigan University

Would I be revealing too much if I said we initially envisioned and even titled our knowledge synthesis as a ‘rapid’ scoping review? Hah! After over a year and a half of collaborative research with an amazing team we likely have just as many findings about how (and how not) to conduct a scoping review as we do about the content of our review on traditional MERL Tech. I console myself that the average Cochrane systematic review takes 30 months to complete (while recognizing that is a more disciplined knowledge synthesis).

Looking back, I could describe our hubris and emotions during the synthesis process as following the trajectory of the Gartner Hype Cycle, a concept we draw on in our broader MERL Tech State of the Field research to conceptualize the maturity and adoption of technology. Our triggering curiosities about the state of the field were followed by multiple peaks of inflated expectations and troughs of disillusionment until we settled onto the plateau of productivity (and publication). We uncovered much about the nature of what we termed traditional MERL Tech, or tech-enabled systematic inquiry that allows us to do what we have always done in the MERL space, only better or differently.

One of our findings related to the possible relationship technologies have with the Gartner Hype Cycle. Based on a typology we developed as we started screening studies for our review, we found that the ratio of studies related to a specific MERL Tech to studies focused on that same MERL Tech provided an indirect measure of the trust researchers and practitioners had in that technology to deliver results, similar to the expectation variable on the Y axis of the Hype Cycle plane.

Briefly, in focused studies MERL Tech is under the magnifying glass; in related studies MERL Tech is the magnifying glass. When we observed specific technologies being regularly used to study other phenomena significantly more than they were themselves being studied, we inferred that these technologies were trusted more than others to deliver results. Conversely, when we observed a higher proportion of technologies being investigated rather than facilitating investigations, we inferred that these were less trusted to deliver results. In other words, coupled with higher reported frequency, the technologies with higher levels of trust could be viewed as farther along the hype cycle than those with lower levels of trust. Online surveys, geographic information systems, and quantitative data analysis software were among the most trusted technologies, with dashboards, mobile tablets, and real-time technologies among the least trusted.
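The related-to-focused ratio described above can be sketched as a small computation. This is a minimal illustration only: the technology names and study counts below are invented, not taken from the review.

```python
# Hypothetical study counts per technology from a screening exercise.
# "related": the technology is the instrument of a study (it IS the magnifying glass);
# "focused": the technology is itself the object of study (it is UNDER the magnifying glass).
study_counts = {
    "online surveys": {"related": 120, "focused": 15},
    "dashboards": {"related": 8, "focused": 20},
}

def trust_ratio(counts):
    """Related-to-focused ratio: higher values suggest a technology is
    trusted to deliver results rather than still being scrutinized."""
    return counts["related"] / counts["focused"]

for tech, counts in sorted(study_counts.items()):
    print(f"{tech}: trust ratio {trust_ratio(counts):.2f}")
```

Under these invented numbers, online surveys (ratio 8.0) would sit farther along the hype cycle than dashboards (ratio 0.4), in the same spirit as the pattern the review reports.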

To read a further explanation of this and other findings, conclusions, and recommendations from our MERL Tech State of the Field Scoping Review, download the white paper.

Read the other papers in the State of the Field of MERL Tech series.

New Research! The State of the Field of MERL Tech, 2014-2019

The year 2020 is a compelling time to look back and pull together lessons from five years of convening hundreds of monitoring, evaluation, research, and learning and technology practitioners who have joined us as part of the MERL Tech community. The world is in the midst of the global COVID-19 pandemic, and there is an urgent need to know what is happening, where, and to what extent. Data is a critical piece of the COVID-19 response — it can mean the difference between life and death. And technology use is growing due to stay-at-home orders and a push for “remote monitoring” and data collection from a distance.

At the same time, we’re witnessing (and I hope, also joining in with) a global call for justice — perhaps a tipping point — in the wake of decades of racist and colonialist systems that operate at the level of nations, institutions, organizations, the global aid and development systems, and the tech sector. There is no denying that these power dynamics and systems have shaped the MERL space as a whole, and the MERL Tech space as well.

Moments of crisis tend to test a field, and we live in extreme times. The coming decade will demand a nimble, adaptive, fair, and just use of data for managing complexity and for gaining longer-term understanding of change and impact. Perhaps most importantly, in 2020 and beyond, we need meaningful involvement of stakeholders at every level and openness to a re-shaping of our sector and its relationships and power dynamics.

It is in this time of upheaval and change that we are releasing a set of four papers that aim to take stock of the field from 2014-2019 as a launchpad for shaping the future of MERL Tech. In September 2018, the papers’ authors began reviewing the past five years of MERL Tech events to identify lessons, trends, and issues in this rapidly changing field. They also reviewed the literature base in an effort to determine what we know, what we still need to understand about technology in MERL, and where the gaps in the formal literature lie. This is no longer a nascent field, yet it is one that is hard to keep up with, given that it is fast paced and constantly shifting with the advent of new technologies. We have learned many lessons over the past five years, but complex political, technical, and ethical questions remain.

The State of the Field series includes four papers:

MERL Tech State of the Field: The Evolution of MERL Tech: Linda Raftree, independent consultant and MERL Tech Conference organizer.


What We Know About Traditional MERL Tech: Insights from a Scoping Review: Zach Tilton, Michael Harnar, and Michele Behr, Western Michigan University; Soham Banerji and Manon McGuigan, independent consultants; and Paul Perrin, Gretchen Bruening, John Gordley and Hannah Foster, University of Notre Dame; Linda Raftree, independent consultant and MERL Tech Conference organizer.

Big Data to Data Science: Moving from “What” to “How” in the MERL Tech Space: Kecia Bertermann, Luminate; Alexandra Robinson, Threshold.World; Michael Bamberger, independent consultant; Grace Lyn Higdon, Institute of Development Studies; Linda Raftree, independent consultant and MERL Tech Conference organizer.

Emerging Technologies and Approaches in Monitoring, Evaluation, Research, and Learning for International Development Programs: Kerry Bruce and Joris Vandelanotte, Clear Outcomes; and Valentine Gandhi, The Development CAFE and Social Impact.

Through these papers, we aim to describe the State of the Field up to 2019 and to offer a baseline point in time from which the wider MERL Tech community can take action to make the next phase of MERL Tech development effective, responsible, ethical, just, and equitable. We share these papers as conversation pieces and hope they will generate more discussion in the MERL Tech space about where to go from here.

We’d like to start or collaborate on a second round of research to delve into areas that were under-researched or less developed. Your thoughts are most welcome on topics that need more research, and if you are conducting research about MERL Tech, please get in touch and we’re happy to share here on MERL Tech News or to chat about how we could work together!

Big Data to Data Science: Moving from ‘What’ to ‘How’ in MERL

Guest post by Grace Higdon

Big data is a big topic in other sectors, but its application within monitoring and evaluation (M&E) is limited, with most reports focusing on its potential rather than its actual use. Our paper, “Big Data to Data Science: Moving from ‘What’ to ‘How’ in the MERL Tech Space,” probes trends in the use of big data between 2014 and 2019 by a community of early adopters working in monitoring, evaluation, research, and learning (MERL) in the development and humanitarian sectors. We focus on how MERL practitioners actually use big data and what encourages or deters adoption.

First, we collated administrative and publicly available MERL Tech conference data from the 281 sessions accepted for presentation between 2015 and 2019. Of these, we identified 54 sessions that mentioned big data and compared trends between sessions that did and did not mention this topic. In any given year from 2015 to 2019, 16 percent to 26 percent of sessions at MERL Tech conferences were related to the topic of big data. (Conferences were held in Washington DC, London, and Johannesburg).
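The per-year share of sessions mentioning big data can be computed with a simple grouping pass. The session records below are illustrative stand-ins, not the actual dataset of 281 accepted sessions:

```python
from collections import Counter

# Illustrative (invented) session records: (year, did the session mention big data?).
sessions = [
    (2015, True), (2015, False), (2015, False), (2015, False), (2015, True),
    (2016, False), (2016, True), (2016, False), (2016, False),
]

# Tally total sessions and big-data sessions per year.
totals = Counter(year for year, _ in sessions)
big_data = Counter(year for year, mentioned in sessions if mentioned)

for year in sorted(totals):
    share = 100 * big_data[year] / totals[year]
    print(f"{year}: {share:.0f}% of sessions mentioned big data")
```

With the real conference records in place of the invented ones, the same loop would yield the per-year shares reported above.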

Our quantitative analysis was complemented by 11 qualitative key informant interviews. We selected interviewees representing diverse viewpoints (implementers, donors, MERL specialists) and a range of subject matter expertise and backgrounds. During interviews, we explored why an interviewee chose to use big data, the benefits and challenges of using big data, reflections on the use of big data in the wider MERL tech community, and opportunities for the future.

Findings

Our findings indicate that MERL practitioners are in a fragmented, experimental phase, with use and application of big data varying widely, accompanied by shifting terminologies. One interviewee noted that “big data is sort of an outmoded buzzword” with practitioners now using terms such as ‘artificial intelligence’ and ‘machine learning.’ Our analysis attempted to expand the umbrella of terminologies under which big data and related technologies might fall. Key informant interviews and conference session analysis identified four main types of technologies used to collect big data: satellites, remote sensors, mobile technology, and M&E platforms, as well as a number of other tools and methods. Additionally, our analysis surfaced six main types of tools used to analyze big data: artificial intelligence and machine learning, geospatial analysis, data mining, data visualization, data analysis software packages, and social network analysis.

Barriers to adoption

We also took an in-depth look at barriers to and enablers of use of big data within MERL, as well as benefits and drawbacks. Our analysis found that perceived benefits of big data included enhanced analytical possibilities, increased efficiency, scale, data quality, accuracy, and cost-effectiveness. Big data is contributing to improved targeting and better value for money. It is also enabling remote monitoring in areas that are difficult to access for reasons such as distance, poor infrastructure, or conflict.

Concerns about bias, privacy, and the potential for big data to magnify existing inequalities arose frequently. MERL practitioners cited a number of drawbacks and limitations that make them cautious about using big data. These include lack of trust in the data (including mistrust from members of local communities); misalignment of objectives, capacity, and resources when partnering with big data firms and the corporate sector; and ethical concerns related to privacy, bias, and magnification of inequalities. Barriers to adoption include insufficient resources, absence of relevant use cases, lack of skills for big data, difficulty in determining return on investment, and challenges in pinpointing the tangible value of using big data in MERL.

Our paper includes a series of short case studies of big data applications in MERL. Our research surfaced a need for more systematic and broader sharing of big data use cases and case studies in the development sector.

The field of big data is rapidly evolving, so we expect that shifts have already happened since our research began in 2018. We recommend several steps for advancing with big data and data science in the MERL space, including:

  1. Considering. MERL Tech practitioners should examine relevant learning questions before deciding whether big data is the best tool for the MERL job at hand or whether another source or method could answer them just as well.
  2. Piloting. Various big data approaches need pilot testing to assess their utility and the value they add. Pilot testing should be collaborative; for example, an organization with strong roots at the field level might work with an agency that has technical expertise in relevant areas.
  3. Documenting. The current body of documentation is insufficient to highlight relevant use cases and identify frameworks for determining return on investment in big data for MERL work. The community should do more to document efforts, experiences, successes, and failures in academic and gray literature.
  4. Sharing. There is a hum of activity around big data in the vibrant MERL Tech community. We encourage the MERL Tech community to engage in fora such as communities of practice, salons, events, and other convenings, and to seek less typical avenues for sharing information and learning and to avoid knowledge silos.
  5. Learning. The MERL Tech space is not static; indeed, the terminology and applications of big data have shifted rapidly in the past 5 years and will continue to change over time. The MERL Tech community should participate in new training related to big data, continuing to apply critical thinking to new applications.
  6. Guiding. Big data practitioners are crossing exciting frontiers as they apply new methods to research and learning questions. These new opportunities bring significant responsibility. MERL Tech programs serve people who are often vulnerable — but whose rights and dignity deserve respect. As we move forward with using big data, we must carefully consider, implement, and share guidance for responsible use of these new applications, always honoring the people at the heart of our interventions.

Download the full paper here.

Read the other papers in the State of the Field of MERL Tech series.

What’s Happening with Tech and MERL?

by Linda Raftree, Independent Consultant and MERL Tech organizer

Back in 2014, the humanitarian and development sectors were in the heyday of excitement over innovation and Information and Communication Technologies for Development (ICT4D). The role of ICTs specifically for monitoring, evaluation, research and learning (aka “MERL Tech”) had not been systematized (as far as I know), and it was unclear whether there actually was “a field.” I had the privilege of writing a discussion paper with Michael Bamberger to explore how and why new technologies were being tested and used in the different steps of a traditional planning, monitoring and evaluation cycle. (See graphic 1 below, from our paper).

The approaches highlighted in 2014 focused on mobile phones, for example: text messages (SMS), mobile data gathering, use of mobiles for photos and recording, and mapping with handheld global positioning system (GPS) devices or GPS installed in mobile phones. Promising technologies included tablets, which were only beginning to be used for M&E; “the cloud,” which enabled easier updating of software and applications; remote sensing and satellite imagery; dashboards; and online software that helped evaluators do their work more easily. Social media was also really taking off in 2014. It was seen as a potential way to monitor discussions among program participants and gather their feedback, and was considered an underutilized tool for greater dissemination of evaluation results and learning. Real-time data, big data, and feedback loops were emerging as ways to improve program monitoring and enable quicker adaptation.

In our paper, we outlined five main challenges for the use of ICTs for M&E: selectivity bias; technology- or tool-driven M&E processes; over-reliance on digital data and remotely collected data; low institutional capacity and resistance to change; and privacy and protection. We also suggested key areas to consider when integrating ICTs into M&E: quality M&E planning; design validity; the value-add (or not) of ICTs; using the right combination of tools; adapting and testing new processes before roll-out; technology access and inclusion; motivation to use ICTs; privacy and protection; unintended consequences; local capacity; measuring what matters (not just what the tech allows you to measure); and effectively using and sharing M&E information and learning.

We concluded that:

  • The field of ICTs in M&E is emerging and activity is happening at multiple levels and with a wide range of tools and approaches and actors. 
  • The field needs more documentation on the utility and impact of ICTs for M&E. 
  • Pressure to show impact may open up space for testing new M&E approaches. 
  • A number of pitfalls need to be avoided when designing an evaluation plan that involves ICTs. 
  • Investment in the development, application and evaluation of new M&E methods could help evaluators and organizations adapt their approaches throughout the entire program cycle, making them more flexible and adjusted to the complex environments in which development initiatives and M&E take place.

Where are we now: MERL Tech in 2019

Much has happened globally over the past five years in the wider field of technology, communications, infrastructure, and society, and these changes have influenced the MERL Tech space. Our 2014 focus on basic mobile phones, SMS, mobile surveys, mapping, and crowdsourcing might now appear quaint, considering that worldwide access to smartphones and the Internet has expanded beyond the expectations of many. We know that access is not evenly distributed, but the fact that more and more people are getting online cannot be disputed. Some MERL practitioners are using advanced artificial intelligence, machine learning, biometrics, and sentiment analysis in their work. And as smartphone and Internet use continue to grow, more data will be produced by people around the world. The way that MERL practitioners access and use data will likely continue to shift, and the composition of MERL teams and their required skillsets will also change.

The excitement over innovation and new technologies seen in 2014 could also be seen as naive, however, considering some of the negative consequences that have emerged: for example, social media-inspired violence (such as that in Myanmar), election and political interference through the Internet, misinformation and disinformation, and the race to the bottom through the online “gig economy.”

In this changing context, a team of MERL Tech practitioners (both enthusiasts and skeptics) embarked on a second round of research in order to try to provide an updated “State of the Field” for MERL Tech that looks at changes in the space between 2014 and 2019.

Based on MERL Tech conferences and wider conversations in the MERL Tech space, we identified three general waves of technology emergence in MERL:

  • First wave: Tech for Traditional MERL. Use of technology (including mobile phones, satellites, and increasingly sophisticated databases) to do ‘what we’ve always done,’ with a focus on digital data collection and management. For these uses of “MERL Tech” there is a growing evidence base.
  • Second wave: Big Data. Exploration of big data and data science for MERL purposes. While plenty has been written about big data for other sectors, the literature on the use of big data and data science for MERL is somewhat limited, and it is more focused on potential than actual use.
  • Third wave: Emerging approaches. Technologies and approaches that generate new sources and forms of data; offer different modalities of data collection; provide new ways to store and organize data; and supply new techniques for data processing and analysis. The potential of these has been explored, but there is little evidence base on their actual use for MERL.

We’ll be doing a few sessions at the American Evaluation Association conference this week to share what we’ve been finding in our research. Please join us if you’ll be attending the conference!

Session Details:

Thursday, Nov 14, 2.45-3.30pm: Room CC101D

Friday, Nov 15, 3.30-4.15pm: Room CC101D

Saturday, Nov 16, 10.15-11am: Room CC200DE