Guest Post by Zach Tilton, Doctoral Research Associate, Interdisciplinary Ph.D. in Evaluation (IDPE), Western Michigan University
Would I be revealing too much if I said we initially envisioned and even titled our knowledge synthesis as a ‘rapid’ scoping review? Hah! After over a year and a half of collaborative research with an amazing team, we likely have just as many findings about how (and how not) to conduct a scoping review as we do about the content of our review on traditional MERL Tech. I console myself that the average Cochrane systematic review takes 30 months to complete (while recognizing that a Cochrane review is a more disciplined form of knowledge synthesis).
Looking back, I could describe our hubris and emotions during the synthesis process as following the trajectory of the Gartner Hype Cycle, a concept we draw on in our broader MERL Tech State of the Field research to conceptualize the maturity and adoption of technology. Our triggering curiosities about the state of the field were followed by multiple peaks of inflated expectations and troughs of disillusionment until we settled onto the plateau of productivity (and publication). We uncovered much about the nature of what we termed traditional MERL Tech, or tech-enabled systematic inquiry that allows us to do what we have always done in the MERL space, only better or differently.
One of our findings was actually related to the possible relationship technologies have with the Gartner Hype Cycle. Based on a typology we developed as we started screening studies for our review, we found that the ratio of studies related to a specific MERL Tech versus studies focused on that same MERL Tech provided an indirect measure of the trust researchers and practitioners had in that technology to deliver results, similar to the expectation variable on the Y axis of the Hype Cycle plane.
Briefly, in focused studies MERL Tech is under the magnifying glass; in related studies MERL Tech is the magnifying glass. When we observed specific technologies being regularly used to study other phenomena significantly more than they were themselves being studied, we inferred these technologies were trusted more than others to deliver results. Conversely, when we observed a higher proportion of technologies being investigated as opposed to facilitating investigations, we inferred these were less trusted to deliver results. In other words, coupled with higher reported frequency, the technologies with higher levels of trust could be viewed as farther along on the hype cycle than those with lower levels of trust. Online surveys, geographic information systems, and quantitative data analysis software were among the most trusted technologies, with dashboards, mobile tablets, and real-time technologies among the least trusted.
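To make the intuition concrete, here is a minimal sketch of how such a trust ratio could be computed from screening counts. This is purely illustrative and not the review team's actual analysis; the study counts and the `trust_ratio` helper are hypothetical.

```python
# Hypothetical counts per technology: (related_studies, focused_studies).
# "Related" = the tech is the magnifying glass; "focused" = it is under it.
study_counts = {
    "online surveys": (48, 6),
    "GIS": (30, 5),
    "mobile tablets": (5, 11),
    "dashboards": (4, 12),
}

def trust_ratio(related: int, focused: int) -> float:
    """Ratio of studies using a technology as an instrument to studies
    examining the technology itself. Higher values suggest greater trust."""
    return related / focused if focused else float("inf")

# Rank technologies from most to least trusted by this proxy measure.
ranked = sorted(study_counts.items(),
                key=lambda kv: trust_ratio(*kv[1]),
                reverse=True)

for tech, (related, focused) in ranked:
    print(f"{tech}: {trust_ratio(related, focused):.2f}")
```

Under these made-up numbers, online surveys (48/6 = 8.0) rank as most trusted and dashboards (4/12 ≈ 0.33) as least, mirroring the pattern the review reports.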
To read a further explanation of this and other findings, conclusions, and recommendations from our MERL Tech State of the Field Scoping Review, download the white paper.
by Linda Raftree, Independent Consultant and MERL Tech organizer
Back in 2014, the humanitarian and development sectors were in the heyday of excitement over innovation and Information and Communication Technologies for Development (ICT4D). The role of ICTs specifically for monitoring, evaluation, research and learning (aka “MERL Tech”) had not been systematized (as far as I know), and it was unclear whether there actually was “a field.” I had the privilege of writing a discussion paper with Michael Bamberger to explore how and why new technologies were being tested and used in the different steps of a traditional planning, monitoring and evaluation cycle. (See graphic 1 below, from our paper).
The approaches highlighted in 2014 focused on mobile phones, for example: text messages (SMS), mobile data gathering, use of mobiles for photos and recording, and mapping with specific handheld global positioning systems (GPS) devices or GPS installed in mobile phones. Promising technologies included tablets, which were only beginning to be used for M&E; “the cloud,” which enabled easier updating of software and applications; remote sensing and satellite imagery; dashboards; and online software that helped evaluators do their work more easily. Social media was also really taking off in 2014. It was seen as a potential way to monitor discussions among program participants and gather their feedback, and was considered an underutilized tool for greater dissemination of evaluation results and learning. Real-time data, big data, and feedback loops were emerging as ways that program monitoring could be improved and quicker adaptation could happen.
In our paper, we outlined five main challenges for the use of ICTs for M&E: selectivity bias; technology- or tool-driven M&E processes; over-reliance on digital data and remotely collected data; low institutional capacity and resistance to change; and privacy and protection. We also suggested key areas to consider when integrating ICTs into M&E: quality M&E planning; design validity; value-add (or not) of ICTs; using the right combination of tools; adapting and testing new processes before roll-out; technology access and inclusion; motivation to use ICTs; privacy and protection; unintended consequences; local capacity; measuring what matters (not just what the tech allows you to measure); and effectively using and sharing M&E information and learning.
We concluded that:
The field of ICTs in M&E is emerging, and activity is happening at multiple levels with a wide range of tools, approaches, and actors.
The field needs more documentation on the utility and impact of ICTs for M&E.
Pressure to show impact may open up space for testing new M&E approaches.
A number of pitfalls need to be avoided when designing an evaluation plan that involves ICTs.
Investment in the development, application and evaluation of new M&E methods could help evaluators and organizations adapt their approaches throughout the entire program cycle, making them more flexible and adjusted to the complex environments in which development initiatives and M&E take place.
Where are we now: MERL Tech in 2019
Much has happened globally over the past five years in the wider field of technology, communications, infrastructure, and society, and these changes have influenced the MERL Tech space. Our 2014 focus on basic mobile phones, SMS, mobile surveys, mapping, and crowdsourcing might now appear quaint, considering that worldwide access to smartphones and the Internet has expanded beyond the expectations of many. We know that access is not evenly distributed, but the fact that more and more people are getting online cannot be disputed. Some MERL practitioners are using advanced artificial intelligence, machine learning, biometrics, and sentiment analysis in their work. And as smartphone and Internet use continue to grow, more data will be produced by people around the world. The way that MERL practitioners access and use data will likely continue to shift, and the composition of MERL teams and their required skillsets will also change.
The excitement over innovation and new technologies seen in 2014 could also be seen as naive, however, considering some of the negative consequences that have emerged, for example social media-inspired violence (such as that in Myanmar), election and political interference through the Internet, misinformation and disinformation, and the race to the bottom through the online “gig economy.”
In this changing context, a team of MERL Tech practitioners (both enthusiasts and skeptics) embarked on a second round of research in order to try to provide an updated “State of the Field” for MERL Tech that looks at changes in the space between 2014 and 2019.
Based on MERL Tech conferences and wider conversations in the MERL Tech space, we identified three general waves of technology emergence in MERL:
First wave: Tech for Traditional MERL: Use of technology (including mobile phones, satellites, and increasingly sophisticated databases) to do ‘what we’ve always done,’ with a focus on digital data collection and management. For these uses of “MERL Tech” there is a growing evidence base.
Second wave: Big Data. Exploration of big data and data science for MERL purposes. While plenty has been written about big data for other sectors, the literature on the use of big data and data science for MERL is somewhat limited, and it is more focused on potential than actual use.
Third wave: Emerging approaches. Technologies and approaches that generate new sources and forms of data; offer different modalities of data collection; provide ways to store and organize data; and provide new techniques for data processing and analysis. The potential of these has been explored, but there is little evidence on their actual use for MERL.
We’ll be doing a few sessions at the American Evaluation Association conference this week to share what we’ve been finding in our research. Please join us if you’ll be attending the conference!