The Hype Cycle of MERL Tech Knowledge Synthesis
Guest Post by Zach Tilton, Doctoral Research Associate, Interdisciplinary Ph.D. in Evaluation (IDPE), Western Michigan University
Would I be revealing too much if I said we initially envisioned and even titled our knowledge synthesis as a ‘rapid’ scoping review? Hah! After over a year and a half of collaborative research with an amazing team, we likely have just as many findings about how (and how not) to conduct a scoping review as we do about the content of our review on traditional MERL Tech. I console myself with the fact that the average Cochrane systematic review takes 30 months to complete (while recognizing that it is a more disciplined form of knowledge synthesis).
Looking back, I could describe our hubris and emotions during the synthesis process as similar to the trajectory of the Gartner Hype Cycle, a concept we draw on in our broader MERL Tech State of the Field research to conceptualize the maturity and adoption of technology. Our triggering curiosities about the state of the field were followed by multiple peaks of inflated expectations and troughs of disillusionment until we settled onto the plateau of productivity (and publication). We uncovered much about the nature of what we termed traditional MERL Tech, or tech-enabled systematic inquiry that allows us to do what we have always done in the MERL space, only better or differently.
One of our findings was actually related to the possible relationship technologies have with the Gartner Hype Cycle. Based on a typology we developed as we started screening studies for our review, we found that the ratio of studies related to a specific MERL Tech versus studies focused on that same MERL Tech provided an indirect measure of the trust researchers and practitioners had in that technology to deliver results, similar to the expectation variable on the Y axis of the Hype Cycle plane.
Briefly, in focused studies MERL Tech is under the magnifying glass; in related studies MERL Tech is the magnifying glass. When we observed specific technologies being regularly used to study other phenomena significantly more often than they were themselves being studied, we inferred that these technologies were trusted more than others to deliver results. Conversely, when we observed a higher proportion of technologies being investigated as opposed to facilitating investigations, we inferred that these were less trusted to deliver results. In other words, coupled with higher reported frequency, the technologies with higher levels of trust could be viewed as farther along on the hype cycle than those with lower levels of trust. Online surveys, geographic information systems, and quantitative data analysis software were among the most trusted technologies, with dashboards, mobile tablets, and real-time technologies among the least trusted.
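The related-versus-focused ratio described above can be sketched as a simple calculation. This is only an illustration of the idea: the technology names are drawn from the post, but every count below is hypothetical, not a figure from the review.

```python
# Sketch of the trust-ratio idea: for each technology, compare how often it
# appears as the instrument of a study ("related") versus the subject of a
# study ("focused"). All counts are invented for illustration only.

study_counts = {
    # technology: (related_count, focused_count) -- hypothetical numbers
    "online surveys": (40, 5),
    "dashboards": (6, 12),
}

def trust_ratio(related: int, focused: int) -> float:
    """Related-to-focused ratio; a higher value suggests the technology
    is used as a tool more often than it is itself examined."""
    return related / focused if focused else float("inf")

for tech, (related, focused) in study_counts.items():
    print(f"{tech}: trust ratio {trust_ratio(related, focused):.2f}")
```

Under this sketch, a ratio well above 1 would place a technology closer to the plateau of productivity, while a ratio below 1 would suggest it is still under the magnifying glass.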
To read a further explanation of this and other findings, conclusions, and recommendations from our MERL Tech State of the Field Scoping Review, download the white paper.
Read the other papers in the State of the Field of MERL Tech series.