
Integrating MERL with program design is good program management

by Yaquta Fatehi, Program Manager of Performance Measurement at the William Davidson Institute at the University of Michigan; and Heather Esper, Senior Program Manager of Performance Measurement at the William Davidson Institute at the University of Michigan.

At MERL Tech DC 2018, we — Yaquta Fatehi and Heather Esper — led a session titled “Integrating MERL with program design: Presenting an approach to balance your MERL strategy with four principles.” The session focused on our experience of implementing this approach.

The challenge: There are a number of pressing tensions and challenges in development programs related to MERL implementation. These include project teams and MERL teams working in silos and, just as importantly, leadership’s lack of understanding of and commitment to MERL (leadership often views MERL only in terms of accountability). And while solutions have been developed to address some of these challenges, our consortium, the Balanced Design, Monitoring, Evaluation, Research, and Learning (BalanceD-MERL) consortium (under the U.S. Agency for International Development’s (USAID’s) MERLIN program), saw that there was still a strong need to integrate MERL with program design in order to support good program management and adaptive management. We chose four principles – relevant, right-sized, responsible, and trustworthy – to guide this approach and enable sustainable integration of MERL with program design and adaptive management. Definitions of the principles can be found here.

How to integrate program design and MERL (a case example): Our consortium aimed to identify the benefits of such integration and of applying these principles in the Women + Water Global Development Alliance program. The Alliance is a five-year public-private partnership between USAID, Gap, Inc., and four other non-profit sector partners. It draws upon these organizations’ complementary strengths to improve and sustain the health and well-being of women and communities touched by the apparel industry in India. Gap, Inc. had not partnered with USAID before and had limited experience with MERL on a complex program such as this one, which consisted of multiple individual activities or projects implemented by multiple partners. The BalanceD-MERL consortium’s services were requested during the program design stage to develop a rigorous, program-wide, high-level MERL strategy. We proposed co-developing the MERL activities with the Women + Water partners as listed in the MERL Strategy Template (see Table 1 in the case study shared below), which was developed by our consortium partner, the Institute for Development Impact.

Our first step was to co-design the program’s theory of change with the Women + Water partners to establish a shared understanding of the problem and of how the program would address it. The theory of change served as a communication asset that brought the partners to a shared understanding of the solution. Through this process we also identified gaps in the program design that could then be addressed, in turn making the program design stronger. Grounded by the theory of change, and in order to be relevant and trustworthy, we co-developed a risk matrix. This was one of the most useful exercises for Gap, Inc. because it helped them assess their assumptions and identify risks that needed to be monitored frequently. Following this, we co-identified the key performance indicators and associated metadata using the Performance Indicator Reference Sheets format. This exercise, done iteratively with all partners, helped them understand the tradeoffs between the trustworthy and right-sized principles; helped ensure the feasibility of data collection and that indicators were right-sized and relevant; verified that methods were responsible and did not place unnecessary burden on key stakeholders; and confirmed that data was trustworthy enough to provide insights on the activity’s progress and changing context.

In order to integrate MERL with the program design, we closely co-created these key components with the partners. We also co-developed questions for a learning agenda and recommended adaptive management tasks such as quarterly pause and reflect sessions so that leadership and program managers could make necessary adaptations to the program based on performance data. The consortium was also tasked with developing the performance management information system.

Findings: Through this experience, we found that the theory of change can serve as a key tool for integrating MERL with program design and can form the foundation on which to build the remaining MERL activities. We also found that MERL can indeed be compromised by an immature program design informed by an incomplete needs assessment. For all key takeaways from applying this approach and its principles, along with action items for program and MERL practitioners and key questions for leadership, please see the following case study.

All in all, it was an engaging session, and we heard good questions and comments from our audience. To learn more, or if you have any questions about the approach, feel free to email us at wdi-performancemeasurement@umich.edu.

This publication was produced by the William Davidson Institute at the University of Michigan (WDI) in collaboration with World Vision (WV) under the BalanceD-MERL Program, Cooperative Agreement Number AID-OAA-A-15-00061, funded by the U.S. Agency for International Development (USAID). It is made possible by the generous support of the American people through USAID. The contents are the responsibility of the William Davidson Institute and World Vision and do not necessarily reflect the views of USAID or the United States Government.

Big data or big hype: a MERL Tech debate

by Shawna Hoffman, Specialist, Measurement, Evaluation and Organizational Performance at the Rockefeller Foundation.

Both the volume of data available at our fingertips and the speed with which it can be accessed and processed have increased exponentially over the past decade. The potential applications of this data to support monitoring and evaluation (M&E) of complex development programs have generated great excitement. But is all the enthusiasm warranted? Will big data integrate with evaluation — or is this all just hype?

A recent debate that I chaired at MERL Tech London explored these very questions. Alongside two skilled debaters (who also happen to be seasoned evaluators!) – Michael Bamberger and Rick Davies – we sought to unpack whether integration of big data and evaluation is beneficial – or even possible.

Before we began, we used Mentimeter to poll where the audience stood on the topic.

Once the votes were in, we started.

Both Michael and Rick have fairly balanced and pragmatic viewpoints; however, for the sake of a good debate, and to help unearth the nuances and complexity surrounding the topic, they embraced the challenge of representing divergent and polarized perspectives – with Michael arguing in favor of integration, and Rick arguing against.

“Evaluation is in a state of crisis,” Michael argued, “but help is on the way.” Arguments in favor of the integration of big data and evaluation centered on a few key ideas:

  • There are strong use cases for integration. Data science tools and techniques can complement conventional evaluation methodology, providing cheap, quick, complexity-sensitive, longitudinal, and easily analyzable data.
  • Integration is possible. Incentives for cross-collaboration are strong, and barriers to working together are diminishing. Traditionally these fields have been siloed, and their relationship has been characterized by a mutual lack of understanding (or even questioning of the other’s motivations or professional rigor). However, data scientists are increasingly recognizing the benefits of mixed methods, and evaluators are seeing the potential of big data to increase the number of types of evaluation that can be conducted within real-world budget, time, and data constraints. There are some compelling examples (explored in this UN Global Pulse Report) of where integration has been successful.
  • Integration is the right thing to do. New approaches that leverage the strengths of data science and evaluation are potentially powerful instruments for giving voice to vulnerable groups and promoting participatory development and social justice. Without big data, evaluation could miss opportunities to reach the most rural and remote people. Without evaluation (which emphasizes transparency of arguments and evidence), big data algorithms can be opaque “black boxes.”

While this may paint a hopeful picture, Rick cautioned the audience to temper its enthusiasm. He warned of the risk of domination of evaluation by data science discourse, and surfaced some significant practical, technical, and ethical considerations that would make integration challenging.

First, big data are often non-representative, and the algorithms underpinning them are non-transparent. Second, “the mechanistic approaches offered by data science are antithetical to the very notion of evaluation being about people’s values and necessarily involving their participation and consent,” he argued. It is – and will always be – critical to pay attention to the human element that evaluation brings to bear. Finally, big data are helpful for pattern recognition, but the ability to identify a pattern should not be confused with true explanation or understanding (correlation ≠ causation). Overall, there are many problems that integration would not solve, and some that it could create or exacerbate.

The debate confirmed that this question is complex, nuanced, and multi-faceted. It reminded us that there is cause for enthusiasm and optimism, alongside a healthy dose of skepticism. What became very clear is that the future should leverage the respective strengths of these two fields in order to maximize the good and minimize potential risks.

In the end, the side in favor of integration of big data and evaluation won the debate by a considerable margin.

The future of integration looks promising, but it’ll be interesting to see how this conversation unfolds as the number of examples of integration continues to grow.

Interested in learning more and exploring this further? Stay tuned for a follow-up post from Michael and Rick. You can also attend MERL Tech DC in September 2018 if you’d like to join in the discussions in person!