
What’s Happening with Tech and MERL?

by Linda Raftree, Independent Consultant and MERL Tech organizer

Back in 2014, the humanitarian and development sectors were in the heyday of excitement over innovation and Information and Communication Technologies for Development (ICT4D). The role of ICTs specifically for monitoring, evaluation, research and learning (aka “MERL Tech“) had not been systematized (as far as I know), and it was unclear whether there actually was “a field.” I had the privilege of writing a discussion paper with Michael Bamberger to explore how and why new technologies were being tested and used in the different steps of a traditional planning, monitoring and evaluation cycle. (See graphic 1 below, from our paper).

The approaches highlighted in 2014 focused on mobile phones: for example, text messages (SMS), mobile data gathering, use of mobiles for photos and recording, and mapping with handheld global positioning system (GPS) devices or GPS installed in mobile phones. Promising technologies included tablets, which were only beginning to be used for M&E; “the cloud,” which enabled easier updating of software and applications; remote sensing and satellite imagery; dashboards; and online software that helped evaluators do their work more easily. Social media was also really taking off in 2014. It was seen as a potential way to monitor discussions among program participants and gather their feedback, and as an underutilized tool for wider dissemination of evaluation results and learning. Real-time data, big data, and feedback loops were emerging as ways to improve program monitoring and enable quicker adaptation.

In our paper, we outlined five main challenges for the use of ICTs for M&E: selectivity bias; technology- or tool-driven M&E processes; over-reliance on digital data and remotely collected data; low institutional capacity and resistance to change; and privacy and protection. We also suggested key areas to consider when integrating ICTs into M&E: quality M&E planning; design validity; the value-add (or not) of ICTs; using the right combination of tools; adapting and testing new processes before roll-out; technology access and inclusion; motivation to use ICTs; privacy and protection; unintended consequences; local capacity; measuring what matters (not just what the tech allows you to measure); and effectively using and sharing M&E information and learning.

We concluded that:

  • The field of ICTs in M&E is emerging, with activity happening at multiple levels and involving a wide range of tools, approaches, and actors.
  • The field needs more documentation on the utility and impact of ICTs for M&E. 
  • Pressure to show impact may open up space for testing new M&E approaches. 
  • A number of pitfalls need to be avoided when designing an evaluation plan that involves ICTs. 
  • Investment in the development, application and evaluation of new M&E methods could help evaluators and organizations adapt their approaches throughout the entire program cycle, making them more flexible and adjusted to the complex environments in which development initiatives and M&E take place.

Where are we now? MERL Tech in 2019

Much has happened globally over the past five years in the wider field of technology, communications, infrastructure, and society, and these changes have influenced the MERL Tech space. Our 2014 focus on basic mobile phones, SMS, mobile surveys, mapping, and crowdsourcing might now appear quaint, considering that worldwide access to smartphones and the Internet has expanded beyond the expectations of many. We know that access is not evenly distributed, but the fact that more and more people are getting online cannot be disputed. Some MERL practitioners are using advanced artificial intelligence, machine learning, biometrics, and sentiment analysis in their work. And as smartphone and Internet use continue to grow, more data will be produced by people around the world. The way that MERL practitioners access and use data will likely continue to shift, and the composition of MERL teams and their required skillsets will also change.

The excitement over innovation and new technologies seen in 2014 could also be seen as naive, however, considering some of the negative consequences that have since emerged: for example, social media-inspired violence (such as that in Myanmar), election and political interference through the Internet, misinformation and disinformation, and the race to the bottom driven by the online “gig economy.”

In this changing context, a team of MERL Tech practitioners (both enthusiasts and skeptics) embarked on a second round of research in order to try to provide an updated “State of the Field” for MERL Tech that looks at changes in the space between 2014 and 2019.

Based on MERL Tech conferences and wider conversations in the MERL Tech space, we identified three general waves of technology emergence in MERL:

  • First wave: Tech for Traditional MERL: Use of technology (including mobile phones, satellites, and increasingly sophisticated databases) to do ‘what we’ve always done,’ with a focus on digital data collection and management. For these uses of “MERL Tech” there is a growing evidence base. 
  • Second wave: Big Data: Exploration of big data and data science for MERL purposes. While plenty has been written about big data for other sectors, the literature on the use of big data and data science for MERL is limited and focused more on potential than on actual use. 
  • Third wave: Emerging approaches: Technologies and approaches that generate new sources and forms of data; offer different modalities of data collection; provide new ways to store and organize data; and provide new techniques for data processing and analysis. The potential of these has been explored, but there is little evidence on their actual use for MERL. 

We’ll be doing a few sessions at the American Evaluation Association conference this week to share what we’ve been finding in our research. Please join us if you’ll be attending the conference!

Session Details:

Thursday, Nov 14, 2:45-3:30pm: Room CC101D

Friday, Nov 15, 3:30-4:15pm: Room CC101D

Saturday, Nov 16, 10:15-11:00am: Room CC200DE

Practicing Safe Monitoring and Evaluation in the 21st Century

By Stephen Porter. Adapted from the original post published here.

Monitoring and evaluation practice can do harm. It can harm:

  • the environment by prioritizing economic gain over species that have no voice
  • people who are invisible to us when we are in a position of power
  • people from whom we request information that can then be misused.

In the quest to understand What Works, the focus is often placed too narrowly on program goals rather than on the safety of people. A classic example from the environmental domain is DDT: “promoted as a wonder-chemical, the simple solution to pest problems large and small. Today, nearly 40 years after DDT was banned in the U.S., we continue to live with its long-lasting effects.” The original evaluation of its effects failed to identify harm and emphasized its benefits. Only when harm to the ecosystem became more apparent was evidence presented, in Rachel Carson’s book Silent Spring. We should not have to wait for failure to become so apparent before evaluating for harm.

Join me, Veronica Olazabal, Rodney Hopson, Dugan Fraser and Linda Raftree, for a session on “Institutionalizing Doing no Harm in Monitoring and Evaluation” on Thursday, Nov 14, 2019, 8-9am, Room CC M100 H, at the American Evaluation Association Conference in Minneapolis.

Ethical standards have been developed for evaluators, and they are discussed at conferences and included in professional training. Yet institutional monitoring and evaluation practices still struggle to fully come to grips with the reality of harm amid the pressure to report results. If we want monitoring and evaluation to be safer in the 21st century, we need to shift from training and evaluator-to-evaluator discussions to changing institutional practices.

At a workshop convened by Oxfam and the Rockefeller Foundation in 2019, we sought to identify core issues that could cause harm and to get to grips with areas where institutions need to change their practices. The workshop brought together partners from UN agencies, philanthropies, research organizations and NGOs, and sought to give substance to these issues. One participant noted that although the UNEG Norms and Standards and UNDP’s evaluation policy are designed to make evaluation safe, in practice little consideration is given to capturing or understanding the unintended or perverse consequences of programs or policies. The workshop explored this and other issues and identified three areas of practice that could help to reframe institutional monitoring and evaluation in a practical manner.

1. Data rights, privacy and protection

In working on rights in the 21st century, data and information are some of the most important ‘levers’ pulled to harm and disadvantage people. Oxfam has had a Responsible Data in Program policy in place since 2015, which goes some way towards recognizing this. But we know we need to more fully implement data privacy and protection measures in our work.

At Oxfam, work continues on building a rights-based approach, which already includes confederation-wide Data Protection Policies, implementation of responsible data management policies and practices, and other tools aligned with the Responsible Data Policy and European privacy law, including a responsible data training pack.

Planned and future work includes stronger governance, standardized baseline measures of privacy and information security, and communications, guidance, and change management. This includes changes in evaluation protocols related to how we assess risk to the people we work with, who gets access to the data, and how we ensure consent for how the data will be used.

This is a start, but consistent implementation is hard, and if we cannot competently operate the controls within our own reach, it becomes harder to call out others who cause harm by misusing theirs.

2. Harm prevention lens for evaluation

The discussion highlighted that evaluation has not often sought to understand the harm caused by practices or interventions. When it does, however, the results can powerfully shed new light on an issue. A case that starkly illustrates potential under-reporting is the UN Military Operation in Liberia (UNMIL). UNMIL was put in place with the aim “to consolidate peace, address insecurity and catalyze the broader development of Liberia”. Traditionally we would evaluate against this objective. Taking a harm lens, we might instead evaluate the sexual exploitation and abuse related to the deployment. The reporting system records low levels of abuse: 14 cases from 2007-2008 and 6 in 2015. A study by Beber, Gilligan, Guardado and Karim, however, estimated through a representative randomized survey that more than half of eighteen- to thirty-year-old women in greater Monrovia have engaged in transactional sex, and that most of them (more than three-quarters, or about 58,000 women) have done so with UN personnel, typically in exchange for money.

Changing evaluation practice should not focus only on harm within human systems, but also provide insight into the broader ecosystem. Institutionally, there needs to be championing of efforts to identify harm within and through monitoring and evaluation practice, and of the changes in practice that follow.

3. Strengthening safeguarding and evaluation skills

We need to resource teams appropriately so they have the capacity to respond to harm and to reflect on the potential for harm. This is about both tools and procedures and the conceptual frames that guide them.

Tools and procedures can include, for example:

  • Codes-of-conduct that create a safe environment for reporting issues
  • Transparent reporting lines to safeguarding/safe programming advisors
  • Training based on actual cases
  • Safe data protocols (see above)

All of these fall by the wayside, however, if the values and concepts that guide implementation are absent. At the workshop, Rodney Hopson, drawing on environmental policy and concepts of ecology, presented a frame for increasing evaluators’ usefulness in complex ecologies where safeguarding issues are prevalent, one that emphasizes:

  • Relationships – the need to identify and relate to key interests, interactions, variables and stakeholders amid dynamic and complex issues in an honest manner that is based on building trust.
  • Responsibilities – acting with propriety; doing what is proper, fair, right and just in evaluation, measured against standards.
  • Relevance – being accurate and meaningful technically, culturally and contextually.

Safe monitoring and evaluation in the 21st century does not just ask ‘What works?’; it must also relentlessly ask ‘How can we work differently?’. This includes understanding how harm connects human and environmental systems. The three areas noted here are the start of a conversation and a challenge to institutions to think more about what it means to be safe in monitoring and evaluation practice.

Planning to attend the American Evaluation Association Conference this week? Join us for the session “Institutionalizing Doing no Harm in Monitoring and Evaluation” on Thursday, Nov 14, 2019, from 8:00-9:00am in room CC M100 H.

Panelists will discuss ideas to better address harm with regard to: (i) harm identification and mitigation in evaluation practice; (ii) responsible data practice; (iii) understanding harm in an international development context; and (iv) evaluation in complex ecologies.

The panel will be chaired by Veronica M. Olazabal (Senior Advisor & Director, Measurement, Evaluation and Organizational Performance, The Rockefeller Foundation), with speakers Stephen Porter (Evaluation Strategy Advisor, World Bank), Linda Raftree (Independent Consultant and Organizer of MERL Tech), Dugan Fraser (Professor & Director, CLEAR-AA, University of the Witwatersrand, Johannesburg) and Rodney Hopson (Professor of Evaluation, Department of Educational Psychology, University of Illinois Urbana-Champaign). View the full program here: https://lnkd.in/g-CHMEj

Join us for MERL Tech DC, Sept 5-6th!

MERL Tech DC: Taking Stock

September 5-6, 2019

FHI 360 Academy Hall, 8th Floor
1825 Connecticut Avenue NW
Washington, DC 20009

We gathered at the first MERL Tech Conference in 2014 to discuss how technology was enabling the field of monitoring, evaluation, research and learning (MERL). Since then, rapid advances in technology and data have altered how most MERL practitioners conceive of and carry out their work. New media and ICTs have permeated the field to the point where most of us can’t imagine conducting MERL without the aid of digital devices and digital data.

The rosy picture of the digital data revolution and an expanded capacity for decision-making based on digital data and ICTs has been clouded, however, with legitimate questions about how new technologies, devices, and platforms — and the data they generate — can lead to unintended negative consequences or be used to harm individuals, groups and societies.

Join us in Washington, DC, on September 5-6 for this year’s MERL Tech Conference where we’ll be taking stock of changes in the space since 2014; showcasing promising technologies, ideas and case studies; sharing learning and challenges; debating ideas and approaches; and sketching out a vision for an ideal MERL future and the steps we need to take to get there.

Conference strands:

Tech and traditional MERL:  How is digital technology enabling us to do what we’ve always done, but better (consultation, design, community engagement, data collection and analysis, databases, feedback, knowledge management)? What case studies can be shared to help the wider sector learn and grow? What kinks do we still need to work out? What evidence base exists that can support us to identify good practices? What lessons have we learned? How can we share these lessons and/or skills with the wider community?

Data, data, and more data: How are new forms and sources of data allowing MERL practitioners to enhance their work? How are MERL Practitioners using online platforms, big data, digitized administrative data, artificial intelligence, machine learning, sensors, drones? What does that mean for the ways that we conduct MERL and for who conducts MERL? What concerns are there about how these new forms and sources of data are being used and how can we address them? What evidence shows that these new forms and sources of data are improving MERL (or not improving MERL)? What good practices can inform how we use new forms and sources of data? What skills can be strengthened and shared with the wider MERL community to achieve more with data?

Emerging tools and approaches: What can we do now that we’ve never done before? What new tools and approaches are enabling MERL practitioners to go the extra mile? Is there a use case for blockchain? What about facial recognition and sentiment analysis in MERL? What are the capabilities of these tools and approaches? What early cases or evidence is there to indicate their promise? What ideas are taking shape that should be tried and tested in the sector? What skills can be shared to enable others to explore these tools and approaches? What are the ethical implications of some of these emerging technological capabilities?

The Future of MERL: Where should we be going and what should the future of MERL look like? What does the state of the sector, of digital data, of technology, and of the world in which we live mean for an ideal future for the MERL sector? Where do we need to build stronger bridges for improved MERL? How should we partner and with whom? Where should investments be taking place to enhance MERL practices, skills and capacities? How will we continue to improve local ownership, diversity, inclusion and ethics in technology-enabled MERL? What wider changes need to happen in the sector to enable responsible, effective, inclusive and modern MERL?

Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines.

Submit your session ideas, register to attend the conference, or reserve a demo table for MERL Tech DC now!

You’ll join some of the brightest minds working on MERL across a wide range of disciplines – evaluators, development and humanitarian MERL practitioners, small and large non-profit organizations, government and foundations, data scientists and analysts, consulting firms and contractors, technology developers, and data ethicists – for 2 days of in-depth sharing and exploration of what’s been happening across this multidisciplinary field and where we should be heading.

Report back on MERL Tech DC

Day 1, MERL Tech DC 2018. Photo by Christopher Neu.

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of “MERL Tech” as an initiative are to:

  • Transform and modernize MERL in an intentionally responsible and inclusive way
  • Promote ethical and appropriate use of tech (for MERL and more broadly)
  • Encourage diversity & inclusion in the sector & its approaches
  • Improve development, tech, data & MERL literacy
  • Build/strengthen community, convene, help people talk to each other
  • Help people find and use evidence & good practices
  • Provide a platform for hard and honest talks about MERL and tech and the wider sector
  • Spot trends and future-scope for the sector

Our fifth MERL Tech DC conference took place on September 6-7, 2018, with a day of pre-workshops on September 5th. Some 300 people from 160 organizations joined us for the two days, and another 70 people attended the pre-workshops.

Attendees came from a wide diversity of professions and disciplines:

What professional backgrounds did we see at MERL Tech DC in 2018?

An unofficial estimate on speaker racial and gender diversity is here.

Gender balance on panels

At this year’s conference, we focused on 5 themes (See the full agenda here):

  1. Building bridges, connections, community, and capacity
  2. Sharing experiences, examples, challenges, and good practice
  3. Strengthening the evidence base on MERL Tech and ICT4D approaches
  4. Facing our challenges and shortcomings
  5. Exploring the future of MERL

As always, sessions were related to: technology for MERL, MERL of ICT4D and Digital Development programs, MERL of MERL Tech, digital data for adaptive decisions/management, ethical and responsible data approaches and cross-disciplinary community building.

Big Data and Evaluation Session. Photo by Christopher Neu.

Sessions included plenaries, lightning talks and breakout sessions. You can find a list of sessions here, including any presentations that have been shared by speakers and session leads. (Go to the agenda and click on the session of interest. If we have received a copy of the presentation, there will be a link to it in the session description).

One topic that we explored more in-depth over the two days was the need to get better at measuring ourselves and understanding both the impact of technology on MERL (the MERL of MERL Tech) and the impact of technology overall on development and societies.

As Anahi Ayala Iacucci said in her opening talk — “let’s think less about what technology can do for development, and more about what technology does to development.” As another person put it, “We assume that access to tech is a good thing and immediately helps development outcomes — but do we have evidence of that?”

Feedback from participants

Some 17.5% of participants filled out our post-conference feedback survey, and 70% of them rated their experience either “awesome” or “good”. Another 7% of participants rated individual sessions through the “Sched” app, with an average session satisfaction rating of 8.8 out of 10.

Topics that survey respondents suggested for next time include: more basic and more advanced tracks, more sessions relating to ethics and responsible data, and a greater focus on accountability in the sector. Read the full Feedback Report here!

What’s next? State of the Field Research!

In order to arrive at an updated sense of where the field of technology-enabled MERL is, a small team of us is planning to conduct some research over the next year. At our opening session, we did a little crowdsourcing to gather input and ideas about what the most pressing questions are for the “MERL Tech” sector.

We’ll be keeping you informed here on the blog about this research and welcome any further input or support! We’ll also be sharing more about individual sessions here.

Evaluating for Trust in Blockchain Applications

by Mike Cooper

This is the fourth in a series of blogs aimed at discussing and soliciting feedback on how the blockchain can benefit MEL practitioners in their work.  The series includes: What does Blockchain Offer to MERL,  Blockchain as an M&E Tool, How Can MERL Inform Maturation of the Blockchain, this post, and future posts on integrating  blockchain into MEL practices. The series leads into a MERL Tech Pre-Workshop on September 5th, 2018 in Washington D.C.  that will go into depth on possibilities and examples of MEL blockchain applications. Register here!

Enabling trust in an efficient manner is the primary innovation that the blockchain delivers, through the use of cryptography and consensus algorithms. Trust is usually built through painstaking, iterative relationship building. The blockchain alleviates the need for much of the resources required to build this trust, but that does not mean that stakeholders will automatically trust a blockchain application. Any blockchain application will still need trust-building mechanisms, and MEL practitioners are uniquely situated to inform how these trust relationships can mature.

Function of trust in the blockchain

Trust is expensive. You pay fees to banks, which provide confidence to sellers who take your debit card as payment and trust that they will receive funds for the transaction. Agriculture buyers pay fees to third parties (who can certify that the produce is organic, etc.) to validate quality control on products coming through the value chain. Often sellers do not see the money from debit card transactions in their accounts automatically, and agriculture actors perpetually face the pressures of being paid weeks after providing goods and/or services. The blockchain could alleviate many of these harmful effects by substituting trust in math for trust in humans.

We pay these third parties because they are trusted agents, but these trusted agents can at times be destructive rent seekers, extracting profits that do not add value to the goods and services they handle. End users in these transactions are accustomed to using standard payment services for utility bills, school fees, and the like. This history of iterative transactions has built a level of trust in these processes. It may not be equitable, but it is what many are used to, and introducing an innovation like blockchain will require understanding how these processes influence stakeholders, what their needs are, and how they might be nudged to trust something different, such as a blockchain application.

How MEL can help understand and build trust

Just as microfinance introduced new methods of sending and receiving money and access to new financial services, and required piloting different possible solutions to build understanding, so will blockchain applications. This is an area where MEL can add value to achieving impact at scale, by designing the methods to iteratively build this understanding and test solutions.

MEL has done this before.  Any project that requires relationship building should be based on understanding the mindset and incentives for relevant actions (behavior) amongst stakeholders to inform the design of the “nudge” (the treatment) intended to shift behavior.

Many of the programs we work on as MEL practitioners involve various forms and levels of relationship building, which is essentially “trust.” There have been many evaluations of relationship building, whether in microfinance, agriculture value chains or policy reform. In each case, “trust” must be defined as a behavior change outcome that is “nudged” based on the framing (mindset) of the stakeholder. This means that each stakeholder, depending on their mindset and the behavior required to facilitate blockchain uptake, will require a customized nudge.

The role of trust in project selection and design: What does that mean for MEL

Defining “trust” should begin during project selection and design. Project selection and design criteria and due diligence are invaluable for MEL. Many of the dimensions of evaluability assessments refer back to the work done in the project selection/design phase (which is why some argue evaluability assessments are essentially project design tools). When it comes to blockchain, the USAID Blockchain Primer provides some of the earliest thinking on how to select and design blockchain projects, hence it is a valuable resource for MEL practitioners who want to start thinking about how they will evaluate blockchain applications.

What should we be thinking about?

Relationship building and trust are behaviors, so blockchain theories of change should state outcomes as behavior changes by specific stakeholders (hence the value-add of tools like stakeholder analysis and outcome mapping). However, these theories of change (ToCs) are only as good as what informs them, so building a knowledge base of blockchain applications, as well as previous lessons learned from evidence on relationship building and trust, will be critical to developing a MEL strategy for blockchain applications.

If you’d like to discuss this and related aspects, join us on September 5th in Washington, DC, for a one-day workshop on “What can the blockchain offer MERL?”

Michael Cooper is a former Associate Director at Millennium Challenge Corporation and the U.S. State Dept in Policy and Evaluation.  He now heads Emergence, a firm that specializes in MEL and Blockchain services. He can be reached at emergence.cooper@gmail.com or through the Emergence website.

How can MERL inform maturation of the blockchain?

by Mike Cooper

This is the third in a series of blogs aimed at discussing and soliciting feedback on how the blockchain can benefit MEL practitioners in their work.  The series includes: What does Blockchain Offer to MERL,  Blockchain as an M&E Tool, this post, and future posts on evaluating for trust in Blockchain applications, and integrating  blockchain into MEL practices. The series leads into a MERL Tech Pre-Workshop on September 5th, 2018 in Washington D.C.  that will go into depth on possibilities and examples of MEL blockchain applications. Register here!

Technology solutions in development contexts can be runaway trains of optimistic thinking. Remember the PlayPump, a low-technology solution meant to provide communities with clean water as children play? Or the Soccket, the soccer ball that was going to help kids learn to read at night? I am not disparaging these good intentions, but the need to learn from the evidence of past failures is widely recognized. When it comes to the blockchain, possibly the biggest technological innovation on the social horizon, the learning captured in guidance like the Principles for Digital Development or Blockchain Ethical Design Frameworks needs to be integrated not only into the design of blockchain applications but also into how MEL practitioners assess this integration and test solutions. Data-driven feedback from MEL will help inform the maturation of human-centered blockchain solutions and mitigate the endless, pointless pilots that exhaust the political will of good-natured partners and create barriers to sustainable impact.

The Blockchain is new but we have a head start in thinking about it

The blockchain is an innovation, and it should be evaluated as such. True, the blockchain could be revolutionary in its impact. And yes, this potential could grease the wheels of the runaway-train thinking referenced above, but it does not moot the evidence we already have about evaluating innovations.

Keeping the risk of the runaway train at bay includes MERL practitioners working with stakeholders to ask: is blockchain the right approach for this at all? Only after determining the competitive advantage of a blockchain solution over other possible solutions should MEL practitioners work with stakeholders to finalize the design of the initial piloting. The USAID Blockchain Primer is the best early thinking about this process and the criteria involved.

Michael Quinn Patton and others have developed an expanded toolkit for MERL practitioners to unpack the complexity of a project and design a MERL framework that responds to the decision-making requirements on the scale-up pathway. Because the blockchain is an innovation, which by definition means there is less evidence on its application but great potential, it will require MEL frameworks that iteratively test and modify applications to inform the scale-up pathway.

The Principles for Digital Development highlight the need for iterative learning in technology driven solutions.  The overlapping regulatory, organizational and technological spheres further assist in unpacking the complexity using tools like Problem Driven Iterative Adaptation (PDIA) or other adaptive management frameworks that are well suited to testing innovations in each sphere.  

How Blockchain is different: Intended Impacts and Potential Spoilers

There will be intended and unintended outcomes from blockchain applications that MEL should account for. General intended outcomes include increased access to services and overall cost savings, while “un-intended” outcomes include the creation of winners and losers.

The primary intended outcomes that could be expected from blockchain applications are cost savings (by cutting out intermediaries), which can translate into increased access to a service or product (assuming any cost savings are re-invested in expanding access), or increased access that results from creating a service where none existed before (for example, access to banking services for rural populations). Hence existing methods for measuring these types of cost savings and increased access could be applied with modification.

However, the blockchain will be disruptive, and I put “un-intended” in quotation marks because the cost savings from blockchain applications result from alleviating the need for some intermediaries or middlemen. These middlemen are third parties who may be some form of rent-seeker, providing a validation, accreditation, certification or other type of service meant to communicate trust. For example, with M-Pesa, loans and other banking services were expanded to new populations. With a financial inclusion blockchain project, these same services could be accessed by the same population but without the need for a bank, hence incurring a cost savings. However, as is well known from many a policy reform intervention, creating efficiencies usually means creating losers, and in our example the losers are those previously offering the services that the blockchain makes more efficient.

The blockchain can facilitate efficiencies, not the elimination of all intermediary functions. With the introduction of any innovation, the need for new functions will emerge as old functions are made obsolete. For example, M-Pesa experienced substantial barriers in its early development until the team began working with kiosk owners who, after being trained, could demonstrate and explain M-Pesa to customers. Hence careful, iterative assessment of the ecosystem (similar to value chain mapping) to identify obsolete functions (losers) and new functions (winners) is critical.

MERL practitioners can add value in mitigating the negative effects of creating losers, who could become spoilers. MERL practitioners have many analytical tools and skills that can help not only in identifying potential spoilers (perhaps through outcome mapping and stakeholder analysis tools) but also in mitigating negative effects (for example, creating user personas of potential spoilers to better assess how to incentivize targeted behavior changes). Hence MEL might be uniquely placed to build a broader understanding amongst stakeholders of what the blockchain is, what it can offer, and how to create a learning framework that builds trust in the solution.

Trust, the real innovation of blockchain

MERL is all about behavior change: no matter the technology or process innovation, it requires uptake, and uptake requires behavior. Trust is a behavior; you trust that when you put your money in a bank, it will be available when you want to use it. Without this behavior, which stems from a belief, there are runs on banks, which in turn fail, further eroding trust in the banking system. The same could be said for paying a water or power utility and expecting it to provide service. The more use, the more a relationship matures into a trusting one. But it does not take much to erode this trust even after the relationship is established; again, think about how easy it is to cause a run on a bank or to stop using a service provider.

The real innovation of the blockchain is that it replaces the need for trust in humans (whether an individual or a system of organizations) with trust in math. Just as any entity needs to build a relationship of trust with its targeted patrons, so will the blockchain have to develop a relationship of trust not only with end users but also with those within the ecosystem who could influence the impact of the blockchain solution, including beneficiaries and potential losers/spoilers. This brings us back to the importance of understanding who these stakeholders are, how they will interact with and influence the blockchain, and their perspectives, needs and capacities.

MERL practitioners who wish to use blockchain will need to pick up the latest thinking in behavioral sciences to understand this “trust” factor for each stakeholder and integrate it into an adaptive management framework.  The next blog in this series will go into further detail about the role of “trust” when evaluating a blockchain application.  

The Blockchain is different — don’t throw the baby out with the bath water

There will inevitably be mountains of pressure to go “full steam ahead” (part of me wants to add “and damn the consequences”) without sufficient data-driven due diligence and ethical review, since blockchain is the next new shiny thing. MERL practitioners should not only be aware of this unfortunate certainty, but also proactively consider their own informed strategy for how they will respond to this pressure. MERL practitioners are uniquely positioned to advocate for data-driven decision making and to provide the data necessary to steer clear of misapplications of blockchain solutions. There are already great resources for MEL practitioners on the ethical criteria and design implications of blockchain solutions.

The potential impact of blockchain is still unknown but if current thinking is to be believed, the impact could be paradigm shifting.  Given this potential, getting the initial testing right to maximize learning will be critical to cultivating the political will, the buy-in, and the knowledge base to kick start something much bigger.  

If you’d like to discuss this and related aspects, join us on September 5th in Washington, DC, for a one-day workshop on “What can the blockchain offer MERL?”

Michael Cooper is a former Associate Director at Millennium Challenge Corporation and the U.S. State Dept in Policy and Evaluation.  He now heads Emergence, a firm that specializes in MEL and Blockchain services. He can be reached at emergence.cooper@gmail.com or through the Emergence website.

Blockchain as an M&E Tool

by Mike Cooper and Shailee Adinolfi

This is the second in a series of blogs aimed at discussing and soliciting feedback on how the blockchain can benefit MEL practitioners in their work.  The series includes: What does Blockchain Offer to MERL, this post (Blockchain as an M&E Tool), and future posts on the use of MEL to inform Blockchain maturation, evaluating for trust in Blockchain applications, and integrating  blockchain into MEL Practices. The series leads into a MERL Tech Pre-Workshop on September 5th, 2018 in Washington D.C.  that will go into depth on possibilities and examples of MEL blockchain applications. Register here!

Introducing the Blockchain as an M&E Tool   

Blockchain is a technology that could transform many of the functions we now take for granted in our daily lives. It could change everything from supply chain management to trade to the Internet of Things (IOT), and possibly even serve as the backbone for the next evolution of the internet itself.  Within international development there have already been blockchain pilots for refugee assistance and financial inclusion (amongst others) with more varied pilots and scaled applications soon to come.

Technological solutions, however, need uptake in order for their effects to be truly known. This is no different for the blockchain. Technology solutions are not self-implementing — their uptake is dependent on social structures and human decision making.  Hence, while on paper the blockchain offers many benefits, the realization of these benefits in the monitoring, evaluation and learning (MEL) space requires close working with MEL practitioners to hear their concerns, excitement, and feedback on how the blockchain can best produce these benefits.

Blockchain removes intermediaries, thus increasing integrity

The blockchain is a data management tool for achieving data integrity and transparency and for addressing privacy concerns. It is a distributed software network of peer-to-peer transactions (data), which are validated through consensus, using pre-established rules. This can remove the need for middlemen or “intermediaries,” meaning that it can “disintermediate” the holders of a traditional MEL database, where data is stored and owned by a set of actors.

Hence the blockchain solves two primary problems:

  1.   It reduces the need for “middlemen” (intermediaries) because it is peer-to-peer in nature.  For MEL, the blockchain may thus reduce the need for people to be involved in data management protocols, from data collection to dissemination, resulting in cost and time efficiencies.
  2.  The blockchain maintains data integrity (meaning that the data is immutable and is only shared in the intended manner) in a distributed peer-to-peer network where the reliability and trustworthiness of the network is inherent to the rules established in the consensus algorithms of the blockchain.  

So, what does this mean?  Simply put, a blockchain is a type of distributed immutable ledger or decentralized database that keeps continuously updated digital records of data ownership. Rather than having a central administrator manage a single database, a distributed ledger has a network of replicated databases, synchronized via the internet, and visible to anyone within the network (more on control of the network and who has access permissions below).
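
To make the ledger idea concrete, here is a minimal, illustrative sketch in Python (not any particular blockchain platform, and with no consensus or networking) of how records can be chained together with hashes so that tampering with an earlier record becomes detectable. The record fields and survey values are hypothetical.

```python
import hashlib
import json
import time

def hash_block(block: dict) -> str:
    """Return the SHA-256 hash of a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_record(chain: list, data: dict) -> None:
    """Append a block whose hash covers the data AND the previous block's hash."""
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = hash_block(block)  # "hash" is not yet in block, so this hashes the body
    chain.append(block)

def is_valid(chain: list) -> bool:
    """Recompute every hash; an edited block or a broken link fails the check."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != hash_block(body):
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical usage: two survey records, then an attempted edit is detected.
ledger = []
add_record(ledger, {"survey_id": "HH-001", "score": 4})
add_record(ledger, {"survey_id": "HH-002", "score": 2})
print(is_valid(ledger))          # True
ledger[0]["data"]["score"] = 5   # tamper with an earlier record
print(is_valid(ledger))          # False
```

In a real distributed ledger, every node holds a replicated copy of this chain and consensus rules decide which blocks get added; the sketch above only shows the chaining-and-verification idea.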

Advantages over Current Use of Centralized Data Management  

Distributed ledgers are much less vulnerable to loss of control over data integrity than current centralized data management systems. Loss of data integrity can happen in numerous ways, whether by hacking, manipulation, or other nefarious or accidental use. Consider the multiple cases of political manipulation of census data recorded in Poor Numbers: How We Are Misled by African Development Statistics and What to Do about It, which were possible because census instruments are designed, and census data analyzed and managed, in a centralized fashion with little to no transparency.

Likewise, within the field of evaluation there has been increasing attention to p-hacking, where statistical analyses are manipulated after the fact to produce results more favorable to the original hypothesis. Imagine if cleaned and anonymized data sets were put onto the blockchain, where transparency, without sacrificing PII, would make p-hacking much more difficult (perhaps resulting in increased trust in data sets and in their overall utility and uptake).

Centralized systems can lose or compromise data (or access to it) due to computer malfunctions, or due to what we call “process malfunctions,” where bureaucratic control over the data builds artificially high barriers to access and subsequent use by anyone outside the central sphere of control. This level of centralized control (as in the examples above regarding manipulation of census design and data, and p-hacking) opens the door to data manipulation.

Computer malfunctions are mitigated by the blockchain because the data does not live in a central network hub but instead “lives” in copies of the ledger that are distributed across every computer in the network. This lack of central control increases transparency. “Hashing” (a form of version control) ensures that unauthorized manipulations are not incorporated into the blockchain, meaning only a person with the necessary permissions can change the data on the chain. With the blockchain, access to information is as open, or closed, as is desired.

How can we use this technology in MEL?

All MEL data must eventually find its way to a digital version of itself, whether it is entered from paper surveys, passes through analytical software, or goes straight into an Excel cell, with varying forms and rigor of quality control. A benefit of blockchain is its compatibility with all digital data. It can include data files from all forms of data collection and from any analytical method or software. Practitioners are free to collect data in whatever manner best suits their mandates, with the blockchain becoming the data management tool at any point after collection, since data can be uploaded to the blockchain at any point, whether loaded directly by enumerators in the field or after additional cleaning and analysis.

MEL has specific data management challenges that the blockchain seems uniquely suited to overcome, including: (1) protection of Personally Identifiable Information (PII) and data integrity; (2) mitigating data management resource requirements; and (3) lowering barriers to end use through timely dissemination and increased access to reliable data.

Let’s explore each of these below:

1. Increasing Protection and Integrity of Data: There might be a knee-jerk reaction against increasing transparency in evaluation data management, given the prevalence of personally identifiable information (PII) and other sensitive data. Meeting internal quality control procedures for developing and sharing draft results is usually a long, arduous process, even more so when delivering cleaned data sets. Hence there might be hesitation about introducing new data management techniques, given the priority placed on protecting PII balanced against the pressure to deliver data sets in a timely fashion.

However, we should learn a lesson from our counterparts in healthcare records management, one of the most PII-laden and sensitive data management fields in the world. The blockchain has been piloted in healthcare records management precisely because it can secure the integrity of sensitive data so efficiently.

Imagine an evaluator completes a round of household surveys; the data is entered, cleaned and anonymized, and the data files are ready to be sent to the receiver (funder, public data catalog, etc.). The funder requires that data be uploaded to the blockchain using a Smart Contract. Essentially, a Smart Contract is a set of “if…then” protocols on the Ethereum network (a specific type of blockchain) which can say, “if all data has been cleaned of PII and is appropriately formatted, etc., it can be accepted onto the blockchain.” If the requirements written into the Smart Contract are not met, the data is rejected and not uploaded to the blockchain (see point 2 below). So, where proper procedures or best or preferred practices are not met, the data is not shared and remains safe within the confines of a (hopefully) secure and reliable centralized database.
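
To make that “if…then” acceptance rule concrete, here is a hedged sketch of the logic in plain Python rather than in Solidity or any actual Ethereum tooling; the field names and rules are hypothetical, and a real Smart Contract would enforce equivalent checks on-chain.

```python
# Illustrative only: the acceptance rule described above, expressed as a plain check.
# A real Smart Contract would encode equivalent "if...then" rules on-chain.
PII_FIELDS = {"name", "phone_number", "gps_coordinates"}          # hypothetical PII columns
REQUIRED_FIELDS = {"respondent_id", "survey_round", "responses"}  # hypothetical format rule

def accept_onto_chain(record: dict) -> bool:
    """Accept a record only if it is properly formatted and contains no PII fields."""
    has_required = REQUIRED_FIELDS.issubset(record.keys())
    has_pii = bool(PII_FIELDS & record.keys())
    return has_required and not has_pii

clean = {"respondent_id": "R-104", "survey_round": 2, "responses": {"q1": 3}}
leaky = {"respondent_id": "R-105", "survey_round": 2, "responses": {"q1": 1},
         "phone_number": "555-0100"}

print(accept_onto_chain(clean))  # True  -> would be written to the chain
print(accept_onto_chain(leaky))  # False -> rejected; stays in the centralized database
```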

This example demonstrates one of the unsung values of the blockchain. When correctly done (meaning the Smart Contract is properly developed), it can ensure that only appropriate data is shared, that it is shared only with those meant to have it, and that the data cannot be manipulated. This is an advantage over current practice, where human error can result in PII being released or in unusable or incompatible data files being shared.

The blockchain also has inherent quality control protocols around version control that mitigate against manipulation of the data. Hashing acts in part as a summary label for each encrypted data set on the blockchain, and any modification to a data set results in a different hash for that data set. Hence version control is automatic and easily tracked through the different hashes, which are one-way only (meaning that once the data is hashed, the hash cannot be reverse-engineered to recover or alter the original data). Thus, all data on the blockchain is immutable.
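
For intuition about why hashed data is effectively tamper-evident, here is a small sketch using a standard SHA-256 fingerprint; the file name is hypothetical, and this only illustrates the one-way property, not how any particular blockchain labels its data sets.

```python
import hashlib

def dataset_hash(path: str) -> str:
    """One-way SHA-256 fingerprint of a data file: any edit yields a different hash."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hypothetical usage: record the hash of a cleaned data set when it is shared,
# then re-hash later to confirm the file has not been modified.
# original = dataset_hash("household_survey_round2.csv")
# ...
# assert dataset_hash("household_survey_round2.csv") == original, "data set was modified"
```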

2. Decreasing Data Management Resources: Current data management practice is very resource intensive for MEL practitioners. Data entry, creation of data files, and so on require ample amounts of time, mostly spent guarding against error, which introduces timeliness issues: processes take so long that the data loses its utility by the time it is “ready” for decision makers. A future post in this series will cover how the blockchain can introduce efficiencies at various points in the data management process (from collection to dissemination). There are many unknowns in this space that require further thinking, including the ability to embed automated cleaning and/or analytical functions into the blockchain, and compatibility issues around data files and software applications (like Stata or NVivo). This series of posts will highlight broad areas where the blockchain can introduce the benefits of an innovation, as well as finer points that still need to be “unpacked” for the benefits to materialize.

3. Distributed ledger enables timely dissemination in a flexible manner: With the increased focus on the use of evaluation data, there has been a corresponding increase in discussion of how evaluation data is shared.

Current data dissemination practices include:

  • depositing them with a data center, data archive, or data bank
  • submitting them to a journal to support a publication
  • depositing them in an institutional repository
  • making them available online via a project or institutional website
  • making them available informally between researchers on a peer-to-peer basis

All these avenues of dissemination are very resource intensive. Each has its own procedures, protocols, and other characteristics that may not be conducive to timely learning. Timelines for publishing in journals are long, with incentives towards publishing only positive results, contributing to dismal utilization rates of results. Likewise, many institutional evaluation catalogs are difficult to navigate, often incomplete, and generally not user friendly. (We will look at query capabilities on the blockchain later in the blog series.)

Using the blockchain to manage and disseminate data could result in more timely and transparent sharing. Practitioners could upload data to the chain at any point after collection, and with the use of Smart Contracts, data can be widely distributed in a controlled manner. Data sets can be easily searchable and available in a much timelier and more user-friendly fashion to a much larger population. This creates the ability to share specific data with specific partners (funders, stakeholders, the general public) in a more automated fashion and on a timelier basis. Different Smart Contracts can be developed so that, for example, funders can see all data as soon as it is collected in the field, while a different Smart Contract with local officials allows them to see data relevant to their locality only after it is entered, cleaned, etc.
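
Conceptually, that stakeholder-specific sharing might look like the following sketch; the roles, fields, and rules are entirely hypothetical, and in a real deployment they would be enforced by on-chain permissions or encryption keys rather than by application code like this.

```python
# Hypothetical records and access rules, for illustration only.
RECORDS = [
    {"district": "North", "status": "cleaned", "indicator": 0.62},
    {"district": "South", "status": "raw",     "indicator": 0.48},
]

ACCESS_RULES = {
    # The funder sees everything as soon as it is collected.
    "funder": lambda r: True,
    # A local official sees only cleaned data for their own locality.
    "north_official": lambda r: r["status"] == "cleaned" and r["district"] == "North",
}

def visible_records(role: str) -> list:
    """Return only the records the given stakeholder role is permitted to see."""
    rule = ACCESS_RULES[role]
    return [r for r in RECORDS if rule(r)]

print(len(visible_records("funder")))          # 2
print(len(visible_records("north_official")))  # 1
```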

With the help of read/write protocols, those managing the data can control the extent to which it is shared. The data itself is immutable, meaning it cannot be changed (in contrast to current practice, where we hope the PDF is “good enough” to guard against modification, but most of the time data are pushed out in Excel sheets or something similar, with no way to determine which data are the “real” data when different versions appear).

Where are we?

We are in the early stages of understanding, developing and exploring the blockchain in general, and its application to MEL in particular. On September 5th, we’ll be leading a day-long Pre-Conference Workshop on What Blockchain Can Do For MERL. The Pre-Conference Workshop and additions to this blog series will focus on how:

  • The blockchain can introduce efficiencies in MEL data management
  • The blockchain can facilitate “end use” whether it is accountability, developmental, formative, etc.
  • We can work with MEL practitioners and other stakeholders to improve the uptake of the blockchain as an innovation by overcoming regulatory, organizational and cultural barriers.

This process is meant to be collaborative, so we invite others to tell us which issues they think warrant further exploration. We look forward to collaborating with others to unpack these issues and to develop thinking that leads to appropriate uptake of blockchain solutions to MEL problems.

Where are we going?

As it becomes increasingly possible that blockchain will be a disruptive technology, it is critical that we think about how it will affect the work of MEL practitioners.  To this end, stay tuned for a few more posts, including:

  • How can MEL inform Blockchain maturation?
  • Evaluating for Trust in Blockchain applications
  • How can we integrate blockchain into MEL Practices?

We would greatly benefit from feedback on this series to help shape the topics it covers. Please comment below or contact the authors with any feedback, which would be much appreciated.

Register here for the September 5th workshop on Blockchain and MERL!

Michael Cooper is a former Associate Director at Millennium Challenge Corporation and the U.S. State Dept in Policy and Evaluation.  He now heads Emergence, a firm that specializes in MEL and Blockchain services. He can be reached at emergence.cooper@gmail.com or through the Emergence website.

Shailee Adinolfi is an international development professional with over 15 years of experience working at the intersection of financial services, technology, and global development. Recently, she performed business development, marketing, account management, and solution design as Vice President at BanQu, a Blockchain-based identity platform. She held a variety of leadership roles on projects related to mobile banking, financial inclusion, and the development of emerging markets. More about Shailee 

What does Blockchain offer to MERL?

by Shailee Adinolfi

By now you’ve read at least one article on the potential of blockchain, as well as the challenges in its current form. USAID recently published a Primer on Blockchain: How to assess the relevance of distributed ledger technology to international development, which explains that distributed ledgers are “a type of shared computer database that enables participants to agree on the state of a set of facts or events (frequently described as an “authoritative shared truth”) in a peer-to-peer fashion without needing to rely on a single, centralized, or fully trusted party”.

Explained differently, the blockchain introduces cost savings and resource efficiencies by allowing data to be entered, stored and shared in an immutable fashion, substituting algorithms and cryptography for the need for a trusted third party.

The blockchain/Distributed Ledger Technology (DLT) industry is evolving quickly, as are the definitions and terminology. Blockchain may not solve world hunger, but the core promises are agreed upon by many – transparency, auditability, resiliency, and streamlining. The challenges, which companies are racing to be the first to address, include scale (speed of transactions), security, and governance.

It’s not time to sit back, wait, and see what happens. It’s time to deepen our understanding. Many have already begun pilots across sectors. As this McKinsey article points out, early data from pilots shows strong potential in the agriculture and government sectors, amongst others. The article indicates that scale may be as little as 3-5 years away, and that’s not far out.

The Center for Global Development’s Michael Pisa argues that the potential benefits of blockchain do not outweigh the associated costs and complexities right now. He suggests that the development community focus its energies and resources on bringing down barriers to actual implementation, such as standards, interoperability, de-siloing data, and legal and regulatory rules around data storage, privacy and protection.

One area where blockchain may be useful is Monitoring, Evaluation, Research and Learning (MERL). But we need to dig in and understand better what the potentials and pitfalls are.

Join us on September 5th for a one-day workshop on Blockchain and MERL at Chemonics International where we will discuss what blockchain offers to MERL.

This is the first in a series of blogs aimed at discussing and soliciting feedback on how the blockchain can benefit MEL practitioners in their work. The series includes this post (What does Blockchain Offer to MERL), Blockchain as an M&E Tool, and future posts on the use of MEL to inform Blockchain maturation, evaluating for trust in Blockchain applications, and integrating blockchain into MEL practices.

 

Evaluating ICT4D projects against the Digital Principles

By Laura Walker McDonald. This post was originally published on the Digital Impact Alliance’s blog on March 29, 2018.

As I have written about elsewhere, we need more evidence of what works and what doesn’t in the ICT4D and tech for social change spaces – and we need to hold ourselves to account more thoroughly and share what we know so that all of our work improves. We should be examining how well a particular channel, tool or platform works in a given scenario or domain; how it contributes to development goals in combination with other channels and tools; how the team selected and deployed it; whether it is a better choice than not using technology or using a different sort of technology; and whether or not it is sustainable.

At SIMLab, we developed our Framework for Monitoring and Evaluation of Technology in Social Change projects to help implementers better measure the impact of their work. It offers resources toward a minimum standard of best practice that implementers can use or work toward, including guidance on how to design and conduct evaluations. With the support of the Digital Impact Alliance (DIAL), the resource is now finalized, and we have added new evaluation criteria based on the Principles for Digital Development.

Last week at MERL Tech London, DIAL formally launched this product, sharing a two-page summary at the event and engaging attendees in a conversation about how it could be used. There, we joined over 100 organizations to discuss Monitoring, Evaluation, Research and Learning related to technology used for social good.

Why evaluate?

Evaluations provide snapshots of ongoing activity and the progress of a project at a specific point in time, based on systematic and objective review against certain criteria. They may inform future funding and program design, guide adjustments to current program design, or gather evidence to establish whether a particular approach is useful. They can be used to examine how, and how far, technology contributes to wider programmatic goals. If set up well, your program will already have evaluation criteria and research questions defined, well before it’s time to commission the evaluation.

Evaluation criteria provide a useful frame for an evaluation, bringing in an external logic that might go beyond the questions that implementers and their management have about the project (such as ‘did our partnerships on the ground work effectively?’ or ‘how did this specific event in the host country affect operations?’) to incorporate policy and best practice questions about, for example, protection of target populations, risk management, and sustainability. The criteria for an evaluation could be any set of questions that draw on an organization’s mission, values, principles for action; industry standards or other best practice guidance; or other thoughtful ideas of what ‘good’ looks like for that project or organization. Efforts like the Principles for Digital Development can set useful standards for good practice, and could be used as evaluation criteria.

Evaluating our work, and sharing learning, is radical – and critically important

While the potential for technology to improve the lives of vulnerable people around the world is clear, it is also evident that these improvements are not keeping pace with advances in the sector. Understanding why requires looking critically at our work and holding ourselves to account. There is still insufficient evidence of the contribution technology makes to social change work. What evidence exists is often not shared, or the analysis does not get to the core issues. Even more important, the lessons from what has not worked, and why, have not been documented and absorbed.

Technology-enabled interventions succeed or fail based on their sustainability, business models, data practices, choice of communications channel and technology platform, organizational change, risk models, and user support, among many other factors. We need to build and examine evidence that considers these issues and that tells us what has been successful, what has failed, and why. Holding ourselves to account against standards like the Principles is a great way to improve our practice and honor our commitment to the people we seek to help through our work.

Using the Digital Principles as evaluation criteria

The Principles for Digital Development are a set of living guidance intended to help practitioners succeed in applying technology to development programs. They were developed, based on some pre-existing frameworks, by a working group of practitioners and are now hosted by the Digital Impact Alliance.

These nine principles could also form a useful set of evaluation criteria, not unlike the OECD evaluation criteria or the Sphere standards. The Principles overlap, so data can be used to examine more than one criterion, and not every evaluation would need to consider all of the Digital Principles.

Below are some examples of Digital Principles and sample questions that could initiate, or contribute to, an evaluation; a rough sketch of how such questions might be organized into an evaluation matrix follows the list.

Design with the User: Great projects are designed with input from the stakeholders and users who are central to the intended change. How far did the team design the project with its users, based on their current tools, workflows, needs and habits, and work from clear theories of change and adaptive processes?

Understand the Existing Ecosystem: Great projects and programs are built, managed, and owned with consideration given to the local ecosystem. How far did the project work to understand the local, technology and broader global ecosystem in which the project is situated? Did it build on existing projects and platforms rather than duplicating effort? Did the project work sensitively within its ecosystem, being conscious of its potential influence and sharing information and learning?

Build for Sustainability: Great projects factor in the physical, human, and financial resources that will be necessary for long-term sustainability. How far did the project: 1) think through the business model, ensuring that the value for money and incentives are in place not only during the funded period but afterwards, and 2) ensure that long-term financial investments in critical elements like system maintenance and support, capacity building, and monitoring and evaluation are in place? Did the team consider whether there was an appropriate local partner to work through, hand over to, or support the development of, such as a local business or government department?

Be Data Driven: Great projects fully leverage data, where appropriate, to support project planning and decision-making. How far did the project use real-time data to make decisions, use open data standards wherever possible, and collect and use data responsibly according to international norms and standards?

Use Open Standards, Open Data, Open Source, and Open Innovation: Great projects make appropriate choices, based on the circumstances and the sensitivity of their project and its data, about how far to use open standards, open the project’s data, use open source tools and share new innovations openly. How far did the project: 1) take an informed and thoughtful approach to openness, thinking it through in the context of the theory of change and considering risk and reward, 2) communicate about what being open means for the project, and 3) use and manage data responsibly according to international norms and standards?
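As a purely hypothetical illustration of the idea above, the sketch below (in Python) organizes a few of these principles and sample questions into a simple matrix that an evaluation team could score and annotate with evidence. The structure, the rating convention, and the wording are illustrative assumptions, not part of the SIMLab Framework or the official Digital Principles guidance.

```python
# Hypothetical sketch: turning Digital Principles sample questions into a
# simple evaluation matrix. Structure and scoring are illustrative only.
CRITERIA = {
    "Design with the User": [
        "How far was the project designed with its users, based on their "
        "current tools, workflows, needs and habits?",
    ],
    "Understand the Existing Ecosystem": [
        "Did the project build on existing projects and platforms rather "
        "than duplicating effort?",
    ],
    "Build for Sustainability": [
        "Are long-term investments in maintenance, support, capacity "
        "building, and M&E in place beyond the funded period?",
    ],
    "Be Data Driven": [
        "Did the project use real-time data for decisions and handle data "
        "responsibly according to international norms and standards?",
    ],
}

def blank_matrix(criteria: dict) -> list:
    """Flatten the criteria into rows an evaluator can score (e.g. on a 1-5
    scale) and back with evidence."""
    return [
        {"principle": principle, "question": question, "score": None, "evidence": ""}
        for principle, questions in criteria.items()
        for question in questions
    ]

for row in blank_matrix(CRITERIA):
    print(f"[{row['principle']}] {row['question']}")
```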

For a fuller set of guidance, see the complete Framework for Monitoring and Evaluating Technology, and the more nuanced and in-depth guidance on the Principles, available on the Digital Principles website.

MERL Tech London 2018 Agenda is out!

We’ve been working hard over the past several weeks to finish up the agenda for MERL Tech London 2018, and it’s now ready!

We’ve got workshops, panels, discussions, case studies, lightning talks, demos, community building, socializing, and an evening reception with a Fail Fest!

Topics range from mobile data collection, to organizational capacity, to learning and good practice for information systems, to data science approaches, to qualitative methods using mobile ethnography and video, to biometrics and blockchain, to data ethics and privacy and more.

You can search the agenda to find the topics, themes, and tools that are most interesting; identify sessions most relevant to your organization’s size and approach; pick the session methodologies you prefer (some of us like participatory and some of us like listening); and learn more about the different speakers and facilitators and their work.

Tickets are going fast, so be sure to snap yours up before it’s too late! (Register here!)

View the MERL Tech London schedule & directory.