
MERL and the 4th Industrial Revolution: Submit your AfrEA abstract now!

by Dhashni Naidoo, Genesis Analytics

Digitization is everywhere! Digital technologies and data have changed the way we engage with each other and how we work. We cannot escape the effects of digitization. Whether in our personal capacity — how our own data is being used — or in our professional capacity, in terms of understanding how to use data and technology. These changes are exciting! But we also need to consider the challenges they present to the MERL community and their impact on development.

The advent and proliferation of big data has the potential to change how evaluations are conducted. New skills are needed to process and analyse big data. Mathematics, statistics and analytical skills will be ever more important. As evaluators, we need to be discerning about the data we use. In a world of copious amounts of data, we need to ensure we have the ability to select the right data to answer our evaluation questions.

We also have an ethical and moral duty to manage data responsibly. We need new strategies and tools to guide the ways in which we collect, store, use and report data. Evaluators need to improve our skills in processing and analysing data. Evaluative thinking in the digital age is evolving, and we need to consider the technical and soft skills required to maintain the integrity of the data and its interpretation.

Though technology can make data collection faster and cheaper, two important considerations are access to technology by vulnerable groups and data integrity. Women, girls and people in rural areas often do not have the same levels of access to technology as men and boys. This limits our ability to rely solely on technology to collect data from these groups, because we need to be aware of inclusion, bias and representativeness. Equally, we need to consider how to maintain the quality of data collected through new technologies such as mobile phones, and to understand how the use of new devices might change or alter how people respond.

In a rapidly changing world where technologies such as AI, Blockchain, Internet of Things, drones and machine learning are on the horizon, evaluators need to be robust and agile in how we change and adapt.

For this reason, a new strand has been introduced at the African Evaluation Association (AfrEA) conference, taking place from 11 – 15 March 2019 in Abidjan, Cote d’Ivoire. This stream, The Fourth Industrial Revolution and its Impact on Development: Implications for Evaluation, will focus on five sub-themes:

  • Guide to Industry 4.0 and Next Generation Tech
  • Talent and Skills in Industry 4.0
  • Changing World of Work
  • Evaluating youth programmes in Industry 4.0
  • MERLTech

Genesis Analytics will be curating this strand.  We are excited to invite experts working in digital development and practitioners at the forefront of technological innovation for development and evaluation to submit abstracts for this strand.

The deadline for abstract submissions is 16 November 2018. For more information please visit the AfrEA Conference site!

How I Learned to Stop Worrying and Love Big Data

by Zach Tilton, a Peacebuilding Evaluation Consultant and a Doctoral Research Associate at the Interdisciplinary PhD in Evaluation program at Western Michigan University. 
 
In 2013 Dan Ariely quipped, “Big data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it….” In 2015 the metaphor was imported to the international development sector by Ben Ramalingam, in 2016 it became a MERL Tech DC lightning talk, and it has been ringing in our ears ever since. So, what about 2018? Well, unlike US national trends in teenage sex, there are some signals that big, or at least ‘bigger,’ data is continuing to make its way not only into the realm of digital development, but also into evaluation. I recently attended the 2018 MERL Tech DC pre-conference workshop Big Data and Evaluation, where participants were introduced to real ways practitioners are putting this trope to bed (sorry, not sorry). In this blog post I share some key conversations from the workshop framed against the ethics of using this new technology, but to do that let me first provide some background.
 
I entered the workshop on my heels. Given the recent spate of security breaches and revelations about micro-targeting, ‘Big Data’ has been somewhat of a bogeyman for me and others. I have taken some pains to limit my digital data footprint, have written passionately about big data and surveillance capitalism, and have long been skeptical of big data applications for serving marginalized populations in digital development and peacebuilding. As I found my seat before the workshop started I thought, “Is it appropriate or ethical to use big data for development evaluation?” My mind caught hold of a 2008 Evaluation Café debate between evaluation giants Michael Scriven and Tom Cook on causal inference in evaluation and the ethics of Randomized Control Trials. After hearing Scriven’s concerns about the ethics of withholding interventions from control groups, Cook asks, “But what about the ethics of not doing randomized experiments?” He continues, “What about the ethics of having causal information that is in fact based on weaker evidence and is wrong? When this happens, you carry on for years and years with practices that don’t work whose warrant lies in studies that are logically weaker than experiments provide.”
 
While I sided with Scriven for most of that debate, this question haunted me. It reminded me of an explanation of structural violence by peace researcher Johan Galtung, who writes, “If a person died from tuberculosis in the eighteenth century it would be hard to conceive of this as violence since it might have been quite unavoidable, but if he dies from it today, despite all the medical resources in the world, then violence is present according to our definition.” Galtung’s intellectual work on violence deals with the difference between the potential and the actual, and with what increases that difference. While there are real issues with data responsibility, algorithmic biases, and automated discrimination that need to be addressed, if there are actually existing technologies and resources not being used to address social and material inequities in the world today, is this unethical, even violent? “What about the ethics of not using big data?” I asked myself back. The following are highlights of the actually existing resources for using big data in the evaluation of social amelioration.
 

Actually Existing Data

 
During the workshop, Kerry Bruce from Social Impact shared her personal mantra with participants: “We need to do a better job of secondary data analysis before we collect any more primary data.” She challenged us to consider how to make use of the secondary data already available to our organizations. She gave examples of potential big data sources such as satellite images, remote sensors, GPS location data, social media, internet searches, call-in radio programs, biometrics, administrative data, and integrated data platforms that merge many secondary data files such as public records and social service agency and client files. The key here is that a great deal of data already exists, much of it collected passively, digitally, and longitudinally. She noted real limitations to accessing existing secondary data, including donor reluctance to fund such work, limited training in appropriate methodologies within research teams, and differences in data availability between contexts. Still, to underscore the potential of secondary data, she shared a case study in which she led a team that used large amounts of secondary, indirect data to identify ecosystems of modern-day slavery at a significantly lower cost than collecting the data first-hand. The outputs of this work will help pinpoint interventions and guide further research into the factors that may help predict and prescribe what works well to keep people from becoming victims of slavery.
 

Actually Existing Tech (and math)

 
Peter York from BCT Partners provided a primer on big data and data science, including the reality check that most of the work is the unsexy “ETL”: the extraction, transformation, and loading of data. He contextualized the potential of the so-called big data revolution by reminding participants that the V’s of big data (velocity, volume, and variety) are made possible by the technological and social infrastructure of increasingly networked populations, whose digital connections enable the monitoring, capturing, and tracking of ever more aspects of our lives in an unprecedented way. He shared, “A lot of what we’ve done in research were hacks because we couldn’t reach entire populations.” With advances in the tech stacks and infrastructure that connect people and their internet-connected devices with each other and the cloud, the utility of inferential statistics and experimental design lessens when entire populations of users are producing observational behavior data. When this occurs, evaluators can apply machine learning to discover the naturally occurring experiments in big data sets, what Peter terms ‘data-driven quasi-experimental design.’ This is exactly what Peter does when he builds causal models to predict and prescribe better programs for child welfare and juvenile justice, automating outcome evaluation and taking cues from precision medicine.
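
For readers who have only met “ETL” in a slide deck, here is a minimal sketch in Python of what the pattern can look like in practice. The survey fields, cleaning rules, and SQLite destination are hypothetical, chosen only to illustrate the extract/transform/load steps rather than any particular tool discussed in the workshop.

```python
import sqlite3

# Extract: in a real pipeline this might pull from an API, a survey export, or a sensor feed.
raw_records = [
    {"respondent_id": "A1", "age": "34", "district": " Kumasi ", "score": "7"},
    {"respondent_id": "A2", "age": "",   "district": "Tamale",   "score": "9"},
    {"respondent_id": "A3", "age": "29", "district": "Kumasi",   "score": "bad"},
]

def transform(record):
    """Clean one record; return None if it fails basic quality checks."""
    try:
        age = int(record["age"])
        score = int(record["score"])
    except ValueError:
        return None  # drop records with missing or malformed values
    return (record["respondent_id"], age, record["district"].strip(), score)

# Load: write the cleaned rows into a simple relational store for analysis.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses (respondent_id TEXT, age INTEGER, district TEXT, score INTEGER)")
clean_rows = [row for row in (transform(r) for r in raw_records) if row is not None]
conn.executemany("INSERT INTO responses (respondent_id, age, district, score) VALUES (?, ?, ?, ?)", clean_rows)

print(conn.execute("SELECT COUNT(*) FROM responses").fetchone()[0], "clean rows loaded")
```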
 
One example of a naturally occurring experiment was the 1854 Broad Street cholera outbreak, in which physician John Snow used a dot map to identify a pattern that revealed the source of the outbreak: the Broad Street water pump. By finding patterns in the data, John Snow was able to lay the groundwork for rejecting the false miasma theory and replacing it with a prototypical germ theory. And although he was already skeptical of miasma theory, by using the data to inform his theory-building he was also practicing a form of prototypical grounded theory. Grounded theory is simply building theory inductively, after data collection and analysis, not before, resulting in theory that is grounded in data. Peter explained, “Machine learning is grounded theory on steroids. Once we’ve built the theory, found the pattern by machine learning, we can go back and let the machine learning test the theory.” In effect, machine learning is like having a million John Snows to pore over data to find the naturally occurring experiments or patterns in the maps of reality that are big data.
 
A key aspect of the value of applying machine learning to big data is that patterns more readily present themselves in datasets that are ‘wide’ as opposed to ‘tall.’ Peter continued, “If you are used to datasets you are thinking in rows. However, traditional statistical models break down with more features, or more columns.” So Peter, and evaluators like him who are applying data science to their evaluative practice, are evolving from traditional frequentist to Bayesian statistical approaches. While there is more to the distinction, the latter uses prior knowledge, or degrees of belief, to determine the probability of success, where the former does not. This distinction is significant for evaluators who want to move beyond predictive correlation to prescriptive evaluation. Peter expounded, “Prescriptive analytics is figuring out what will best work for each case or situation.” For example, with prediction, we can make statements such as: a foster child with certain attributes has a 70% chance of not finding a permanent home. Using the same data points with prescriptive analytics, we can find 30 children who are similar to that foster child and examine what was done in the cases where they did find a permanent home. In a way, using only predictive analytics can cause us to surrender, while including prescriptive analytics can cause us to endeavor.
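
To make the predictive-versus-prescriptive distinction concrete, here is a small sketch in plain Python that scores a new case against its most similar historical cases and then looks at what was done in the similar cases that succeeded. The features, cases, and services are entirely hypothetical and invented for illustration; this is not Peter York’s actual model, just the general pattern he described.

```python
from math import dist
from collections import Counter

# Hypothetical historical cases: features are (age, prior_placements, months_in_care).
cases = [
    {"features": (9, 2, 14),  "permanent_home": True,  "services": "kinship search"},
    {"features": (10, 3, 20), "permanent_home": True,  "services": "mentoring"},
    {"features": (8, 1, 6),   "permanent_home": True,  "services": "kinship search"},
    {"features": (11, 4, 30), "permanent_home": False, "services": "none"},
    {"features": (9, 3, 22),  "permanent_home": False, "services": "none"},
]

new_case = (10, 3, 18)  # the child we want to prescribe for

# Predictive step: estimate the outcome probability from the k nearest neighbours.
k = 3
nearest = sorted(cases, key=lambda c: dist(c["features"], new_case))[:k]
p_home = sum(c["permanent_home"] for c in nearest) / k
print(f"Estimated probability of a permanent home: {p_home:.0%}")

# Prescriptive step: among similar cases that *did* succeed, what was done?
prescriptions = Counter(c["services"] for c in nearest if c["permanent_home"])
print("Services most common in similar successful cases:", prescriptions.most_common())
```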
 

Existing Capacity

The last category of existing resources for applying big data to evaluation was mostly captured by the comments of independent evaluation consultant Michael Bamberger. He spoke of the latent capacity that exists in evaluation professionals and teams, and noted that we are not taking full advantage of big data: “Big data is being used by development agencies, but less by evaluators in these agencies. Evaluators don’t use big data, so there is a big gap.”

He outlined two scenarios for the future of evaluation in this new wave of data analytics: a state of divergence, where evaluators are replaced by big data analysts, and a state of convergence, where evaluators develop a literacy with the principles of big data for their evaluative practice. One problematic consideration with this hypothetical is that many data scientists are not interested in causation, as Peter York noted. To move toward the future of convergence, he shared how big data can enhance the evaluation cycle from appraisal and planning through monitoring, reporting and evaluating sustainability. Michael went on to share a series of caveats, including issues with extractive versus inclusive uses of big data, the fallacy of large numbers, data quality control, and different perspectives on theory, each of which could warrant its own blog post for development evaluation.

While I deepened my basic understanding of data analytics, including the tools and techniques, benefits and challenges, and guidelines for big data and evaluation, my biggest takeaway is to reconsider big data for social good by weighing the ethical dilemma of not using existing data, tech, and capacity to improve development programs, possibly even prescribing specific interventions by identifying their probable efficacy through predictive models before they are deployed.

(Slides from the Big Data and Evaluation workshop are available here).

Do you use or have strong feelings about big data for evaluation? Please continue the conversation below.

 

 

Report back on MERL Tech DC

Day 1, MERL Tech DC 2018. Photo by Christopher Neu.

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of “MERL Tech” as an initiative are to:

  • Transform and modernize MERL in an intentionally responsible and inclusive way
  • Promote ethical and appropriate use of tech (for MERL and more broadly)
  • Encourage diversity & inclusion in the sector & its approaches
  • Improve development, tech, data & MERL literacy
  • Build/strengthen community, convene, help people talk to each other
  • Help people find and use evidence & good practices
  • Provide a platform for hard and honest talks about MERL and tech and the wider sector
  • Spot trends and future-scope for the sector

Our fifth MERL Tech DC conference took place on September 6-7, 2018, with a day of pre-workshops on September 5th. Some 300 people from 160 organizations joined us for the two days, and another 70 people attended the pre-workshops.

Attendees came from a wide diversity of professions and disciplines:

What professional backgrounds did we see at MERL Tech DC in 2018?

An unofficial estimate on speaker racial and gender diversity is here.

Gender balance on panels

At this year’s conference, we focused on 5 themes (See the full agenda here):

  1. Building bridges, connections, community, and capacity
  2. Sharing experiences, examples, challenges, and good practice
  3. Strengthening the evidence base on MERL Tech and ICT4D approaches
  4. Facing our challenges and shortcomings
  5. Exploring the future of MERL

As always, sessions were related to: technology for MERL, MERL of ICT4D and Digital Development programs, MERL of MERL Tech, digital data for adaptive decisions/management, ethical and responsible data approaches and cross-disciplinary community building.

Big Data and Evaluation Session. Photo by Christopher Neu.

Sessions included plenaries, lightning talks and breakout sessions. You can find a list of sessions here, including any presentations that have been shared by speakers and session leads. (Go to the agenda and click on the session of interest. If we have received a copy of the presentation, there will be a link to it in the session description).

One topic that we explored more in-depth over the two days was the need to get better at measuring ourselves and understanding both the impact of technology on MERL (the MERL of MERL Tech) and the impact of technology overall on development and societies.

As Anahi Ayala Iacucci said in her opening talk — “let’s think less about what technology can do for development, and more about what technology does to development.” As another person put it, “We assume that access to tech is a good thing and immediately helps development outcomes — but do we have evidence of that?”

Feedback from participants

Some 17.5% of participants filled out our post-conference feedback survey, and 70% of them rated their experience either “awesome” or “good”. Another 7% of participants rated individual sessions through the “Sched” app, with an average session satisfaction rating of 8.8 out of 10.

Topics that survey respondents suggested for next time include: more basic tracks and more advanced tracks, more sessions relating to ethics and responsible data and a greater focus on accountability in the sector.  Read the full Feedback Report here!

What’s next? State of the Field Research!

In order to arrive at an updated sense of where the field of technology-enabled MERL is, a small team of us is planning to conduct some research over the next year. At our opening session, we did a little crowdsourcing to gather input and ideas about what the most pressing questions are for the “MERL Tech” sector.

We’ll be keeping you informed here on the blog about this research and welcome any further input or support! We’ll also be sharing more about individual sessions here.

Integrating big data into program evaluation: An invitation to participate in a short survey

As we all know, big data and data science are becoming increasingly important in all aspects of our lives. There is a similar rapid growth in the applications of big data in the design and implementation of development programs. Examples range from the use of satellite images and remote sensors in emergency relief and the identification of poverty hotspots, to the use of mobile phones to track migration and estimate changes in income (by tracking airtime purchases), to social media analysis to track sentiment and predict increases in ethnic tension, and the use of smartphones and Internet of Things (IoT) devices to monitor health through biometric indicators.

Despite the rapidly increasing role of big data in development programs, there is speculation that evaluators have been slower to adopt big data than have colleagues working in other areas of development programs. Some of the evidence for the slow take-up of big data by evaluators is summarized in “The future of development evaluation in the age of big data”.  However, there is currently very limited empirical evidence to test these concerns.

To try to fill this gap, my colleagues Rick Davies and Linda Raftree and I would like to invite those of you who are interested in big data and/or the future of evaluation to complete the attached survey. This survey, which takes about 10 minutes to complete, asks evaluators to report on the data collection and data analysis techniques they use in the evaluations they design, manage or analyze, while also asking data scientists how familiar they are with evaluation tools and techniques.

The survey was originally designed to obtain feedback from participants in the MERL Tech conferences on “Exploring the Role of Technology in Monitoring, Evaluation, Research and Learning in Development” that are held annually in London and Washington, DC, but we would now like to broaden the focus to include a wider range of evaluators and data scientists.

One of the ways in which the findings will be used is to help build bridges between evaluators and data scientists by designing integrated training programs for both professions that introduce the tools and techniques of both conventional evaluation practice and data science, and show how they can be combined to strengthen both evaluations and data science research. “Building bridges between evaluators and big data analysts” summarizes some of the elements of a strategy to bring the two fields closer together.

The findings of the survey will be shared through this and other sites, and we hope this will stimulate a follow-up discussion. Thank you for your cooperation and we hope that the survey and the follow-up discussions will provide you with new ways of thinking about the present and potential role of big data and data science in program evaluation.

Here’s the link to the survey – please take a few minutes to fill it out!

You can also join me, Kerry Bruce and Pete York on September 5th for a full day workshop on Big Data and Evaluation in Washington DC.

September 5th: MERL Tech DC pre-workshops

This year at MERL Tech DC, in addition to the regular conference on September 6th and 7th, we’re offering two full-day, in-depth workshops on September 5th. Join us for a deeper look into the possibilities and pitfalls of Blockchain for MERL and Big Data for Evaluation!

What can Blockchain offer MERL? with Shailee Adinolfi, Michael Cooper, and Val Gandhi, co-hosted by Chemonics International, 1717 H St. NW, Washington, DC 20016. 

Tired of the blockchain hype, but still curious on how it will impact MERL? Join us for a full day workshop with development practitioners who have implemented blockchain solutions with social impact goals in various countries. Gain knowledge of the technical promises and drawbacks of blockchain technology as it stands today and brainstorm how it may be able to solve for some of the challenges in MERL in the future. Learn about ethical design principles for blockchain and how to engage with blockchain service providers to ensure that your ideas and programs are realistic and avoid harm. See the agenda here.

Register now to claim a spot at the blockchain and MERL pre-workshop!

Big Data and Evaluation with Michael Bamberger, Kerry Bruce and Peter York, co-hosted by the Independent Evaluation Group at the World Bank – “I” Building, Room: I-1-200, 1850 I St NW, Washington, DC 20006

Join us for a one-day, in-depth workshop on big data and evaluation where you’ll get an introduction to Big Data for Evaluators. We’ll provide an overview of applications of big data in international development evaluation, discuss ways that evaluators are (or could be) using big data and big data analytics in their work. You’ll also learn about the various tools of data science and potential applications, as well as run through specific cases where evaluators have employed big data as one of their methods. We will also address the important question as to why many evaluators have been slower and more reluctant to incorporate big data into their work than have their colleagues in research, program planning, management and other areas such as emergency relief programs. Lastly, we’ll discuss the ethics of using big data in our work. See the agenda here!

Register now to claim a spot at the Big Data and Evaluation pre-workshop!

You can also register here for the main conference on September 6-7, 2018!

 

How MERL Tech Jozi helped me bridge my own data gap

Guest post from Praekelt.org. The original post appeared on August 15 here.

Our team had the opportunity to enjoy a range of talks at the first ever MERL Tech in Johannesburg. Here are some of their key learnings:

During “Designing the Next Generation of MERL Tech Software” by Mobenzi’s CEO Andi Friedman, we were challenged to apply design thinking techniques to critique both our own and our partners’ current projects. I have previously worked on an educational tool aimed at improving the quality of learning for students based in a disadvantaged community in the Eastern Cape, South Africa. I learned that language barriers are a serious concern when it comes to effectively implementing a new tool.

We mapped out a visual representation of solving a communication issue that one of the partners had for an educational programme implemented in rural Eastern Cape, which included drawing various shapes on paper. What we came up with was to replace the posters that had instructions in words with clear visuals that the students were familiar with. This was inspired by the idea that visuals resonate with people more than words.

-Perez Mnkile, Project Manager

Amy Green Presenting on Video Metrics

I really enjoyed the presentation on video metrics from Girl Effect’s Amy Green. She spoke to us about video engagement on Hara Huru Dara, a vlog series featuring social media influencers. What I found really interesting is how hard it is to measure impact or engagement. Different platforms (YouTube vs Facebook) have different definitions for various measurements (e.g. views) and also use a range of algorithms to reach these measurements. Her talk really helped me understand just how hard MERL can be in a digital age! As our projects expand into new technologies, I’ll definitely be more aware of how complicated seemingly simple metrics (for example, views on a video) may be.

-Jessica Manim, Project Manager

Get it right by getting it wrong: embracing failure as a tool for learning and improvement was a theme visible throughout the two-day MERL Tech conference. One session highlighting this theme was conducted by Annie Martin, a Research Associate at Akros, who explored challenges in offline data capture.

She referenced a project in Zambia to track participants of an HIV prevention program, highlighting some of the technical challenges the project faced along the way. The project involved equipping field workers with Android tablets and an application developed to capture data offline and sync it when connectivity was available. A number of bugs caused by insufficient user testing, along with server hosting issues, meant that field workers were often unable to send data or create user IDs.

The lesson, which I believe we strive to include in our developmental processes, is to focus on iterative piloting, testing and learning before deployment. This doesn’t necessarily guarantee a bug-free system or service, but it does encourage us to focus our attention on end users’ and stakeholders’ needs, expectations and requirements.

-Neville Tietz, Service Designer

Slide from Panel on WhatsApp and engagement

Sometimes we don’t fully understand the problems we are trying to solve. Siziwe Ngcwabe from the African Evidence Network gave the opening talk on evidence-based work, which showed me the importance of fully understanding the problem we are solving and identifying the markers of success or failure before we start rolling out solutions. Once we have established all this, we can create effective solutions. Rachel Sibande from DIAL gave a talk on how her organisation is now using data from mobile network providers to anticipate how a disease outbreak will spread, based on the movement patterns of the network’s subscribers. Using this data, they can advise ministries to run campaigns in certain areas and increase medical supplies in others. Rachel’s talk showed me how much easier it is to create an effective solution once you fully understand the problem.

-Katlego Maakane, Project Manager

I really enjoyed the panel discussion on Datafication Discrimination with William Bird, Director of Media Monitoring Africa, Richard Gevers, Director of Open Data Durban, and Koketso Moeti, Executive Director of amandla.mobi, moderated by Siphokazi Mthathi, Executive Director of Oxfam South Africa. Data collected en masse from communities can potentially be used to further discriminate against them, especially when they are not aware of what their data will be used for. For example, information about sexuality can be used to target individuals at a time when anti-discrimination laws are being rapidly reversed in many countries.

I also thought it was interesting how projection models for population movement and for the planning of new residential areas and public infrastructure in South African cities are flawed, since the development of these models is outsourced by government to the private sector and different government departments often use different forecasts. Essentially, the various government departments are all planning cities with different projections, further preventing the poorest people from accessing quality services and infrastructure.

For me this session really highlighted the responsibility we have when collecting data in our projects from vulnerable individuals and that we have to ensure that we interrogate what we intend to use this data for. As part of our process, we must investigate how the data could potentially be exploited. We need to empower people to take control of the information they share and be able to make decisions in their best interest.

-Benjamin Vermeulen, Project Manager

Evaluating for Trust in Blockchain Applications

by Mike Cooper

This is the fourth in a series of blogs aimed at discussing and soliciting feedback on how the blockchain can benefit MEL practitioners in their work.  The series includes: What does Blockchain Offer to MERL,  Blockchain as an M&E Tool, How Can MERL Inform Maturation of the Blockchain, this post, and future posts on integrating  blockchain into MEL practices. The series leads into a MERL Tech Pre-Workshop on September 5th, 2018 in Washington D.C.  that will go into depth on possibilities and examples of MEL blockchain applications. Register here!

Enabling trust in an efficient manner is the primary innovation that the blockchain delivers, through the use of cryptology and consensus algorithms. Trust is usually built through painstaking relationship building and iterative interactions. The blockchain alleviates the need for much of the resources required to build this trust, but that does not mean that stakeholders will automatically trust the blockchain application. Any blockchain application will still need trust-building mechanisms, and MEL practitioners are uniquely situated to inform how these trust relationships can mature.

Function of trust in the blockchain

Trust is expensive. You pay fees to banks, who provide confidence to sellers who take your debit card as payment and trust that they will receive funds for the transaction. Agriculture buyers pay fees to third parties (who can certify that produce is organic, etc.) to validate quality control on products coming through the value chain. Often sellers do not see the money from debit card transactions in their accounts immediately, and agriculture actors perpetually face the pressures of being paid for goods and/or services they provided weeks previously. The blockchain could alleviate many of these harmful effects by substituting trust in math for trust in humans.

We pay these third parties because they are trusted agents, and these trusted agents can at times be destructive rent seekers, creating profits that do not add value to the goods and services they work with. End users in these transactions are used to using standard payment services for utility bills, school fees, etc. This history of iterative transactions has resulted in a level of trust in these processes. It may not be equitable, but it is what many are used to, and introducing an innovation like blockchain will require an understanding of how these processes influence stakeholders, what their needs are, and how they might be nudged to trust something different like a blockchain application.

How MEL can help understand and build trust

Just as microfinance, in introducing new methods of sending and receiving money and access to new financial services, required piloting different possible solutions to build this understanding, so will blockchain applications. This is an area where MEL can add value to achieving mass impact: designing the methods to iteratively build this understanding and test solutions.

MEL has done this before.  Any project that requires relationship building should be based on understanding the mindset and incentives for relevant actions (behavior) amongst stakeholders to inform the design of the “nudge” (the treatment) intended to shift behavior.

Many of the programs we work on as MEL practitioners involve various forms and levels of relationship building, which is essentially “trust”. There have been many evaluations of relationship building, whether in microfinance, agriculture value chains or policy reform. In each case, “trust” must be defined as a behavior change outcome that is “nudged” based on the framing (mindset) of the stakeholder. This means that each stakeholder, depending on their mindset and the behavior required to facilitate blockchain uptake, will require a customized nudge.

The role of trust in project selection and design: What does that mean for MEL

Defining “trust” should begin during project selection and design. Project selection and design criteria and due diligence are invaluable for MEL. Many of the dimensions of evaluability assessments refer back to the work that is done in the project selection/design phase (which is why some argue evaluability assessments are essentially project design tools). When it comes to blockchain, the USAID Blockchain Primer provides some of the earliest thinking on how to select and design blockchain projects; hence it is a valuable resource for MEL practitioners who want to start thinking about how they will evaluate blockchain applications.

What should we be thinking about?

Relationship building and trust are behaviors, so blockchain theories of change should have outcomes stated as behavior changes by specific stakeholders (hence the value add of tools like stakeholder analysis and outcome mapping). However, these Theories of Change (TOC) are only as good as what informs them, so building a knowledge base of blockchain applications, as well as previous lessons learned from evidence on relationship building and trust, will be critical to developing a MEL strategy for blockchain applications.

If you’d like to discuss this and related aspects, join us on September 5th in Washington, DC, for a one-day workshop on “What can the blockchain offer MERL?”

Michael Cooper is a former Associate Director at Millennium Challenge Corporation and the U.S. State Dept in Policy and Evaluation.  He now heads Emergence, a firm that specializes in MEL and Blockchain services. He can be reached at emergence.cooper@gmail.com or through the Emergence website.

How can MERL inform maturation of the blockchain?

by Mike Cooper

This is the third in a series of blogs aimed at discussing and soliciting feedback on how the blockchain can benefit MEL practitioners in their work.  The series includes: What does Blockchain Offer to MERL,  Blockchain as an M&E Tool, this post, and future posts on evaluating for trust in Blockchain applications, and integrating  blockchain into MEL practices. The series leads into a MERL Tech Pre-Workshop on September 5th, 2018 in Washington D.C.  that will go into depth on possibilities and examples of MEL blockchain applications. Register here!

Technology solutions in development contexts can be runaway trains of optimistic thinking. Remember the PlayPump, a low-technology solution meant to provide communities with clean water as children play? Or the Soccket, the soccer ball that was going to help kids learn to read at night? I am not disparaging these good intentions, but the need to learn from the evidence of past failure is widely recognized. When it comes to the blockchain, possibly the biggest technological innovation on the social horizon, the learning captured in guidance like the Principles for Digital Development or Blockchain Ethical Design Frameworks needs to be integrated not only into the design of blockchain applications but also into how MEL practitioners assess this integration and test solutions. Data-driven feedback from MEL will help inform the maturation of human-centered blockchain solutions and mitigate the endless, pointless pilots that exhaust the political will of good-natured partners and create barriers to sustainable impact.

The Blockchain is new but we have a head start in thinking about it

The blockchain is an innovation, and it should be evaluated as such. True, the blockchain could be revolutionary in its impact. And yes, this potential could grease the wheels of the runaway-train thinking referenced above, but it does not negate the evidence we already have about evaluating innovations.

Keeping the risk of the runaway train at bay includes MERL practitioners working with stakeholders to ask: is blockchain the right approach for this at all? Only after determining the competitive advantage of a blockchain solution over other possible solutions should MEL practitioners work with stakeholders to finalize the design of the initial piloting. The USAID Blockchain Primer offers the best early thinking about this process and the criteria involved.

Michael Quinn Patton and others have developed an expanded toolkit for MERL practitioners to best unpack the complexity of a project and design a MERL framework that responds to the decision making requirements on the scale up pathway.  Because the blockchain is an innovation, which by definition means there is less evidence on its application but great potential, it will require MEL frameworks that iteratively test and modify applications to inform the scale up pathway.  

The Principles for Digital Development highlight the need for iterative learning in technology driven solutions.  The overlapping regulatory, organizational and technological spheres further assist in unpacking the complexity using tools like Problem Driven Iterative Adaptation (PDIA) or other adaptive management frameworks that are well suited to testing innovations in each sphere.  

How Blockchain is different: Intended Impacts and Potential Spoilers

There will be intended and unintended outcomes from blockchain applications that MEL should account for. General intended outcomes include increased access to services and overall cost savings, while “un-intended” outcomes include the creation of winners and losers.

The primary intended outcomes that could be expected from blockchain applications are cost savings (by cutting out intermediaries), which result in increased access to a given service or product (assuming any cost savings are re-invested in expanding access), or a possible increase in access that results from creating a service where none existed before (for example, creating access to banking services for rural populations). Hence, methods already used for measuring these specific types of cost savings and increased access could be applied with modification.

However, the blockchain will be disruptive, and when I say “un-intended” (using quotation marks) I do so because the cost savings from blockchain applications result from alleviating the need for some intermediaries or middlemen. These middlemen are third parties who may be some form of rent-seeker, providing a validation, accreditation, certification or other type of service meant to communicate trust. For example, with M-Pesa, loans and other banking services were extended to new populations. With a financial inclusion blockchain project, these same services could be accessed by the same population but without the need for a bank, hence incurring a cost saving. However, as is well known from many a policy reform intervention, creating efficiencies usually means creating losers, and in our example the losers are those previously offering the services that the blockchain makes more efficient.

The blockchain can facilitate efficiencies, not the elimination of all intermediary functions. With the introduction of any innovation, the need for new functions will emerge as old functions are mooted. For example, M-Pesa experienced substantial barriers in its early development until the company began working with kiosk owners who, after being trained up, could demonstrate and explain M-Pesa to customers. Hence, careful iterative assessment of the ecosystem (similar to value chain mapping) to identify mooted functions (losers) and new functions (winners) is critical.

MERL practitioners have a value add in mitigating the negative effects from the creation of losers, who could become spoilers.  MERL practitioners have many analytical tools/skills that can not only help in identifying the potential spoilers (perhaps through various outcome mapping and stakeholder analysis tools) but also in mitigating any negative effects (creating user personas of potential spoilers to better assess how to incentivize targeted behavior changes).  Hence MEL might be uniquely placed to build a broader understanding amongst stakeholders on what the blockchain is, what it can offer and how to create a learning framework that builds trust in the solution.

Trust, the real innovation of blockchain

MERL is all about behavior change, because no matter the technology or process innovation, it requires uptake, and uptake requires behavior. Trust is a behavior: you trust that when you put your money in a bank it will be available when you want to use it. Without this behavior, stemming from a belief, there are runs on banks, which in turn fail, which further erodes trust in the banking system. The same could be said for paying money to a water or power utility and expecting that they will provide service. The more use, the more a relationship matures into a trustful one. But it does not take much to erode this trust even after the relationship is established; again, think about how easy it is to cause a run on a bank or to stop using a service provider.

The real innovation of the blockchain is that it replaces the need for trust in humans (whether an individual or a system of organizations) with trust in math. Just as any entity needs to build a relationship of trust with its targeted patrons, so will the blockchain have to develop a relationship of trust not only with end users but also with those within the ecosystem who could influence the impact of the blockchain solution, including beneficiaries and potential losers/spoilers. This brings us back to the importance of understanding who these stakeholders are, how they will interact with and influence the blockchain, and their perspectives, needs and capacities.

MERL practitioners who wish to use blockchain will need to pick up the latest thinking in behavioral sciences to understand this “trust” factor for each stakeholder and integrate it into an adaptive management framework.  The next blog in this series will go into further detail about the role of “trust” when evaluating a blockchain application.  

The Blockchain is different — don’t throw the baby out with the bath water

There will inevitably be mountains of pressure to go “full steam ahead” (part of me wants to add “and damn the consequences”) without sufficient data-driven due diligence and ethical review, since blockchain is the next new shiny thing. MERL practitioners should not only be aware of this unfortunate certainty, but also pro-actively consider their own informed strategy on how they will respond to this pressure. MERL practitioners are uniquely positioned to advocate for data-driven decision making and provide the data necessary to steer clear of misapplication of blockchain solutions. There are already great resources for MEL practitioners on the ethical criteria and design implications for blockchain solutions.

The potential impact of blockchain is still unknown but if current thinking is to be believed, the impact could be paradigm shifting.  Given this potential, getting the initial testing right to maximize learning will be critical to cultivating the political will, the buy-in, and the knowledge base to kick start something much bigger.  

If you’d like to discuss this and related aspects, join us on September 5th in Washington, DC, for a one-day workshop on “What can the blockchain offer MERL?”

Michael Cooper is a former Associate Director at Millennium Challenge Corporation and the U.S. State Dept in Policy and Evaluation.  He now heads Emergence, a firm that specializes in MEL and Blockchain services. He can be reached at emergence.cooper@gmail.com or through the Emergence website.

Blockchain as an M&E Tool

by Mike Cooper and Shailee Adinolfi

This is the second in a series of blogs aimed at discussing and soliciting feedback on how the blockchain can benefit MEL practitioners in their work.  The series includes: What does Blockchain Offer to MERL, this post (Blockchain as an M&E Tool), and future posts on the use of MEL to inform Blockchain maturation, evaluating for trust in Blockchain applications, and integrating  blockchain into MEL Practices. The series leads into a MERL Tech Pre-Workshop on September 5th, 2018 in Washington D.C.  that will go into depth on possibilities and examples of MEL blockchain applications. Register here!

Introducing the Blockchain as an M&E Tool   

Blockchain is a technology that could transform many of the functions we now take for granted in our daily lives. It could change everything from supply chain management to trade to the Internet of Things (IOT), and possibly even serve as the backbone for the next evolution of the internet itself.  Within international development there have already been blockchain pilots for refugee assistance and financial inclusion (amongst others) with more varied pilots and scaled applications soon to come.

Technological solutions, however, need uptake in order for their effects to be truly known. This is no different for the blockchain. Technology solutions are not self-implementing — their uptake is dependent on social structures and human decision making.  Hence, while on paper the blockchain offers many benefits, the realization of these benefits in the monitoring, evaluation and learning (MEL) space requires close working with MEL practitioners to hear their concerns, excitement, and feedback on how the blockchain can best produce these benefits.

Blockchain removes intermediaries, thus increasing integrity

The blockchain is a data management tool for achieving data integrity, transparency, and addressing privacy concerns. It is a distributed software network of peer-to-peer transactions (data), which are validated through consensus, using pre-established rules. This can remove the need for a middleman or “intermediaries”, meaning that it can “disintermediate” the holders of a traditional MEL database, where data is stored and owned by a set of actors.  

Hence the blockchain solves two primary problems:

  1.   It reduces the need for “middlemen” (intermediaries) because it is peer-to-peer in nature.  For MEL, the blockchain may thus reduce the need for people to be involved in data management protocols, from data collection to dissemination, resulting in cost and time efficiencies.
  2.  The blockchain maintains data integrity (meaning that the data is immutable and is only shared in the intended manner) in a distributed peer-to-peer network where the reliability and trustworthiness of the network is inherent to the rules established in the consensus algorithms of the blockchain.  

So, what does this mean?  Simply put, a blockchain is a type of distributed immutable ledger or decentralized database that keeps continuously updated digital records of data ownership. Rather than having a central administrator manage a single database, a distributed ledger has a network of replicated databases, synchronized via the internet, and visible to anyone within the network (more on control of the network and who has access permissions below).
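
To build intuition for what an “immutable, continuously updated record” means mechanically, here is a toy sketch in Python of a hash-linked ledger on a single machine. It deliberately leaves out the distributed network, consensus rules, and permissions described above, and the record fields are hypothetical; it only shows why tampering with an earlier entry is immediately detectable.

```python
import hashlib
import json

def block_hash(contents):
    """Hash a block's contents (including the previous block's hash) deterministically."""
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    """Append a new record, linking it to the hash of the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev_hash": prev_hash}
    block["hash"] = block_hash({"record": record, "prev_hash": prev_hash})
    chain.append(block)

def is_valid(chain):
    """Verify every block still matches its hash and links to its predecessor."""
    for i, block in enumerate(chain):
        expected = block_hash({"record": block["record"], "prev_hash": block["prev_hash"]})
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, {"survey_id": "HH-001", "village": "X", "score": 7})  # hypothetical MEL records
append_block(chain, {"survey_id": "HH-002", "village": "Y", "score": 5})
print(is_valid(chain))            # True

chain[0]["record"]["score"] = 10  # tamper with an earlier record
print(is_valid(chain))            # False: the change is detectable
```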

Advantages over Current Use of Centralized Data Management  

Distributed ledgers are much less vulnerable to loss of control over data integrity than current centralized data management systems. Loss of data integrity can happen in numerous ways, whether by hacking, manipulation or some other nefarious or accidental use.  Consider the multiple cases of political manipulation of census data as recorded in Poor Numbers: How We Are Misled by African Development Statistics and What to Do about It because census instruments are designed and census data analyzed/managed in a centralized fashion with little to no transparency.

Likewise, within the field of evaluation there has been increasing attention on p-hacking, where analyses are reworked after the fact to produce results more favorable to the original hypothesis. Imagine if cleaned and anonymized data sets were put onto the blockchain, where transparency, without sacrificing PII, makes p-hacking much more difficult (perhaps resulting in increased trust in data sets and their overall utility and uptake).

Centralized systems can have lost and/or compromised data (or loss of access) due to computer malfunctions or what we call “process malfunctions” where the bureaucratic control over the data builds artificially high barriers to access and subsequent use of the data by anyone outside the central sphere of control. This level of centralized control (as in the examples above regarding manipulation of census design/data and p-hacking) introduces the ability for data manipulation.

Computer malfunctions are mitigated by the blockchain because the data does not live in a central network hub but instead “lives” in copies of the ledger that are distributed across every computer in the network. This lack of central control increases transparency. “Hashing” (a form of version control) ensures that unauthorized manipulations of data are not accepted into the blockchain, meaning only a person with the necessary permissions can change the data on the chain. With the blockchain, access to information is as open, or closed, as is desired.

How can we use this technology in MEL?

All MEL data must eventually find its way into a digital version of itself, whether it is entered from paper surveys, goes through analytical software, or goes straight into an Excel cell, with varying forms and rigor of quality control. A benefit of blockchain is its compatibility with all digital data. It can include data files from all forms of data collection and analytical methods or software. Practitioners are free to collect data in whatever manner best suits their mandates, with the blockchain becoming the data management tool at any point after collection, since the data can be uploaded to the blockchain at any point. This means data can be loaded directly by enumerators in the field or after additional cleaning and analysis.

MEL has specific data management challenges that the blockchain seems uniquely suited to overcome, including: (1) protecting Personally Identifiable Information (PII) and data integrity, (2) reducing data management resource requirements, and (3) lowering barriers to end use through timely dissemination and increased access to reliable data.

Let’s explore each of these below:

1. Increasing Protection and Integrity of Data: There might be a knee-jerk reaction against increasing transparency in evaluation data management, given the prevalence of personally identifiable information (PII) and other sensitive data. Meeting internal quality control procedures for developing and sharing draft results is usually a long, arduous process — even more so when delivering cleaned data sets. Hence there might be hesitation about introducing new data management techniques, given the priority placed on protecting PII balanced against the pressure to deliver data sets in a timely fashion.

However, we should learn a lesson from our counterparts in healthcare records management, one of the most PII-laden and sensitive data management fields in the world. The blockchain has seen piloting in healthcare records management precisely because it is able to secure the integrity of sensitive data in such an efficient manner.

Imagine an evaluator completes a round of household surveys; the data is entered, cleaned and anonymized, and the data files are ready to be sent to whoever the receiver is (funder, public data catalog, etc.). The funder requires that data be uploaded to the blockchain using a Smart Contract. Essentially, a Smart Contract is a set of “if……then” protocols on the Ethereum network (a specific type of blockchain) which can say, “if all data has been cleaned of PII and is appropriately formatted….etc….etc…, it can be accepted onto the blockchain.” If the requirements written into the Smart Contract are not met, the data is rejected and not uploaded to the blockchain (see point 2 below). So, in the case where proper procedures or best or preferred practices are not met, the data is not shared and remains safe within the confines of a (hopefully) secure and reliable centralized database.

This example demonstrates one of the unsung values of the blockchain. When correctly done (meaning the Smart Contract is properly developed), it can ensure that only appropriate data is shared, and shared only with those meant to have it, in a manner where the data cannot be manipulated. This is an advantage over current practice, where human error can result in PII being released or in unusable or incompatible data files being shared.
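
For a rough feel of what such a Smart Contract’s “if……then” gate is doing, here is a conceptual sketch in ordinary Python. A real Smart Contract would be written in a language such as Solidity and executed on the Ethereum network; the field names and acceptance rules below are hypothetical stand-ins for whatever the funder and evaluator agree on.

```python
PII_FIELDS = {"name", "phone_number", "gps_coordinates"}   # hypothetical PII columns to exclude
REQUIRED_FIELDS = {"respondent_id", "district", "score"}   # hypothetical required columns

def accept_onto_chain(dataset):
    """Mimic a Smart Contract gate: accept the dataset only if it meets the agreed rules."""
    for row in dataset:
        if PII_FIELDS & row.keys():
            return False, "rejected: PII fields present"
        if not REQUIRED_FIELDS <= row.keys():
            return False, "rejected: required fields missing"
    return True, "accepted onto the chain"

clean = [{"respondent_id": "A1", "district": "Kumasi", "score": 7}]
dirty = [{"respondent_id": "A2", "district": "Tamale", "score": 5, "phone_number": "024..."}]

print(accept_onto_chain(clean))  # (True, 'accepted onto the chain')
print(accept_onto_chain(dirty))  # (False, 'rejected: PII fields present')
```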

The blockchain also has inherent quality control protocols around version control that mitigate against manipulation of the data for whatever reason. Hashing is, in part, a summary labelling of different encrypted data sets on the blockchain, where any modification to a data set results in a different hash for that data set. Hence version control is automatic and easily tracked through the different hashes, which are one-way only (meaning that once the data is hashed, the hash cannot be reverse-engineered to recover or alter the original data). Thus, all data on the blockchain is immutable.
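
That version-control property can be seen with nothing more than Python’s standard hashlib, assuming a small CSV-style data file: changing a single value produces a completely different hash, and the hash cannot be run backwards to recover or alter the data.

```python
import hashlib

def fingerprint(data_bytes):
    """Return the one-way SHA-256 hash of a data file's contents."""
    return hashlib.sha256(data_bytes).hexdigest()

original = b"respondent_id,district,score\nA1,Kumasi,7\n"
edited   = b"respondent_id,district,score\nA1,Kumasi,9\n"   # one value changed

print(fingerprint(original))                          # digest of the original data set
print(fingerprint(edited))                            # a completely different digest
print(fingerprint(original) == fingerprint(edited))   # False: the edit is detectable
```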

2. Decreasing Data Management Resources: Current data management practice is very resource intensive for MEL practitioners. Data entry, creation of data files, and similar tasks require ample amounts of time, mostly spent guarding against error, which introduces timeliness issues where processes take so long that the data loses its utility by the time it is “ready” for decision makers. A future post in this series will cover how the blockchain can introduce efficiencies at various points in the data management process (from collection to dissemination). There are many unknowns in this space that require further thinking, such as the ability to embed automated cleaning and/or analytical functions into the blockchain, or compatibility issues around data files and software applications (like Stata or NVivo). This series of posts will highlight broad areas where the blockchain can introduce the benefits of an innovation, as well as finer points that still need to be “unpacked” for the benefits to materialize.

3. Distributed ledger enables timely dissemination in a flexible manner: With the increased focus on the use of evaluation data, there has been a corresponding increase in discussion of how evaluation data is shared.

Current data dissemination practices include:

  • depositing them with a data center, data archive, or data bank
  • submitting them to a journal to support a publication
  • depositing them in an institutional repository
  • making them available online via a project or institutional website
  • making them available informally between researchers on a peer-to-peer basis

All these avenues of dissemination are very resource intensive. Each avenue has its own procedures, protocols, and other characteristics that may not be conducive to timely learning. Timelines for publishing in journals are long, with incentives towards publishing only positive results, contributing to dismal utilization rates of results. Likewise, many institutional evaluation catalogs are difficult to navigate, often incomplete, and generally not user friendly. (We will look at query capabilities on the blockchain later in the blog series.)

Using the blockchain to manage and disseminate data could result in more timely and transparent sharing. Practitioners could upload data to the chain at any point after collection, and with the use of Smart Contracts, data can be widely distributed in a controlled manner. Data sets can be easily searchable and available in a much timelier and more user-friendly fashion to a much larger population. This creates the ability to share specific data with specific partners (funders, stakeholders, the general public) in a more automated fashion and on a timelier basis. Different Smart Contracts can be developed so that funders can see all data as soon as it is collected in the field, while a different Smart Contract with local officials allows them to see data relevant to their locality only after it is entered, cleaned, and so on.

With the help of read/write protocols, anyone can control the extent to which data is shared. Use of the data is immutable, meaning it cannot be changed (in contrast to current practice, where we hope the PDF is “good enough” to guard against modification, but most times data are pushed out in Excel sheets, or something similar, with no way to determine which is the “real” data when different versions appear).

Where are we?

We are in the early stages of understanding, developing and exploring the blockchain in general and with MEL in particular. On September 5th, we’ll be leading a day-long Pre-Conference Workshop on What Blockchain Can Do For MERL. The Pre-Conference Workshop and additions to this blog series will focus on how:

  • The blockchain can introduce efficiencies in MEL data management
  • The blockchain can facilitate “end use” whether it is accountability, developmental, formative, etc.
  • We can work with MEL practitioners and other stakeholders to improve the uptake of the blockchain as an innovation by overcoming regulatory, organizational and cultural barriers.

This process is meant to be collaborative so we invite others to help inform us on what issues they think warrant further exploration.  We look forward to collaborating with others to unpack these issues to help develop thinking that leads to appropriate uptake of blockchain solutions to MEL problems.  

Where are we going?

As it becomes increasingly possible that blockchain will be a disruptive technology, it is critical that we think about how it will affect the work of MEL practitioners.  To this end, stay tuned for a few more posts, including:

  • How can MEL inform Blockchain maturation?
  • Evaluating for Trust in Blockchain applications
  • How can we integrate blockchain into MEL Practices?

We would greatly benefit from feedback on this series to help craft topics that the series can cover.  Please comment below or contact the authors with any feedback, which would be greatly appreciated.

Register here for the September 5th workshop on Blockchain and MERL!

Michael Cooper is a former Associate Director at Millennium Challenge Corporation and the U.S. State Dept in Policy and Evaluation.  He now heads Emergence, a firm that specializes in MEL and Blockchain services. He can be reached at emergence.cooper@gmail.com or through the Emergence website.

Shailee Adinolfi is an international development professional with over 15 years of experience working at the intersection of financial services, technology, and global development. Recently, she performed business development, marketing, account management, and solution design as Vice President at BanQu, a Blockchain-based identity platform. She held a variety of leadership roles on projects related to mobile banking, financial inclusion, and the development of emerging markets. More about Shailee 

What does Blockchain offer to MERL?

by Shailee Adinolfi

By now you’ve read at least one article on the potential of blockchain, as well as the challenges in its current form. USAID recently published a Primer on Blockchain: How to assess the relevance of distributed ledger technology to international development, which explains that distributed ledgers are “a type of shared computer database that enables participants to agree on the state of a set of facts or events (frequently described as an “authoritative shared truth”) in a peer-to-peer fashion without needing to rely on a single, centralized, or fully trusted party”.

Explained differently, the blockchain introduces cost savings and resource efficiencies by allowing data to be entered, stored and shared in an immutable fashion by substituting the need for a trusted third party with algorithms and cryptography.

The blockchain/Distributed Ledger Technology (DLT) industry is evolving quickly, as are the definitions and terminology. Blockchain may not solve world hunger, but the core promises are agreed upon by many – transparency, auditability, resiliency, and streamlining. The challenges, which companies are racing to be the first to address, include scale (speed of transactions), security, and governance.

It’s not time to sit back, wait, and see what happens. It’s time to deepen our understanding. Many have already begun pilots across sectors. As this McKinsey article points out, early data from pilots shows strong potential in the agriculture and government sectors, amongst others. The article indicates that scale may be as little as 3-5 years away, and that’s not far out.

The Center for Global Development’s Michael Pisa argues that the potential benefits of blockchain do not outweigh the associated costs and complexities right now. He suggests that the development community focus its energies and resources on bringing down barriers to actual implementation, such as standards, interoperability, de-siloing data, and legal and regulatory rules around data storage, privacy and protection.

One area where blockchain may be useful is Monitoring, Evaluation, Research and Learning (MERL). But we need to dig in and understand better what the potentials and pitfalls are.

Join us on September 5th for a one-day workshop on Blockchain and MERL at Chemonics International where we will discuss what blockchain offers to MERL.

This is the first in a series of blogs aimed at discussing and soliciting feedback on how the blockchain can benefit MEL practitioners in their work. The series includes this post (What does Blockchain Offer to MERL), Blockchain as an M&E Tool, and future posts on the use of MEL to inform Blockchain maturation, evaluating for trust in Blockchain applications, and integrating blockchain into MEL Practices.