by Mala Kumar, GitHub Open Source for Good program
My name is Mala, and I lead a program at GitHub called Open Source for Good under our Social Impact team. Before joining GitHub, I spent the better part of a decade wandering around the world designing, managing, implementing and deploying tech for international development (ICT4D) software products. Throughout my career, I was told repeatedly that open source (OS) would revolutionize the ICT4D industry. While I have indeed worked on a few interesting OS products, I began suspecting that statement was more complicated than had been presented.
Indeed, after joining GitHub this past April, I confirmed my suspicion. Overall, the adoption of OS in the social sector – defined as the collection of entities that positively advance or promote human rights – lags far behind that of the commercial, private sector. Why, you may ask?
Here’s one hypothesis we have at GitHub:
After our team’s many years of experience working in the social sector and through the hundreds of conversations we’ve had with fellow social sector actors, we’ve come to believe that IT teams in the social sector have significantly less decision making power and autonomy than commercial, private sector IT teams. This is irrespective of the size, the geographic location, or even the core mission of the organization or company.
In other words, decision-making power in the social sector does not lie with the techies who typically have the best understanding of the technology landscape. Rather, it’s non-techies who tend to make an organization’s IT budgetary decisions. Consequently, when budgetary decision-makers come to GitHub to assess OS tools and they see something like the below, a GitHub repo, they have no idea what they’re seeing. And this is a problem for the sector at large.
We want to help bridge that gap between private sector and social sector tech development. The social sector is quite large, however, so we’ve had to narrow our focus. We’ve decided to target the social sector’s M&E vertical. This is for several reasons:
M&E as a discipline is growing in the social sector
Increasingly more M&E data is being collected digitally
It’s easy to identify a target audience
Linda is great. ☺
How We Hope to Help
Our basic idea is to build a middle “layer” between a GitHub repo and a decision maker’s final budget. I’m calling that a MERL GitHub “Center” until I can come up with a better name.
As a sponsor of MERL Tech DC 2019, we set up our booth smack dab in front of the food and near the coffee, and we took advantage of this prime real estate to learn more about what our potential users would find valuable.
We spent two days talking with as many MERL conference attendees as we could and asked them to complete some exercises. One such exercise was to prioritize the possible features a MERL GitHub Center might have. We’ve summarized the results in the chart below. The top right quadrant contains two types of features: 1) features most commonly sorted as helpful in using open source and 2) features potential Center users said they would actually use. From this exercise, we’ve learned that our minimum viable product (MVP) should include all or some of the following:
Use case studies of open source tools
Description of listed tools
A way to search in the Center
Security assessments of the tools
Beginner’s Guide to Open Source for the Social Sector
Installation guides for listed tools
Chart: aggregation of the prioritization exercise from ~10 participants
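As an illustration of how a prioritization exercise like this might be tallied, here is a minimal sketch in Python. The feature names and numeric scores are hypothetical stand-ins (the actual exercise used card sorting, not numbers); the two axes mirror the chart above: how helpful a feature would be, and whether the participant would actually use it.

```python
from collections import defaultdict

# Hypothetical responses: each participant scores candidate features on two
# axes -- (helpfulness, would-actually-use) -- both on an illustrative 1-5 scale.
responses = [
    {"Use case studies": (5, 5), "Tool descriptions": (4, 5), "Security assessments": (5, 3)},
    {"Use case studies": (4, 4), "Tool descriptions": (5, 4), "Security assessments": (3, 2)},
]

def aggregate(responses):
    """Average the (helpfulness, would-use) scores per feature."""
    totals = defaultdict(lambda: [0.0, 0.0, 0])
    for r in responses:
        for feature, (helpful, would_use) in r.items():
            totals[feature][0] += helpful
            totals[feature][1] += would_use
            totals[feature][2] += 1
    return {f: (h / n, u / n) for f, (h, u, n) in totals.items()}

def mvp_candidates(scores, threshold=3.5):
    """Features scoring high on BOTH axes land in the 'top right' quadrant."""
    return sorted(f for f, (h, u) in scores.items()
                  if h >= threshold and u >= threshold)

scores = aggregate(responses)
shortlist = mvp_candidates(scores)  # top-right-quadrant features
```

With the toy data above, only the features strong on both axes survive into the MVP shortlist, which is the same filtering logic the quadrant chart expresses visually.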
We also spoke to an additional 30+ attendees about the OS tools they currently use. Anecdotally, mobile data collection, GIS, and data visualization were the most common use cases. A few tools are built on or with DHIS2. Many attendees we spoke with are data scientists using R and Python notebooks. DFID and GIZ were mentioned as two large donor organizations that are thinking about OS for MERL funding.
In the coming weeks, we’re going to reach out to many of the attendees we spoke to at MERL Tech to conduct user testing for our forthcoming Center prototype. In the spirit of open source and not duplicating work, we are also speaking with a few potential partners working on different angles to our problem to align our efforts. It’s our hope to build out new tools and product features that will help the MERL community better use and develop OS tools.
How can you get Involved?
Email email@example.com with a brief intro to yourself and your work in OS for social good.
Guest post from Jo Kaybryn, an international development consultant currently directing evaluation frameworks, evaluation quality assurance services, and leading evaluations for UN agencies and INGOs.
“Upping the Ex Ante” is a series of articles aimed at evaluators in international development exploring how our work is affected by – and affects – digital data and technology. I’ve been having lots of exciting conversations with people from all corners of the universe about our brave new world. But I’ve also been conscious that for those who have not engaged a lot with the rapid changes in technologies around us, it can be a bit daunting to know where to start. These articles explore a range of technologies and innovations against the backdrop of international development and the particular context of evaluation. For readers not yet well versed in technology there are lots of sources to do further research on areas of interest.
The series is halfway through, with four articles published.

In Part 1, the series goes back to the olden days (1948!) to consider the origin story of cybernetics and the influences that are present right now in algorithms and big data. The philosophical and ethical dilemmas are a recurring theme in later articles.

Part 2 examines the problem of distance – something technology offers huge strides toward solving, yet never fully solves – with a discussion of what blockchains mean for the veracity of data.

Part 3 considers qualitative data and shines a light on the gulf between our digital data-centric and analogue-centric worlds, and the need for data scientists and social scientists to cooperate to make sense of it.

Part 4 looks at quantitative data and the implications for better decision making; why evaluators really don’t like an algorithmic “black box”; and reflections on how humans’ assumptions and biases leak into our technologies, whether digital or analogue.

The next few articles will focus on ethics, psychology and bias; a case study on a hypothetical machine learning intervention to identify children at risk of maltreatment (lots more risk and ethical considerations); and some thoughts about putting it all in perspective (i.e. Don’t…).
There is no real evidence base about what does and does not work when applying blockchain technology to interventions seeking social impact. Most current blockchain interventions are driven by developers (programmers) and visionary entrepreneurs. Little thought has gone into designing these interventions for “social” impact: there is an overabundant trust in the technology to achieve the outcomes, little focus on the humans interacting with the technology, and scant integration of relevant evidence from behavioral economics, behavior change design, human-centered design, and related fields.
To build the needed evidence base, Monitoring, Evaluation, Research and Learning (MERL) practitioners will need to understand not only the broad strokes of blockchain technology but also the specifics of token design and tokenomics (the political economics of tokenized ecosystems). Token design could become the focal point for MERL on blockchain interventions since:
Most, if not all, blockchain interventions will involve some type of desired behavior change
The token provides the link between the ledger (which is the blockchain) and the social ecosystem created by the token in which the behavior change is meant to happen
Hence the token is the “nudge” meant to leverage behavior change in the social ecosystem while governing the transactions on the blockchain ledger.
(While this blog will focus on these points, it will not go into a full discussion of what tokens are and how they create ecosystems. There are some very good resources out there that do this, which you can review at your leisure and to the degree that works for you. The Complexity Institute has published a book exploring the various attributes of complexity and the main themes involved with tokenomics, while Outlier Ventures has published what I consider to be the best guidance on token design. The Outlier Ventures guidance contains many of the tools MERL practitioners will be familiar with (problem analysis, stakeholder mapping, etc.) and should be consulted.)
Hence it could be that by understanding token design and its requirements and mapping it against our current MERL thinking, tools and practices, we can develop new thinking and tools that could be the beginning point in building our much-needed evidence base.
What is a “blockchain intervention”?
As MERL practitioners, we roughly define an “intervention” as a group of inputs and activities meant to leverage outcomes within a given ecosystem. “Interventions” are what we are usually mandated to assess, evaluate, and learn from.
When thinking about MERL and blockchain, it is useful to think of two categories of “blockchain interventions”.
1) Integrating the blockchain into MERL data collection, entry, management, analysis, or dissemination practices, and
2) MERL strategies for interventions using the blockchain in some way, shape, or form.
Here we will focus on #2, and in so doing demonstrate that while the blockchain is an innovative, potentially disruptive technology, evaluating its applications to social outcomes is still a matter of assessing behavior change against dimensions of intervention design.
Designing for Behavior Change
We generally design interventions (programs, projects, activities) to “nudge” a certain type of behavior (stated as outcomes in a theory of change) among a certain population (beneficiaries, stakeholders, etc.). We often attempt to integrate mechanisms of change into our intervention designs, but frequently fail to do so for a variety of reasons (lack of understanding, lack of resources, lack of political will, etc.). This lack of due diligence in design is partly responsible for the weak evidence base around what works and what does not work in our current universe of interventions.
Enter blockchain technology, which, as MERL practitioners, we will be responsible for assessing in the foreseeable future. We will need to determine how interventions using the blockchain attempt to nudge behavior, what behaviors they seek to nudge, among whom, when, and how well the design of the intervention accomplishes these functions. To do that, we will need to better understand how blockchains use tokens to nudge behavior.
The Centrality of the Token
We have all used tokens before. Stores issue coupons that can only be redeemed at those stores; we get receipts for groceries as soon as we pay; arcades make you buy tokens instead of just using quarters. The coupons and arcade tokens can be considered utility tokens, meaning they can only be used in a specific “ecosystem” – in this case the store and the arcade, respectively. The grocery store receipt is a token because it demonstrates ownership: if you are stopped on the way out of the store, showing your receipt demonstrates that you now have rights of ownership over the foodstuffs in your bag.
Whether you realize it at the time or not, these tokens are trying to nudge your behavior. The store gives you the coupon because the more time you spend in the store trying to redeem coupons, the greater the likelihood you will spend additional money there. The grocery store wants you to pay for all your groceries, while the arcade wants you to buy more tokens than you end up using. If needed, we could design MERL strategies to assess how well these different tokens nudged the desired behaviors. We would do this, in part, by thinking about how each token is designed relative to the behavior it wants to encourage (i.e. the value, frequency, and duration of coupons, etc.).
Thinking about these ecosystems and their respective tokens will help us understand the interdependence between 1) the blockchain as a ledger that records transactions, 2) the token that captures the governance structures for how transactions are stored on the blockchain ledger as well as the incentive models for 3) the mechanisms of change in the social eco-system created by the token.
Figure #1: The inter-relationship between the blockchain (ledger), token, and social ecosystem
Token Design as Intervention Design
Just as we assess theories of change and their mechanisms against intervention design, we will assess blockchain-based interventions against their token design in much the same way. This is because blockchain tokens capture all the design dimensions of an intervention: the problem to be solved; stakeholders and how they influence the problem (and thus the solution); stakeholder attributes (as mapped out in something like a stakeholder analysis); the beneficiary population; assumptions and risks; and so on.
Outlier Ventures has adapted what they call a Token Utility Canvas as a milestone in their token design process. The canvas can be correlated to the various dimensions of an evaluability assessment tool. (I am using the evaluability assessment tool as a demonstration of the necessary dimensions of an intervention design, meaning that it assesses the health of all the components of an intervention design.) The Token Utility Canvas is a useful milestone in the token design process that captures many of the problem diagnostic, stakeholder assessment, and other due diligence tools familiar to MERL practitioners who have seen them used in intervention design. Hence token design can largely be thought of as intervention design and evaluated as such.
Comparing Token Design with Dimensions of Program Design (as represented in an evaluability assessment)
This table is not meant to be exhaustive, and not all of the fields will be explained here, but in general it could be a useful starting point in developing our own thinking and tools for this emerging space.
The Token as a Tool for Behavior Change
Coming up with a taxonomy of blockchain interventions and their relevant tokens is a necessary task, but any blockchain that needs to nudge behavior will have to have a token.
Consider supply chain management. Blockchains are increasingly being used as the ledger system for supply chain management. Supply chains are typically comprised of numerous actors packaging, shipping, receiving, applying quality control protocols to various goods, all with their own ledgers of the relevant goods as they snake their way through the supply chain. This leads to ample opportunities for fraud, theft and high costs associated with reconciling the different ledgers of the different actors at different points in the supply chain. Using the blockchain as the common ledger system, many of these costs are diminished as a single ledger is used with trusted data, hence transactions (shipping, receiving, repackaging, etc.) can happen more seamlessly and reconciliation costs drop.
However even in “simple” applications such as this there are behavior change implications. We still want the supply chain actors to perform their functions in a manner that adds value to the supply chain ecosystem as a whole, rewarding them for good behavior within the ecosystem and punishing for bad.
What if shippers trying to pass on a faulty product had already deposited a certain value of currency in an escrow account (housed in a smart contract on the blockchain)? If they are found to be attempting a prohibited behavior (passing on faulty products), they automatically surrender a certain amount from the escrow account in the blockchain smart contract. How much should be deposited in the escrow account? What is the ratio between the degree of punishment and the undesired action? These are behavioral questions around a mechanism of change; they are dimensions of current intervention designs and will be increasingly relevant in token design.
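The escrow mechanism described above can be sketched as a toy simulation. This is illustrative only, not real smart-contract code: the deposit amount and penalty ratio are exactly the hypothetical design parameters the questions above ask about.

```python
# Illustrative sketch of an escrow-and-slashing incentive, assuming a simple
# proportional-penalty rule. In a real deployment this logic would live in a
# smart contract; here it is modeled as a plain Python class.

class EscrowAccount:
    def __init__(self, deposit: float, penalty_ratio: float):
        self.balance = deposit          # stake locked up front by the shipper
        self.penalty_ratio = penalty_ratio  # punishment per unit of harm caused

    def report_infraction(self, harm: float) -> float:
        """Slash part of the stake in proportion to the harm; return the amount."""
        penalty = min(self.balance, harm * self.penalty_ratio)
        self.balance -= penalty
        return penalty

# A shipper stakes 1000; passing on a faulty shipment worth 100 costs them
# twice the harm under this (hypothetical) 2:1 penalty ratio.
escrow = EscrowAccount(deposit=1000.0, penalty_ratio=2.0)
slashed = escrow.report_infraction(harm=100.0)
```

Choosing `deposit` and `penalty_ratio` is precisely the token-design question: too low and the nudge has no teeth, too high and honest actors are priced out of the ecosystem.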
The point of this is to demonstrate that even “benign” applications of the blockchain, like supply chain management, have behavior change implications and thus require good due diligence in token design.
There is a lot that could be said about the validation function of this process: who validates that the bad behavior has taken place and should be punished, or that good behavior should be rewarded? There are lessons to be learned from results-based contracting and the role of the validator in such a contracting vehicle. This “validating” function will need to be thought out in terms of what can be automated and what needs a “human touch” (and who is responsible, what methods they should use, etc.).
Implications for MERL
If tokens are fundamental to MERL strategies for blockchain interventions, there are several critical implications:
MERL practitioners will need to be heavily integrated into the due diligence processes and tools for token design
MERL strategies will need to be highly formative, if not developmental, in facilitating the timeliness and overall effectiveness of the feedback loops informing token design
New thinking and tools will need to be developed to assess the relationships between blockchain governance, token design and mechanisms of change in the resulting social ecosystem.
The opportunity cost for impact and “learning” could go up the less MERL practitioners are integrated into the due diligence of token design. This is because the costs to adapt token design are relatively low compared to current social interventions, partly due to the ability to integrate automated feedback.
Blockchain-based interventions present us with significant learning opportunities, given our ability to use the technology itself as a data collection and management tool for learning about what does and does not work. Feedback from an appropriate MERL strategy could inform decisions about token design that are coded into the token on an iterative basis. For example, as stakeholders’ incentives shift (e.g. supply chain shippers incur new costs and their value proposition changes), token adaptation can respond in a timely fashion, so long as the MERL feedback informing the token design is accurate.
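The iterative feedback loop described above can be sketched as a toy simulation. Everything here is hypothetical: the reward value, target uptake rate, and adjustment step stand in for whatever incentive dimensions a real token design would expose to MERL feedback.

```python
# Hypothetical sketch of a MERL feedback loop adjusting a token incentive.
# If observed uptake of the desired behavior falls short of a target, the
# reward per action is nudged upward on the next iteration, and vice versa.

def adapt_reward(reward: float, observed_rate: float, target_rate: float,
                 step: float = 0.1) -> float:
    """One iteration of incentive adaptation driven by MERL feedback."""
    gap = target_rate - observed_rate
    # Nudge the reward up when uptake is below target, down when above.
    return max(0.0, reward * (1 + step) if gap > 0 else reward * (1 - step))

reward = 10.0
for observed in [0.40, 0.55, 0.72]:  # behavior uptake measured each MERL cycle
    reward = adapt_reward(reward, observed, target_rate=0.70)
```

The point of the sketch is the low cost of adaptation: because the incentive is a parameter rather than a fixed program design, each MERL measurement cycle can feed directly back into the token.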
There is a need to determine which components of these feedback loops can be completed by automated functions and which require a “human touch.” For example, which dimensions of token design can be informed by smart infrastructure (e.g. temperature gauges on shipping containers in the supply chain) versus household surveys completed by enumerators? This will be a task to complete and iteratively improve, starting with initial token design and lasting through the lifecycle of the intervention.
Token design dimensions, outlined in the Token Utility Canvas, and the associated decision-making will need to result in MERL questions correlated to the best strategy to answer them, automated or human, much the same as we do now in current MERL practice.
Many of our current due diligence tools used in both intervention and evaluation design (stakeholder mapping, problem analysis, cost-benefit analysis, value propositions, etc.) will need to be adapted to the types of relationships found within a tokenized ecosystem. These include the relationships of influence within the social ecosystem as well as with the blockchain ledger itself (or more specifically the governance of that ledger), as demonstrated in Figure #1.
This could be our biggest priority as MERL practitioners. While blockchain interventions could create incredible opportunities for social experimentation, the need for human-centered due diligence (incentivizing humans toward positive behavior change) in token design is critical. Over-reliance on technology to drive social outcomes is a well-evidenced opportunity cost that could be avoided in blockchain-based solutions if the gap between technologists, social scientists, and practitioners can be bridged.
Written by Jana Melpolder, MERL Tech DC Volunteer and former ICT Works Editor. Find Jana on Twitter: @JanaMelpolder
As organizations grow, they become increasingly aware of how important MERL (Monitoring, Evaluation, Research, and Learning) is to their international development programs. To meet this challenge, new hires need to be brought on board, but more importantly, changes need to happen in the organization’s culture.
How can nonprofits and organizations change to include more MERL? Friday afternoon’s MERL Tech DC session “Creating a MERL Culture at Your Nonprofit” set out to answer that question. Representatives from Salesforce.org and Samaschool.org were part of the discussion.
Salesforce.org staff members Eric Barela and Morgan Buras-Finlay emphasized that their organization has set aside resources (financial and otherwise) for international and external M&E. “A MERL culture is the foundation for the effective use of technology!” shared Eric Barela.
Data is a vital part of MERL, but those providing it to organizations often need to “hold the hands” of those on the receiving end. What is especially vital is helping people understand this data and gain deeper insight from it. It’s not just about the numbers – it’s about what is meant by those numbers and how people can learn and improve using the data.
According to Salesforce.org, an organization’s MERL culture is comprised of its understanding of the benefit of defining, measuring, understanding, and learning for social impact with rigor. And building or maintaining a MERL culture doesn’t just mean letting the data team do whatever they like or being the ones in charge. Instead, it’s vital to focus on outcomes. Salesforce.org discussed how its MERL staff prioritize keeping a foot in the door in many places and meeting often with people from different departments.
Where does technology fit into all of this? According to Salesforce.org, the push is on to keep the technology ethical. Morgan Buras-Finlay described it well: “technology goes from building a useful tool to a tool that will actually be used.”
Another participant on Friday’s panel was Samaschool’s Director of Impact, Kosar Jahani. Samaschool describes itself as a San Francisco-based nonprofit focused on preparing low-income populations to succeed as independent workers. The organization has “brought together a passionate group of social entrepreneurs and educators who are reimagining workforce development for the 21st century.”
Samaschool creates a MERL culture through Learning Calls for their different audiences and funders. These Learning Calls are done regularly, they have a clear agenda, and sometimes they even happen openly on Facebook LIVE.
By ensuring a high level of transparency, Samaschool is also aiming to create a culture of accountability where it can learn from failures as well as successes. By using social media, doors are opened and people gain easier access to information that would otherwise be difficult to obtain.
Kosar explained a few negative aspects of this kind of transparency, saying that there is a risk to putting information in such a public place to view. It can lead to lost future investment. However, the organization feels this has helped build relationships and enhanced interactions.
Sadly, flight delays prevented a third organization, Big Elephant Studios and its founder Andrew Means, from attending MERL Tech. Luckily, his slides were presented by Eric Barela. Andrew’s slides highlighted the following three things needed to create a MERL culture:
Tools – investments in tools that help an organization acquire, access, and analyze the data it needs to make informed decisions
Processes – Investments in time to focus on utilizing data and supporting decision making
Culture – Organizational values that ensure that data is invested in, utilized, and listened to
One of Andrew’s main points was that generally, people really do want to gain insight and learn from data. The other members of the panel reiterated this as well.
A few lingering questions from the audience included:
How do you measure how culture is changing within an organization?
How does one determine if an organization’s culture is more focused on MERL than previously?
Which social media platforms and strategies can be used to create a MERL culture that provides transparency to clients, funders, and other stakeholders?
What about you? How do you create and measure the “MERL Culture” in your organization?
When seeking information for a project baseline, midline, endline, or anything in between, it has become second nature to budget for collecting (or commissioning) primary data ourselves.
Really, it would be more cost- and time-effective for all involved if we got better at asking peers in the space for already-existing reports or datasets. This is also an area where our donors – particularly those with large country portfolios – could help with introductions and matchmaking.
Consider the Public Option
And speaking of donors as a second point – why are we implementers responsible for collecting MERL relevant data in the first place?
For example, one DFID Country Office we worked with noted that a lack of solid population and demographic data limited their ability to monitor all DFID country programming. As a result, DFID decided to co-fund the country’s first census in 30 years – which benefited DFID and non-DFID programs.
The term “country systems” can sound a bit esoteric, pretty OECD-like – but it really can be a cost-effective public good, if properly resourced by governments (or donor agencies), and made available.
Flip the Paradigm
And finally, a third way to get more bang for our buck is – ready or not – Results Based Financing, or RBF. RBF is coming (and, for folks in health, it’s probably arrived). In an RBF program, payment is made only when pre-determined results have been achieved and verified.
But another way to think about RBF is as an extreme paradigm shift of putting M&E first in program design. RBF may be the shake-up we need, in order to move from monitoring what already happened, to monitoring events in real-time. And in some cases – based on evidence from World Bank and other programming – RBF can also incentivize data sharing and investment in country systems.
Ultimately, the goal of MERL should be using data to improve decisions today. Through better sharing, systems thinking, and (maybe) a paradigm shake-up, we stand to gain a lot more mileage with our 3%.
As we all know, big data and data science are becoming increasingly important in all aspects of our lives, and there is similarly rapid growth in the applications of big data to the design and implementation of development programs. Examples range from the use of satellite images and remote sensors in emergency relief and the identification of poverty hotspots, to the use of mobile phones to track migration and estimate changes in income (by tracking airtime purchases), social media analysis to track sentiment and predict increases in ethnic tension, and the use of smartphones and Internet of Things (IoT) devices to monitor health through biometric indicators.
Despite the rapidly increasing role of big data in development programs, there is speculation that evaluators have been slower to adopt big data than have colleagues working in other areas of development programs. Some of the evidence for the slow take-up of big data by evaluators is summarized in “The future of development evaluation in the age of big data”. However, there is currently very limited empirical evidence to test these concerns.
To try to fill this gap, my colleagues Rick Davies and Linda Raftree and I would like to invite those of you who are interested in big data and/or the future of evaluation to complete the attached survey. This survey, which takes about 10 minutes to complete, asks evaluators to report on the data collection and analysis techniques they use in the evaluations they design, manage, or analyze, while at the same time asking data scientists how familiar they are with evaluation tools and techniques.
The survey was originally designed to obtain feedback from participants in the MERL Tech conferences on “Exploring the Role of Technology in Monitoring, Evaluation, Research and Learning in Development” that are held annually in London and Washington, DC, but we would now like to broaden the focus to include a wider range of evaluators and data scientists.
One of the ways in which the findings will be used is to help build bridges between evaluators and data scientists by designing integrated training programs for both professions that introduce the tools and techniques of both conventional evaluation practice and data science, and show how they can be combined to strengthen both evaluations and data science research. “Building bridges between evaluators and big data analysts” summarizes some of the elements of a strategy to bring the two fields closer together.
The findings of the survey will be shared through this and other sites, and we hope this will stimulate a follow-up discussion. Thank you for your cooperation and we hope that the survey and the follow-up discussions will provide you with new ways of thinking about the present and potential role of big data and data science in program evaluation.
This year at MERL Tech DC, in addition to the regular conference on September 6th and 7th, we’re offering two full-day, in-depth workshops on September 5th. Join us for a deeper look into the possibilities and pitfalls of Blockchain for MERL and Big Data for Evaluation!
What can Blockchain offer MERL? with Shailee Adinolfi, Michael Cooper, and Val Gandhi, co-hosted by Chemonics International, 1717 H St. NW, Washington, DC 20016.
Tired of the blockchain hype, but still curious on how it will impact MERL? Join us for a full day workshop with development practitioners who have implemented blockchain solutions with social impact goals in various countries. Gain knowledge of the technical promises and drawbacks of blockchain technology as it stands today and brainstorm how it may be able to solve for some of the challenges in MERL in the future. Learn about ethical design principles for blockchain and how to engage with blockchain service providers to ensure that your ideas and programs are realistic and avoid harm. See the agenda here.
Big Data and Evaluation with Michael Bamberger, Kerry Bruce and Peter York, co-hosted by the Independent Evaluation Group at the World Bank – “I” Building, Room: I-1-200, 1850 I St NW, Washington, DC 20006
Join us for a one-day, in-depth workshop on big data and evaluation where you’ll get an introduction to Big Data for Evaluators. We’ll provide an overview of applications of big data in international development evaluation, discuss ways that evaluators are (or could be) using big data and big data analytics in their work. You’ll also learn about the various tools of data science and potential applications, as well as run through specific cases where evaluators have employed big data as one of their methods. We will also address the important question as to why many evaluators have been slower and more reluctant to incorporate big data into their work than have their colleagues in research, program planning, management and other areas such as emergency relief programs. Lastly, we’ll discuss the ethics of using big data in our work. See the agenda here!
Our team had the opportunity to enjoy a range of talks at the first ever MERL Tech in Johannesburg. Here are some of their key learnings:
During “Designing the Next Generation of MERL Tech Software” by Mobenzi’s CEO Andi Friedman, we were challenged to apply design thinking techniques to critique both our own and our partners’ current projects. I have previously worked on an educational tool aimed at improving the quality of learning for students based in a disadvantaged community in the Eastern Cape, South Africa. I learned that language barriers are a serious concern when it comes to effectively implementing a new tool.
We mapped out a visual representation of solving a communication issue that one of the partners had with an educational programme implemented in the rural Eastern Cape, which included drawing various shapes on paper. Our solution was to replace the posters that had written instructions with clear visuals the students were familiar with, inspired by the idea that visuals resonate with people more than words.
-Perez Mnkile, Project Manager
I really enjoyed the presentation on video metrics from Girl Effect’s Amy Green. She spoke to us about video engagement on Hara Huru Dara, a vlog series featuring social media influencers. What I found really interesting is how hard it is to measure impact or engagement. Different platforms (YouTube vs Facebook) have different definitions for various measurements (e.g. views) and also use a range of algorithms to reach these measurements. Her talk really helped me understand just how hard MERL can be in a digital age! As our projects expand into new technologies, I’ll definitely be more aware of how complicated seemingly simple metrics (for example, views on a video) may be.
-Jessica Manim, Project Manager
Get it right by getting it wrong: embracing failure as a tool for learning and improvement was a theme visible throughout the two-day MERL Tech conference. One session highlighting this theme was conducted by Annie Martin, a Research Associate at Akros, who explored challenges in offline data capture.
She referenced a project in Zambia to track participants in an HIV prevention program, highlighting some of the technical challenges the project faced along the way. The project involved equipping field workers with an Android tablet and an application for capturing data offline and syncing it when connectivity was available. A number of bugs caused by insufficient user testing, along with server hosting issues, meant field workers often could not send data or create user IDs.
The lesson, which I believe we strive to include in our development processes, is to focus on iterative piloting, testing, and learning before deployment. This doesn’t guarantee a bug-free system or service, but it does focus our attention on the needs, expectations, and requirements of end users and stakeholders.
-Neville Tietz, Service Designer
Sometimes, we don’t fully understand the problems we are trying to solve. Siziwe Ngcwabe from the African Evidence Network gave the opening talk on evidence-based work, which showed me the importance of fully understanding the problem we are solving and identifying the markers of success or failure before we start rolling out solutions. Once we have established all this, we can create effective solutions. Rachel Sibande from DIAL gave a talk on how her organisation is now using data from mobile network providers to anticipate how a disease outbreak will spread, based on the movement patterns of the network’s subscribers. Using this data, they can advise ministries to run campaigns in certain areas and increase medical supplies in others. Rachel’s talk really showed me how easy it is to create an effective solution once you fully understand the problem.
-Katlego Maakane, Project Manager
I really enjoyed the panel discussion on Datafication Discrimination with William Bird, Director of Media Monitoring Africa; Richard Gevers, Director of Open Data Durban; and Koketso Moeti, Executive Director of amandla.mobi, moderated by Siphokazi Mthathi, Executive Director of Oxfam South Africa. The mass collection of data can potentially be used to further discriminate against communities, especially when they are not aware of what their data will be used for. For example, information about sexuality can be used to target individuals at a time when anti-discrimination laws are being rapidly reversed in many countries.
I also thought it was interesting how projection models for population movement and the planning of new residential development and public infrastructure in South African cities are flawed, since the development of these models is outsourced by government to the private sector and different government departments often use different forecasts. Essentially, the various government departments are all planning cities with different projections, further preventing the poorest people from accessing quality services and infrastructure.
For me this session really highlighted the responsibility we have when collecting data in our projects from vulnerable individuals and that we have to ensure that we interrogate what we intend to use this data for. As part of our process, we must investigate how the data could potentially be exploited. We need to empower people to take control of the information they share and be able to make decisions in their best interest.
Enabling trust in an efficient manner is the primary innovation that the blockchain delivers, through the use of cryptography and consensus algorithms. Trust is usually built through painstaking relationship-building that requires iterative interactions. The blockchain alleviates the need for many of the resources required to build this trust, but that does not mean stakeholders will automatically trust a blockchain application. Any blockchain application will still need trust-building mechanisms, and MEL practitioners are uniquely situated to inform how these trust relationships can mature.
Function of trust in the blockchain
Trust is expensive. You pay fees to banks, who provide confidence to sellers who take your debit card as payment and trust that they will receive funds for the transaction. Agriculture buyers pay fees to third parties (who can certify that produce is organic, etc.) to validate quality control on products coming through the value chain. Often sellers do not see the money from a debit card transaction in their accounts immediately, and agriculture actors perpetually face the pressures of being paid for goods and/or services they provided weeks earlier. The blockchain could alleviate many of these harmful effects by substituting trust in humans with trust in math.
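To make “trust in math” concrete, here is a toy sketch (not a real blockchain implementation, and not from the original post) of the core idea: each record is cryptographically linked to the one before it, so any retroactive tampering is detectable by anyone who recomputes the hashes, with no certifying middleman required. The record fields are invented for illustration.

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents (sorted keys for stability)."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record):
    """Append a record, linking it to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev})
    return chain

def verify(chain):
    """Recompute every link; a tampered block breaks the chain after it."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

# A hypothetical value-chain ledger: produce certified at two handoffs.
chain = []
append_block(chain, {"seller": "co-op", "amount_kg": 120, "grade": "organic"})
append_block(chain, {"buyer": "exporter", "amount_kg": 120, "grade": "organic"})
assert verify(chain)  # untampered ledger checks out

# Retroactively altering an earlier record invalidates the chain.
chain[0]["record"]["grade"] = "conventional"
assert not verify(chain)  # the math, not a paid third party, flags the change
```

The detection comes purely from recomputation, which is the cost-saving point: validation that previously required a fee-charging certifier becomes a calculation anyone can run.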
We pay these third parties because they are trusted agents, but trusted agents can be destructive rent-seekers at times, creating profits that add no value to the goods and services they handle. End users in these transactions are used to standard payment services for utility bills, school fees, etc. This history of iterative transactions has resulted in a level of trust in these processes. It may not be equitable, but it is what many are used to, and introducing an innovation like blockchain will require an understanding of how these processes influence stakeholders, what their needs are, and how they might be nudged to trust something different like a blockchain application.
How MEL can help understand and build trust
Just as microfinance introduced new methods of sending/receiving money and access to new financial services that required piloting different possible solutions to build this understanding, so will blockchain applications. This is an area where MEL can add value to achieving mass impact, by designing the methods to iteratively build this understanding and test solutions.
MEL has done this before. Any project that requires relationship building should be based on understanding the mindset and incentives for relevant actions (behavior) amongst stakeholders to inform the design of the “nudge” (the treatment) intended to shift behavior.
Many of the programs we work on as MEL practitioners involve various forms and levels of relationship building, which is essentially “trust”. There have been many evaluations of relationship building, whether in microfinance, agriculture value chains, or policy reform. In each case, “trust” must be defined as a behavior change outcome that is “nudged” based on the framing (mindset) of the stakeholder. This means that each stakeholder, depending on their mindset and the behavior required to facilitate blockchain uptake, will require a customized nudge.
The role of trust in project selection and design: What does that mean for MEL
Defining “trust” should begin during project selection/design. Project selection and design criteria/due diligence are invaluable for MEL. Many of the dimensions of evaluability assessments refer back to the work that is done in the project selection/design phase (which is why some argue evaluability assessments are essentially project design tools). When it comes to blockchain, the USAID Blockchain Primer provides some of the earliest thinking on how to select and design blockchain projects, hence it is a valuable resource for MEL practitioners who want to start thinking about how they will evaluate blockchain applications.
What should we be thinking about?
Relationship building and trust are behaviors, so blockchain theories of change should state outcomes as behavior changes by specific stakeholders (hence the value of tools like stakeholder analysis and outcome mapping). However, these Theories of Change (TOC) are only as good as what informs them, so building a knowledge base of blockchain applications, as well as previous lessons learned from evidence on relationship building and trust, will be critical to developing a MEL strategy for blockchain applications.
Michael Cooper is a former Associate Director at Millennium Challenge Corporation and the U.S. State Dept in Policy and Evaluation. He now heads Emergence, a firm that specializes in MEL and Blockchain services. He can be reached at firstname.lastname@example.org or through the Emergence website.
Technology solutions in development contexts can be runaway trains of optimistic thinking. Remember the PlayPump, a low-technology solution meant to provide communities with clean water as children play? Or the Soccket, the soccer ball that was going to help kids learn to read at night? I am not disparaging these good intentions, but the need to learn from the evidence of past failure is widely recognized. When it comes to the blockchain, possibly the biggest technological innovation on the social horizon, the learning captured in guidance like the Principles for Digital Development or Blockchain Ethical Design Frameworks needs to be integrated not only into the design of blockchain applications but also into how MEL practitioners assess that integration and test solutions. Data-driven feedback from MEL will help inform the maturation of human-centered blockchain solutions and mitigate the endless, pointless pilots that exhaust the political will of good-natured partners and create barriers to sustainable impact.
The Blockchain is new but we have a head start in thinking about it
The blockchain is an innovation, and it should be evaluated as such. True, the blockchain could be revolutionary in its impact. And yes, this potential could grease the wheels of the runaway-train thinking referenced above, but it does not moot the evidence we have around evaluating innovations.
Keeping the risk of the runaway train at bay includes MERL practitioners working with stakeholders to ask: is blockchain the right approach for this at all? Only after determining the competitive advantage of a blockchain solution over other possible solutions should MEL practitioners work with stakeholders to finalize the design of the initial pilot. The USAID Blockchain Primer is the best early thinking about this process and the criteria involved.
Michael Quinn Patton and others have developed an expanded toolkit for MERL practitioners to best unpack the complexity of a project and design a MERL framework that responds to the decision making requirements on the scale up pathway. Because the blockchain is an innovation, which by definition means there is less evidence on its application but great potential, it will require MEL frameworks that iteratively test and modify applications to inform the scale up pathway.
The Principles for Digital Development highlight the need for iterative learning in technology driven solutions. The overlapping regulatory, organizational and technological spheres further assist in unpacking the complexity using tools like Problem Driven Iterative Adaptation (PDIA) or other adaptive management frameworks that are well suited to testing innovations in each sphere.
How Blockchain is different: Intended Impacts and Potential Spoilers
There will be intended and unintended outcomes from blockchain applications that MEL should account for. General intended outcomes include increased access to services and overall cost savings, while “un-intended” outcomes include the creation of winners and losers.
The primary intended outcomes that could be expected from blockchain applications are cost savings (by cutting out intermediaries), which result in increased access to a given service or product (assuming any cost savings are re-invested in expanding access), or an increase in access from creating a service where none existed before (for example, access to banking services for rural populations). Hence, methods already used for measuring these kinds of cost savings and increased access could be applied with modification.
However, the blockchain will be disruptive, and when I say “un-intended” (using quotation marks) I do so because the cost savings from blockchain applications result from alleviating the need for some intermediaries or middlemen. These middlemen are third parties, sometimes rent-seekers, who provide a validation, accreditation, certification, or other service meant to communicate trust. For example, with M-Pesa, loans and other banking services were expanded to new populations. With a financial inclusion blockchain project, these same services could be accessed by the same population but without the need for a bank, hence incurring a cost savings. However, as is well known from many a policy reform intervention, creating efficiencies usually means creating losers, and in our example the losers are those previously offering the services that the blockchain makes more efficient.
The blockchain can facilitate efficiencies, not the elimination of all intermediary functions. With the introduction of any innovation, new functions will emerge as old functions are mooted. For example, M-Pesa faced substantial barriers in its early development until it began working with kiosk owners who, once trained, could demonstrate and explain M-Pesa to customers. Hence careful, iterative assessment of the ecosystem (similar to value chain mapping) to identify mooted functions (losers) and new functions (winners) is critical.
MERL practitioners have a value add in mitigating the negative effects from the creation of losers, who could become spoilers. MERL practitioners have many analytical tools/skills that can not only help in identifying the potential spoilers (perhaps through various outcome mapping and stakeholder analysis tools) but also in mitigating any negative effects (creating user personas of potential spoilers to better assess how to incentivize targeted behavior changes). Hence MEL might be uniquely placed to build a broader understanding amongst stakeholders on what the blockchain is, what it can offer and how to create a learning framework that builds trust in the solution.
Trust, the real innovation of blockchain
MERL is all about behavior change: no matter the technology or process innovation, it requires uptake, and uptake requires behavior. Trust is a behavior. You trust that when you put your money in a bank it will be available when you want to use it. Without this behavior, stemming from a belief, there are runs on banks, which in turn fail, further eroding trust in the banking system. The same could be said for paying money to a water or power utility and expecting that it will provide service. The more use, the more a relationship matures into a trustful one. But it does not take much to erode this trust even after the relationship is established; again, think about how easy it is to cause a run on a bank or to stop using a service provider.
The real innovation of the blockchain is that it replaces the need for trust in humans (whether an individual or a system of organizations) with trust in math. Just as any entity needs to build a relationship of trust with its targeted patrons, so will the blockchain have to develop a relationship of trust not only with end users but with those in the ecosystem who could influence the impact of the blockchain solution, including beneficiaries and potential losers/spoilers. This brings us back to the importance of understanding who these stakeholders are, how they will interact with and influence the blockchain, and their perspectives, needs, and capacities.
MERL practitioners who wish to use blockchain will need to pick up the latest thinking in behavioral sciences to understand this “trust” factor for each stakeholder and integrate it into an adaptive management framework. The next blog in this series will go into further detail about the role of “trust” when evaluating a blockchain application.
The Blockchain is different — don’t throw the baby out with the bath water
There will inevitably be mountains of pressure to go “full steam ahead” (part of me wants to add “and damn the consequences”) without sufficient data-driven due diligence and ethical review, since blockchain is the next new shiny thing. MERL practitioners should not only be aware of this unfortunate certainty but also proactively consider their own informed strategy for responding to this pressure. MERL practitioners are uniquely positioned to advocate for data-driven decision making and to provide the data necessary to steer clear of misapplications of blockchain solutions. There are already great resources for MEL practitioners on the ethical criteria and design implications of blockchain solutions.
The potential impact of blockchain is still unknown but if current thinking is to be believed, the impact could be paradigm shifting. Given this potential, getting the initial testing right to maximize learning will be critical to cultivating the political will, the buy-in, and the knowledge base to kick start something much bigger.