All posts by Guest Post

How can MERL inform maturation of the blockchain?

by Mike Cooper

This is the third in a series of blogs aimed at discussing and soliciting feedback on how the blockchain can benefit MEL practitioners in their work. The series includes: What does Blockchain Offer to MERL, Blockchain as an M&E Tool, this post, and future posts on evaluating for trust in Blockchain applications and integrating blockchain into MEL practices. The series leads into a MERL Tech Pre-Workshop on September 5th, 2018 in Washington, D.C. that will go into depth on possibilities and examples of MEL blockchain applications. Register here!

Technology solutions in development contexts can be runaway trains of optimistic thinking. Remember the PlayPump, a low-technology solution meant to provide communities with clean water as children play? Or the Soccket, the energy-generating soccer ball that was going to power lamps so kids could read at night? I am not disparaging these good intentions, but the need to learn from the evidence of past failures is widely recognized. When it comes to the blockchain, possibly the biggest technological innovation on the social horizon, the learning captured in guidance like the Principles for Digital Development or Blockchain Ethical Design Frameworks needs to be integrated not only into the design of blockchain applications but also into how MEL practitioners assess that integration and test solutions. Data-driven feedback from MEL will help inform the maturation of human-centered blockchain solutions and mitigate the endless, pointless pilots that exhaust the political will of good-natured partners and create barriers to sustainable impact.

The Blockchain is new but we have a head start in thinking about it

The blockchain is an innovation, and it should be evaluated as such. True, the blockchain could be revolutionary in its impact, and yes, this potential could grease the wheels of the runaway-train thinking referenced above. But this potential does not negate the evidence we already have about evaluating innovations.

Keeping the risk of the runaway train at bay includes MERL practitioners working with stakeholders to ask: is blockchain the right approach for this at all? Only after determining the competitive advantage of a blockchain solution over other possible solutions should MEL practitioners work with stakeholders to finalize the design of initial pilots. The USAID Blockchain Primer is the best early thinking about this process and the criteria involved.

Michael Quinn Patton and others have developed an expanded toolkit for MERL practitioners to unpack the complexity of a project and design a MERL framework that responds to the decision-making requirements of the scale-up pathway. Because the blockchain is an innovation, which by definition means there is less evidence on its application but great potential, it will require MEL frameworks that iteratively test and modify applications to inform the scale-up pathway.

The Principles for Digital Development highlight the need for iterative learning in technology-driven solutions. The overlapping regulatory, organizational, and technological spheres further assist in unpacking the complexity using tools like Problem Driven Iterative Adaptation (PDIA) or other adaptive management frameworks well suited to testing innovations in each sphere.

How Blockchain is different: Intended Impacts and Potential Spoilers

There will be intended and unintended outcomes from blockchain applications that MEL should account for. General intended outcomes include increased access to services and overall cost savings, while “un-intended” outcomes include the creation of winners and losers.

The primary intended outcome that could be expected from blockchain applications is an increase in cost savings (by cutting out intermediaries), which results in increased access to a service or product (assuming any cost savings are re-invested in expanding access). Another is increased access that results from creating a service where none existed before (for example, access to banking services for rural populations). Hence existing methods for measuring these types of cost savings and increased access could be applied with modification.

However, the blockchain will be disruptive, and when I say “un-intended” (using quotation marks) I do so because the cost savings from blockchain applications result from alleviating the need for some intermediaries or middlemen. These middlemen are third parties who may be some form of rent-seeker, providing a validation, accreditation, certification, or other service meant to communicate trust. For example, with M-Pesa, loans and other banking services were expanded to new populations. With a financial inclusion blockchain project, these same services could be accessed by the same population but without the need for a bank, hence incurring a cost savings. However, as is well known from many a policy reform intervention, creating efficiencies usually means creating losers, and in our example the losers are those previously offering the services that the blockchain makes more efficient.

The blockchain can facilitate efficiencies, not the elimination of all intermediary functions. With the introduction of any innovation, new functions will emerge as old functions are made obsolete. For example, M-Pesa experienced substantial barriers in its early development until it began working with kiosk owners who, after being trained, could demonstrate and explain M-Pesa to customers. Hence careful iterative assessment of the ecosystem (similar to value chain mapping) to identify obsolete functions (losers) and new functions (winners) is critical.

MERL practitioners have a value add in mitigating the negative effects of creating losers, who could become spoilers. MERL practitioners have many analytical tools and skills that can help not only in identifying potential spoilers (perhaps through outcome mapping and stakeholder analysis tools) but also in mitigating negative effects (for example, creating user personas of potential spoilers to better assess how to incentivize targeted behavior changes). Hence MEL might be uniquely placed to build a broader understanding amongst stakeholders of what the blockchain is, what it can offer, and how to create a learning framework that builds trust in the solution.

Trust, the real innovation of blockchain

MERL is all about behavior change, because no matter the technology or process innovation, it requires uptake, and uptake requires behavior. Trust is a behavior: you trust that when you put your money in a bank, it will be available when you want to use it. Without this behavior, stemming from a belief, there are runs on banks, which in turn fail, which further erodes trust in the banking system. The same could be said for paying money to a water or power utility and expecting that it will provide service. The more use, the more a relationship matures into a trusting one. But it does not take much to erode this trust even after the relationship is established; again, think about how easy it is to cause a run on a bank or to stop using a service provider.

The real innovation of the blockchain is that it replaces the need for trust in humans (whether an individual or a system of organizations) with trust in math. Just as any entity needs to build a relationship of trust with its targeted patrons, so will the blockchain have to develop a relationship of trust not only with end users but with those within the ecosystem who could influence the impact of the blockchain solution, including beneficiaries and potential losers/spoilers. This brings us back to the importance of understanding who these stakeholders are, how they will interact with and influence the blockchain, and their perspectives, needs, and capacities.

MERL practitioners who wish to use blockchain will need to pick up the latest thinking in behavioral sciences to understand this “trust” factor for each stakeholder and integrate it into an adaptive management framework.  The next blog in this series will go into further detail about the role of “trust” when evaluating a blockchain application.  

The Blockchain is different — don’t throw the baby out with the bath water

There will inevitably be mountains of pressure to go “full steam ahead” (part of me wants to add “and damn the consequences”) without sufficient data-driven due diligence and ethical review, since blockchain is the next new shiny thing. MERL practitioners should not only be aware of this unfortunate certainty but also proactively consider their own informed strategy for responding to this pressure. MERL practitioners are uniquely positioned to advocate for data-driven decision making and to provide the data necessary to steer clear of misapplications of blockchain solutions. There are already great resources for MEL practitioners on the ethical criteria and design implications for blockchain solutions.

The potential impact of blockchain is still unknown but if current thinking is to be believed, the impact could be paradigm shifting.  Given this potential, getting the initial testing right to maximize learning will be critical to cultivating the political will, the buy-in, and the knowledge base to kick start something much bigger.  

If you’d like to discuss this and related aspects, join us on September 5th in Washington, DC, for a one-day workshop on “What can the blockchain offer MERL?”

Michael Cooper is a former Associate Director at Millennium Challenge Corporation and the U.S. State Dept in Policy and Evaluation.  He now heads Emergence, a firm that specializes in MEL and Blockchain services. He can be reached at emergence.cooper@gmail.com or through the Emergence website.

Blockchain as an M&E Tool

by Mike Cooper and Shailee Adinofi

This is the second in a series of blogs aimed at discussing and soliciting feedback on how the blockchain can benefit MEL practitioners in their work.  The series includes: What does Blockchain Offer to MERL, this post (Blockchain as an M&E Tool), and future posts on the use of MEL to inform Blockchain maturation, evaluating for trust in Blockchain applications, and integrating  blockchain into MEL Practices. The series leads into a MERL Tech Pre-Workshop on September 5th, 2018 in Washington D.C.  that will go into depth on possibilities and examples of MEL blockchain applications. Register here!

Introducing the Blockchain as an M&E Tool   

Blockchain is a technology that could transform many of the functions we now take for granted in our daily lives. It could change everything from supply chain management to trade to the Internet of Things (IOT), and possibly even serve as the backbone for the next evolution of the internet itself.  Within international development there have already been blockchain pilots for refugee assistance and financial inclusion (amongst others) with more varied pilots and scaled applications soon to come.

Technological solutions, however, need uptake in order for their effects to be truly known. This is no different for the blockchain. Technology solutions are not self-implementing — their uptake is dependent on social structures and human decision making. Hence, while on paper the blockchain offers many benefits, realizing these benefits in the monitoring, evaluation and learning (MEL) space requires working closely with MEL practitioners to hear their concerns, excitement, and feedback on how the blockchain can best produce these benefits.

Blockchain removes intermediaries, thus increasing integrity

The blockchain is a data management tool for achieving data integrity, transparency, and addressing privacy concerns. It is a distributed software network of peer-to-peer transactions (data), which are validated through consensus, using pre-established rules. This can remove the need for a middleman or “intermediaries”, meaning that it can “disintermediate” the holders of a traditional MEL database, where data is stored and owned by a set of actors.  

Hence the blockchain solves two primary problems:

  1.   It reduces the need for “middlemen” (intermediaries) because it is peer-to-peer in nature.  For MEL, the blockchain may thus reduce the need for people to be involved in data management protocols, from data collection to dissemination, resulting in cost and time efficiencies.
  2.  The blockchain maintains data integrity (meaning that the data is immutable and is only shared in the intended manner) in a distributed peer-to-peer network where the reliability and trustworthiness of the network is inherent to the rules established in the consensus algorithms of the blockchain.  

So, what does this mean?  Simply put, a blockchain is a type of distributed immutable ledger or decentralized database that keeps continuously updated digital records of data ownership. Rather than having a central administrator manage a single database, a distributed ledger has a network of replicated databases, synchronized via the internet, and visible to anyone within the network (more on control of the network and who has access permissions below).
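For readers who want a concrete picture, the hash-linked ledger described above can be sketched in a few lines of Python. This is a toy illustration only, not a real blockchain: it is a single in-memory list with no peer-to-peer network or consensus layer, and the survey records are invented examples. It does, however, show the core idea that each block commits to the previous block's hash, so tampering with any earlier record is immediately detectable.

```python
import hashlib
import json

def block_hash(block):
    # Deterministic JSON serialization, then a SHA-256 fingerprint of the block.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, record):
    # Each new block stores the previous block's hash, linking the chain:
    # altering any earlier record invalidates every later link.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def verify(chain):
    # Recompute the hash links; any tampering breaks the chain.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
add_block(ledger, {"survey_id": 1, "village": "A", "respondents": 30})
add_block(ledger, {"survey_id": 2, "village": "B", "respondents": 25})
print(verify(ledger))                      # True: links are intact
ledger[0]["record"]["respondents"] = 300   # tamper with an earlier record
print(verify(ledger))                      # False: tampering detected
```

In a real distributed ledger, many nodes hold copies of this chain and a consensus algorithm decides which new blocks are accepted, which is what makes the records immutable in practice rather than just tamper-evident.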

Advantages over Current Use of Centralized Data Management  

Distributed ledgers are much less vulnerable than current centralized data management systems to loss of control over data integrity. Data integrity can be lost in numerous ways: hacking, manipulation, or other nefarious or accidental use. Consider the multiple cases of political manipulation of census data recorded in Poor Numbers: How We Are Misled by African Development Statistics and What to Do about It, which occurred because census instruments were designed, and census data analyzed and managed, in a centralized fashion with little to no transparency.

Likewise, within the field of evaluation there has been increasing attention to p-hacking, where statistical analyses are manipulated after the fact to produce results more favorable to the original hypothesis. Imagine if cleaned and anonymized data sets were put onto the blockchain, where transparency, without sacrificing PII, makes p-hacking much more difficult (perhaps resulting in increased trust in data sets and their overall utility and uptake).

Centralized systems can lose or compromise data (or access to it) due to computer malfunctions or what we call “process malfunctions,” where bureaucratic control over the data builds artificially high barriers to access and subsequent use by anyone outside the central sphere of control. This level of centralized control (as in the examples above regarding manipulation of census design/data and p-hacking) opens the door to data manipulation.

Computer malfunctions are mitigated by the blockchain because the data does not live in a central network hub but instead “lives” in copies of the ledger distributed across every computer in the network. This lack of central control increases transparency. “Hashing” (a form of version control) ensures that unauthorized manipulations are detectable and rejected by the network, meaning only a person with the necessary permissions can change the data on the chain. With the blockchain, access to information is as open, or closed, as is desired.

How can we use this technology in MEL?

All MEL data must eventually become digital, whether it is entered from paper surveys, passed through analytical software, or typed straight into an Excel cell, with varying forms and rigor of quality control. A benefit of blockchain is its compatibility with all digital data: it can include data files from all forms of data collection and analytical methods or software. Practitioners are free to collect data in whatever manner best suits their mandates, with the blockchain becoming the data management tool at any point after collection, since the data can be uploaded to the blockchain at any point. This means data can be loaded directly by enumerators in the field or after additional cleaning and analysis.

MEL has specific data management challenges that the blockchain seems uniquely suited to overcome, including: (1) protection of Personally Identifiable Information (PII) and data integrity, (2) mitigating data management resource requirements, and (3) lowering barriers to end use through timely dissemination and increased access to reliable data.

Let’s explore each of these below:

1. Increasing Protection and Integrity of Data: There might be a knee-jerk reaction against increasing transparency in evaluation data management, given the prevalence of personally identifiable information (PII) and other sensitive data. Meeting internal quality control procedures for developing and sharing draft results is usually a long, arduous process, even more so when delivering cleaned data sets. Hence there might be hesitation about introducing new data management techniques, given the priority placed on protecting PII balanced against the pressure to deliver data sets in a timely fashion.

However, we should learn a lesson from our counterparts in healthcare records management, one of the most PII-laden and sensitive data management fields in the world. The blockchain has seen piloting in healthcare records management precisely because it is able to secure the integrity of sensitive data so efficiently.

Imagine an evaluator completes a round of household surveys; the data is entered, cleaned, and anonymized, and the data files are ready to be sent to the receiver (funder, public data catalog, etc.). The funder requires that data be uploaded to the blockchain through a Smart Contract. Essentially, a Smart Contract is a set of “if…then” protocols on the Ethereum network (a specific type of blockchain) which can say, “if all data has been cleaned of PII and is appropriately formatted, it can be accepted onto the blockchain.” If the requirements written into the Smart Contract are not met, the data is rejected and not uploaded to the blockchain (see point 2 below). So, where proper procedures or best or preferred practices are not met, the data is not shared and remains safe within the confines of a (hopefully) secure and reliable centralized database.
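The “if…then” gatekeeping that such a Smart Contract performs can be sketched in ordinary Python. This is only an illustration of the logic, not a real contract: actual Smart Contracts run on-chain (e.g., written in Solidity on Ethereum), and the field names below (pii_removed, format, records) are hypothetical.

```python
# Toy sketch of Smart Contract gatekeeping: data that fails the
# contract's conditions is rejected and never reaches the chain.

REQUIRED_FORMAT = "csv"  # hypothetical formatting requirement

def accept_to_chain(dataset, chain):
    """Append the dataset to the chain only if it meets the contract's conditions."""
    if not dataset.get("pii_removed"):
        return False  # rejected: PII has not been cleaned out
    if dataset.get("format") != REQUIRED_FORMAT:
        return False  # rejected: wrong file format
    chain.append(dataset)  # all conditions met: accepted onto the chain
    return True

chain = []
clean = {"pii_removed": True, "format": "csv", "records": 120}
dirty = {"pii_removed": False, "format": "csv", "records": 80}
print(accept_to_chain(clean, chain))  # True: accepted
print(accept_to_chain(dirty, chain))  # False: rejected, stays off-chain
```

The point is that the conditions are enforced automatically by code rather than by a person remembering to check, which is where the protection against human error comes from.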

This example demonstrates one of the unsung values of the blockchain. When correctly done (meaning the Smart Contract is properly developed), it can ensure that only appropriate data is shared, and shared only with those meant to have it, in a manner where the data cannot be manipulated. This is an advantage over current practice, where human error can result in PII being released or unusable or incompatible data files being shared.

The blockchain also has inherent quality control protocols around version control that mitigate against manipulation of the data. A hash is in part a summary label for a data set on the blockchain, and any modification to the data set results in a different hash for that data set. Hence version control is automatic and easily tracked through the different hashes, which are one-way only (meaning that a hash cannot be reverse-engineered to recover or change the original data). Thus, all data on the blockchain is immutable.
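This one-way, tamper-evident property of hashing can be demonstrated with Python's standard hashlib module; the survey data below is invented purely for illustration.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 produces a fixed-length, one-way "fingerprint" of the data:
    # the hash cannot be reversed to recover or alter the original bytes.
    return hashlib.sha256(data).hexdigest()

original = b"household_id,income\n101,250\n102,310\n"
modified = b"household_id,income\n101,250\n102,910\n"  # one digit changed

h1 = fingerprint(original)
h2 = fingerprint(modified)
print(h1 == h2)                     # False: any edit yields a different hash
print(fingerprint(original) == h1)  # True: the same data always hashes the same
```

This is why version control falls out automatically: each version of a data set carries its own hash, and comparing hashes is enough to detect even a one-character change.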

2. Decreasing Data Management Resources: Current data management practice is very resource-intensive for MEL practitioners. Data entry, creation of data files, and similar tasks require ample time, mostly spent guarding against error, which introduces timeliness issues: processes take so long that the data loses its utility by the time it is “ready” for decision makers. A future post in this series will cover how the blockchain can introduce efficiencies at various points in the data management process (from collection to dissemination). There are many unknowns in this space that require further thinking, such as the ability to embed automated cleaning and/or analytical functions into the blockchain, or compatibility issues around data files and software applications (like Stata or NVivo). This series of posts will highlight broad areas where the blockchain can introduce the benefits of an innovation, as well as finer points that still need to be “unpacked” for the benefits to materialize.

3. Distributed ledger enables timely dissemination in a flexible manner: With the increased focus on the use of evaluation data, there has been a correlated increase in discussion of how evaluation data is shared.

Current data dissemination practices include:

  • depositing them with a data center, data archive, or data bank
  • submitting them to a journal to support a publication
  • depositing them in an institutional repository
  • making them available online via a project or institutional website
  • making them available informally between researchers on a peer-to-peer basis

All these avenues of dissemination are very resource-intensive. Each avenue has its own procedures, protocols, and other characteristics that may not be conducive to timely learning. Timelines for publishing in journals are long, with incentives toward publishing only positive results, contributing to dismal utilization rates. Likewise, many institutional evaluation catalogs are difficult to navigate, often incomplete, and generally not user-friendly. (We will look at query capabilities on the blockchain later in the blog series.)

Using the blockchain to manage and disseminate data could result in more timely and transparent sharing. Practitioners could upload data to the chain at any point after collection, and with the use of Smart Contracts, data can be widely distributed in a controlled manner. Data sets can be easily searchable and available in a much timelier and more user-friendly fashion to a much larger population. This creates the ability to share specific data with specific partners (funders, stakeholders, the general public) in a more automated fashion and on a timelier basis. Different Smart Contracts can be developed so that, for example, funders can see all data as soon as it is collected in the field, while a different Smart Contract with local officials allows them to see data relevant to their locality only after it is entered, cleaned, etc.

With the help of read/write permissions, anyone can control the extent to which data is shared. Data on the chain is immutable, meaning it cannot be changed (in contrast to current practice, where we hope a PDF is “good enough” to guard against modification, but most times data are pushed out in Excel sheets, or something similar, with no way to determine which is the “real” data when different versions appear).

Where are we?

We are in the early stages of understanding, developing and exploring the blockchain in general and with MEL in particular. On September 5th, we’ll be leading a day-long Pre-Conference Workshop on What Blockchain Can Do For MERL. The Pre-Conference Workshop and additions to this blog series will focus on how:

  • The blockchain can introduce efficiencies in MEL data management
  • The blockchain can facilitate “end use” whether it is accountability, developmental, formative, etc.
  • To work with MEL practitioners and other stakeholders to improve the uptake of the blockchain as an innovation by overcoming regulatory, organizational and cultural barriers.  

This process is meant to be collaborative, so we invite others to tell us which issues they think warrant further exploration. We look forward to collaborating with others to unpack these issues and develop thinking that leads to appropriate uptake of blockchain solutions to MEL problems.

Where are we going?

As it becomes increasingly possible that blockchain will be a disruptive technology, it is critical that we think about how it will affect the work of MEL practitioners.  To this end, stay tuned for a few more posts, including:

  • How can MEL inform Blockchain maturation?
  • Evaluating for Trust in Blockchain applications
  • How can we integrate blockchain into MEL Practices?

We would greatly benefit from feedback on this series to help craft the topics it covers. Please comment below or contact the authors; any feedback would be greatly appreciated.

Register here for the September 5th workshop on Blockchain and MERL!

Michael Cooper is a former Associate Director at Millennium Challenge Corporation and the U.S. State Dept in Policy and Evaluation.  He now heads Emergence, a firm that specializes in MEL and Blockchain services. He can be reached at emergence.cooper@gmail.com or through the Emergence website.

Shailee Adinolfi is an international development professional with over 15 years of experience working at the intersection of financial services, technology, and global development. Recently, she performed business development, marketing, account management, and solution design as Vice President at BanQu, a Blockchain-based identity platform. She held a variety of leadership roles on projects related to mobile banking, financial inclusion, and the development of emerging markets. More about Shailee 

What does Blockchain offer to MERL?

by Shailee Adinolfi

By now you’ve read at least one article on the potential of blockchain, as well as the challenges in its current form. USAID recently published a Primer on Blockchain: How to assess the relevance of distributed ledger technology to international development, which explains that distributed ledgers are “a type of shared computer database that enables participants to agree on the state of a set of facts or events (frequently described as an “authoritative shared truth”) in a peer-to-peer fashion without needing to rely on a single, centralized, or fully trusted party”.

Explained differently, the blockchain introduces cost savings and resource efficiencies by allowing data to be entered, stored and shared in an immutable fashion by substituting the need for a trusted third party with algorithms and cryptography.

The blockchain/Distributed Ledger Technology (DLT) industry is evolving quickly, as are its definitions and terminology. Blockchain may not solve world hunger, but many agree on its core promises: transparency, auditability, resiliency, and streamlining. The challenges, which companies are racing to be the first to address, include scale (speed of transactions), security, and governance.

It’s not time to sit back and wait to see what happens; it’s time to deepen our understanding. Many have already begun pilots across sectors. As this McKinsey article points out, early data from pilots shows strong potential in the agriculture and government sectors, amongst others. The article indicates that scale may be as little as 3-5 years away, and that’s not far out.

The Center for Global Development’s Michael Pisa argues that the potential benefits of blockchain do not outweigh the associated costs and complexities right now. He suggests that the development community focus its energies and resources on bringing down barriers to actual implementation, such as standards, interoperability, de-siloing data, and legal and regulatory rules around data storage, privacy and protection.

One area where blockchain may be useful is Monitoring, Evaluation, Research and Learning (MERL). But we need to dig in and understand better what the potentials and pitfalls are.

Join us on September 5th for a one-day workshop on Blockchain and MERL at Chemonics International where we will discuss what blockchain offers to MERL.

This is the first in a series of blogs aimed at discussing and soliciting feedback on how the blockchain can benefit MEL practitioners in their work. The series includes this post (What does Blockchain Offer to MERL), Blockchain as an M&E Tool, and future posts on the use of MEL to inform Blockchain maturation, evaluating for trust in Blockchain applications, and integrating blockchain into MEL practices.

 

Improve Data Literacy at All Levels within Your Humanitarian Programme

This post is by Janna Rous at Humanitarian Data. The original was published here on April 29, 2018

Imagine this picture of data literacy at all levels of a programme:

You’ve got a “donor visit” to your programme. The country director and a project officer accompany the donor on a field trip, and they all visit a household within one of the project communities. Sitting around a cup of tea, they start a discussion about data. In this discussion, the household members explain what data had been collected and why. The country director explains what had surprised him or her in the data. And the donor discusses how they made the decision to fund the programme based on the data. What if no one was surprised by the discussion, or by how the data was used, because they’d ALL seen and understood the data process?

Data literacy can mean lots of different things depending on who you are.  It could mean knowing how to:

  • collect, analyze and use data;
  • make sense of data and use it for management;
  • validate data and be critical of it;
  • tell good data from bad and know how credible it is;
  • ensure everyone is confident talking about data.

IS “IMPROVING DATA LITERACY FOR ALL LEVELS” A TOP PRIORITY FOR THE HUMANITARIAN SECTOR?

“YES” data literacy is a priority!  Poor data literacy is still a huge stumbling block for many people in the sector and needs to be improved at ALL levels – from community households to field workers to senior management to donors.  However, there are a few challenges in how this priority is worded.

IS “LITERACY” THE RIGHT WORD?

Suggesting someone is “illiterate” when it comes to data doesn’t sit well with most people. Many aid workers – from senior HQ staff right down to beneficiaries of a humanitarian programme – are well-educated and successful. Not only are they literate, but most speak two or more languages! So to insinuate “illiteracy” doesn’t feel right.

Illiteracy is insulting…

Many of these same people are not super-comfortable with “data”, but to ask them if they “struggle” with data, or to suggest they “don’t understand” by claiming they are “data illiterate”, is insulting (even if you think it’s true!).

Leadership is enticing…

The language you use is extremely important here. Instead of “literacy”, should you be talking about “leadership”? What if you framed it as improving data leadership? Could you harness the desirability of that skill – leadership – so that workshop and training titles played into people’s egos instead of attacking them?

WHAT CAN YOU DO TO IMPROVE DATA LITERACY (LEADERSHIP) WITHIN YOUR OWN ORGANIZATION?

You might be directly involved with helping to improve data literacy within your own organization.  Here are a few ideas on how to improve general data literacy/leadership:

  • Training and courses around data literacy.

While courses exist that focus on data analysis using programming languages such as R or Python, it might be better to focus skills development on more widely used software (such as Excel), which is more sustainable. Due to high staff turnover in the sector, complex data analysis cannot normally be sustained once an advanced analyst leaves the field.

  • Donor funding to promote data use and the use of technology.

While the sector should not only rely on donors for pushing the agenda of data literacy forward, money is powerful.  If NGOs and agencies are required to show data literacy in order to receive funding, this will drive a paradigm shift in becoming more data-driven as a sector.  There are still big questions on how to fund interoperable tech systems in the sector to maximize the value of that funding in collaboration between multiple agencies.  However, donors who can provide structures and settings for collaboration will be able to promote data literacy across the sector.

  • Capitalize on “trendy” knowledge – what do people want to know about because it makes them look intelligent?

In 2015/16, everyone wanted to know “how to collect digital data”.  A couple of years later, most people had shifted – they wanted to know “how to analyze data” and “make a dashboard”.  Now in 2018, GDPR, “Responsible Data”, and “Blockchain” are trending – people want to know about them so they can talk about them.  While “trends” aren’t all we should be focusing on, they can often be the hook that gets people at all levels of our sector interested in taking their first steps in data literacy.

DATA LITERACY MEANS SOMETHING DIFFERENT FOR EACH PERSON

Data literacy means something completely different depending on who you are, your perspective within a programme, and what you use data for.

To the beneficiary of a programme…

data literacy might just mean understanding why data is being collected and what it is being used for.  It means having the knowledge and power to give and withhold consent appropriately.

To a project manager…

data literacy might mean understanding indicator targets, progress, and the calculations behind those numbers, in addition to how different datasets relate to one another in a complex setting.  Managers need to understand how data is coming together so that they can ask intelligent questions about their programme dashboards.

To an M&E officer…

data literacy might mean an understanding of statistical methods, random selection methodologies, how significant a result may be, and how to interpret results of indicator calculations.  They may need to understand uncertainty within their data and be able to explain this easily to others.

To the Information Management team…

data literacy might mean understanding how to translate programme calculations into computer code.  They may need to create data collection or data analysis or data visualization tools with an easy-to-understand user-interface.  They may ultimately be relied upon to ensure the correctness of the final “number” or the final “product”.

To the data scientist…

data literacy might mean understanding some very complex statistical calculations, using computer languages and statistical packages to find trends, insights, and predictive capabilities within datasets.

To the management team…

data literacy might mean being able to use data results (graphs, charts, dashboards) to explain needs, results, and impact in order to convince and persuade: using data in proposals to justify why a programme should exist, using data to explain progress to the board of directors, or even using data as the basis for deciding that a new programme should start up – or close down.

To the donor…

data literacy might mean an understanding of a “good” needs assessment vs. a “poor one” in evaluating a project proposal, how to prioritize areas and amounts of funding, how to ask tough questions of an individual partner, how to be suspect of numbers that may be too good to be true, how to evaluate quality vs. quantity, or how to see areas of collaboration between multiple partners.  They need to use data to communicate international priorities to their own wider government, board, or citizens.

Use more precise wording

Data literacy means something different to everyone.  So this priority can be interpreted in many different ways depending on who you are.  Within your organization, frame this priority with a more precise wording.  Here are some examples:

  • Improve everyone’s ability to raise important questions based on data.
  • Let’s get better at discussing our data results.
  • Improve our leadership in communicating the meaning behind data.
  • Develop our skills in analyzing and using data to create an impact.
  • Improve our use of data to inform our decisions.

This blog article was based on a recent session at MERL Tech UK 2018.  Thanks to the many voices who contributed ideas.  I’ve put my own spin on them to create this article – so if you disagree, the ideas are mine.  And if you agree – kudos to the brilliant people at the conference!

****

Register now for MERL Tech Jozi, August 1-2 or MERL Tech DC, September 6-7, 2018 if you’d like to join the discussions in person!

 

Reinventing the flat tire… don’t build what is already built!

by Ricardo Santana, MERL Practitioner

One typical factor that delays many projects in international development is the design and creation from scratch of hardware and software to provide a certain feature or accomplish a task. And, while it is true that in some cases a specific design is required, in most cases the outputs can be achieved through solutions already available in the market.

Why is this important? Because we witness over and over again how budgets are wasted on mismanaged projects and programs, delaying solutions, generating skepticism among funders, beneficiaries and other stakeholders, and finally delivering poor results. It is sad to realize that some of these issues could have been avoided simply by using solutions and products that are already available, proven, and reasonably priced.

Then, what do we do? It is hard to find solutions aimed at international development just by browsing the Internet. During MERL Tech London 2018, the NGO Engineering for Change presented their Solutions Library. (Disclaimer: I have contributed to the library by analysing products, software and tools in different application spaces.) In this database it is possible to explore many available solutions that may help tackle a specific challenge or meet a need.

This doesn’t mean it is the only place to rely on for everything, or that projects absolutely need to adapt their processes to what is available. But as a professional responsible for evaluating and optimizing projects and programs in government and international development, I know it is always a good place to consult on technologies designed to help overcome social inequalities, increase access to services, or automate and simplify monitoring, evaluation, research and learning processes.

Through my collaboration with this platform I came to know many different solutions for performing and effectively managing MERL processes, including Magpi, Ushahidi, Epicollect5, RapidPro, mWater, SurveyCTO and VOTO Mobile. Some of these are proprietary and some are open source. Some are for managing disaster scenarios, others for running polls, for health, or for other services. What is impressive is the variety of solutions.

This was a sweet and sour discovery for me. Like many other professionals, I have wasted significant resources and time developing software that already existed in robust, previously tested forms that were in many cases more cost-effective and faster. However, knowledge is power: many solutions are now on my radar, and I have developed a clear sense of the need to explore before implementing.

And that is my humble advice to anyone responsible for deploying a Monitoring, Evaluation, Research and Learning process within their projects. Before you start working like crazy, as we all do out of strong commitment to our responsibilities, take some time to research which platforms and software are already available on the market that may suit your needs, and evaluate whether any of them are feasible or useful before rebuilding every single thing from scratch. That will certainly boost your effectiveness and optimize your delivery cost and time.

As Mariela said in her MERL Tech Lightning Talk: Don’t reinvent the flat tire! You can submit ideas for the Solutions Library or participate as a solutions reviewer too. You can also find more information on the library and how solutions are vetted here at the Library website.

Register now for MERL Tech Jozi, August 1-2 or MERL Tech DC, September 6-7, 2018 if you’d like to join the discussions in person!

Big data or big hype: a MERL Tech debate

by Shawna Hoffman, Specialist, Measurement, Evaluation and Organizational Performance at the Rockefeller Foundation.

Both the volume of data available at our fingertips and the speed with which it can be accessed and processed have increased exponentially over the past decade.  The potential applications of this to support monitoring and evaluation (M&E) of complex development programs have generated great excitement.  But is all the enthusiasm warranted?  Will big data integrate with evaluation — or is this all just hype?

A recent debate that I chaired at MERL Tech London explored these very questions. Alongside two skilled debaters (who also happen to be seasoned evaluators!) – Michael Bamberger and Rick Davies – we sought to unpack whether integration of big data and evaluation is beneficial – or even possible.

Before we began, we used Mentimeter to see where the audience stood on the topic:

Once the votes were in, we started.

Both Michael and Rick have fairly balanced and pragmatic viewpoints; however, for the sake of a good debate, and to help unearth the nuances and complexity surrounding the topic, they embraced the challenge of representing divergent and polarized perspectives – with Michael arguing in favor of integration, and Rick arguing against.

“Evaluation is in a state of crisis,” Michael argued, “but help is on the way.” Arguments in favor of the integration of big data and evaluation centered on a few key ideas:

  • There are strong use cases for integration. Data science tools and techniques can complement conventional evaluation methodology, providing cheap, quick, complexity-sensitive, longitudinal, and easily analyzable data.
  • Integration is possible. Incentives for cross-collaboration are strong, and barriers to working together are reducing. Traditionally these fields have been siloed, and their relationship has been characterized by a mutual lack of understanding of the other (or even questioning of the other’s motivations or professional rigor).  However, data scientists are increasingly recognizing the benefits of mixed methods, and evaluators are seeing the potential to use big data to increase the number of types of evaluation that can be conducted within real-world budget, time and data constraints. There are some compelling examples (explored in this UN Global Pulse Report) of where integration has been successful.
  • Integration is the right thing to do.  New approaches that leverage the strengths of data science and evaluation are potentially powerful instruments for giving voice to vulnerable groups and promoting participatory development and social justice.   Without big data, evaluation could miss opportunities to reach the most rural and remote people.  Without evaluation (which emphasizes transparency of arguments and evidence), big data algorithms can be opaque “black boxes.”

While this may paint a hopeful picture, Rick cautioned the audience to temper its enthusiasm. He warned of the risk of domination of evaluation by data science discourse, and surfaced some significant practical, technical, and ethical considerations that would make integration challenging.

First, big data are often non-representative, and the algorithms underpinning them are non-transparent. Second, “the mechanistic approaches offered by data science, are antithetical to the very notion of evaluation being about people’s values and necessarily involving their participation and consent,” he argued. It is – and will always be – critical to pay attention to the human element that evaluation brings to bear. Finally, big data are helpful for pattern recognition, but the ability to identify a pattern should not be confused with true explanation or understanding (correlation ≠ causation). Overall, there are many problems that integration would not solve for, and some that it could create or exacerbate.
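Rick’s last point is easy to demonstrate: two quantities can move together almost perfectly without one causing the other. A toy sketch in Python, with made-up numbers echoing the classic ice-cream-and-drownings example (where summer heat drives both):

```python
# Two monthly series that both trend upward over the same period
# (illustrative numbers, not real data)
ice_cream_sales = [12, 15, 19, 24, 30, 37, 45, 54, 64, 75]
drowning_incidents = [2, 3, 3, 4, 5, 6, 7, 8, 9, 11]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream_sales, drowning_incidents)
print(f"correlation: {r:.2f}")  # very close to 1.0
```

The correlation is near-perfect, yet banning ice cream would not prevent drownings; a hidden third variable drives both. Pattern detection at big-data scale multiplies the opportunities for exactly this mistake.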

The debate confirmed that this question is complex, nuanced, and multi-faceted. It reminded us that there is cause for enthusiasm and optimism, alongside a healthy dose of skepticism. What was made very clear is that the future should leverage the respective strengths of these two fields in order to maximize good and minimize potential risks.

In the end, the side in favor of integration of big data and evaluation won the debate by a considerable margin.

The future of integration looks promising, but it’ll be interesting to see how this conversation unfolds as the number of examples of integration continues to grow.

Interested in learning more and exploring this further? Stay tuned for a follow-up post from Michael and Rick. You can also attend MERL Tech DC in September 2018 if you’d like to join in the discussions in person!

Blockchain: the ultimate solution?

by Ricardo Santana, MERL Practitioner

I had the opportunity during MERL Tech London 2018 to attend a very interesting session discussing blockchains and how they can be applied in the MERL space. This session was led by Valentine Gandhi, Founder of The Development CAFÉ; Zara Rahman, Research and Team Lead at The Engine Room; and Wayan Vota, Co-founder of Kurante.

The first part of the session was an introduction to blockchain, which is basically a distributed ledger system. Why is it an interesting solution? Because the ledger is replicated across many geographically distributed devices, which makes for a very robust and secure system. No one can unilaterally scrap or alter data, because any change would conflict with the copies held across the distributed chain. Is it possible to corrupt the system? Well, yes, but what makes it robust and secure is that doing so would require a large share of the network’s participants to collude.

That is the powerful innovation of the technology. It is somewhat similar to torrent-based file sharing: it is very hard to control a file when it is stored not on a single server but on an enormous number of end-user devices.

What I want to share from this session, however, is not how the technology works! That information is readily available on the Internet and other sources.

What I really found interesting was the part of the session where professionals interested in blockchain shared our doubts and the questions that we would need to clarify in order to decide whether blockchain technology would be required or not.

Some of the most interesting shared doubts and concerns around this technology were:

What sources of training and other useful resources are available if you want to implement blockchain?

  • Say the organization or leadership team decides that a blockchain is required for the solution. I am pretty sure it is not hard to find information about blockchain on the Internet, but we all face the same problem — the enormous amount of information available makes it tricky to reach the holy grail that provides just enough information without losing hours to desktop research. It would be incredibly beneficial to have a suggested place where this information can be found, even more so if it were a specialized guide aimed at the MERL space.

What are the data space constraints?

  • I found this question very important. It is a key aspect of the design and scalability of the solution. I assume it will not be a significant amount of data, but I really don’t know. And maybe it is not a significant amount for a desktop or a laptop, but what if we are using cell phones as end terminals too? This needs to be addressed so the design is based on facts and not assumptions.

Use cases.

  • Again, there are probably a lot of them to be found all over the Internet, but they are hardly going to be insightful for a specific MERL approach. Is it possible to have a repository of relevant cases for the MERL space?

When is blockchain really required?

  • It would be really helpful to have a simple guide that helps any professional clarify whether the volume or importance of the information is worth the implementation of a Blockchain system or not.

Is there a right to be forgotten in Blockchain?

  • Recent events give special relevance to this question. Blockchains are very powerful for achieving traceability, but what if I want my information to be eliminated because that is simply my right? This is an important aspect of technologies with a distributed logic: how do we use the powerful advantages of blockchain while respecting every individual’s right to make unilateral decisions about their private or personal information?

I am not an expert in the matter but I do recognize the importance of these questions and the hope is that the people able to address them can pick them up and provide useful answers and guidance to clarify some or all of them.

If you have answers to these questions, or more questions about blockchain and MERL, please add them in the comments!

If you’d like to be a part of discussions like this one, register to attend the next MERL Tech conference! MERL Tech Jozi is happening August 1-2, 2018 and we just opened up registration today! MERL Tech DC is coming up September 6-7. Today’s the last day to submit your session ideas, so hurry up and fill out the form if you have an idea to present or share!

 

 

Takeaways from MERL Tech London

Written by Vera Solutions and originally published here on 16th April 2018.

In March, Zak Kaufman and Aditi Patel attended the second annual MERL Tech London conference to connect with leading thinkers and innovators in the technology for monitoring and evaluation space. In addition to running an Amp Impact demo session, we joined forces with Joanne Trotter of the Aga Khan Foundation as well as Eric Barela and Brian Komar from Salesforce.org to share lessons learned in using Salesforce as a MERL Tech solution. The panel included representatives from Pencils of Promise, the International Youth Foundation, and Economic Change, and was an inspiring showcase of different approaches to and successes with using Salesforce for M&E.

The event packed two days of introspection, demo sessions, debates, and sharing of how technology can drive more efficient program monitoring, stronger evaluation, and a more data-driven social sector. The first day concluded with a (hilarious!) Fail Fest–an open and honest session focused on sharing mistakes in order to learn from them.

At MERL Tech London in 2017, participants identified seven priority areas that the MERL Tech community should focus on:

  1. Responsible data policy and practice
  2. Improving data literacy
  3. Interoperability of data and systems
  4. User-driven, accessible technologies
  5. Participatory MERL/user-centered design
  6. Lean MERL/User-focused MERL
  7. Overcoming “extractive” data approaches

These priorities were revisited this year, and it seemed to us that almost all revolve around a recurrent theme of the two days: focusing on the end user of any MERL technology. The term “end user” was not itself without controversy–after all, most of our MERL tech tools involve more than one kind of user.

When trying to dive into the fourth, fifth, and sixth priorities, we often came back to the issue of who is the proverbial “user” for whom we should be optimizing our technologies. One participant mentioned that regardless of who it is, the key is to maintain a lens of “Do No Harm” when attempting to build user-centered tools.

The discussion around the first and seventh priorities naturally veered into a discussion of the General Data Protection Regulation (GDPR), and how we can do better as a sector by using it as a guideline for data protection beyond Europe.

A heated session with Oxfam, Simprints, and the Engine Room dove into the pros, cons, and considerations of biometrics in international development. The overall sense was that biometrics can offer tremendous value for issues like fraud prevention and healthcare, but they also amplify the sector’s challenges and risks around data protection. This is clearly a topic where much movement can be expected in the coming years.

In addition to meeting dozens of NGOs, we connected with numerous tech providers working in the space, including SimPrints, SurveyCTO, Dharma, Social Cops, and DevResults. We’re always energized to learn about others’ tools and to explore integration and collaboration opportunities.

We wrapped up the conference at a happy hour event co-hosted by ICT4D London and Salesforce.org, with three speakers focused on ‘ICT as a catalyst for gender equality’. A highlight from the evening was a passionate talk by Seyi Akiwowo, Founder of Glitch UK, a young organization working to reduce online violence against women and girls. Seyi shared her experience as a victim of online violence and how Glitch is turning the tables to fight back.

We’re looking forward to the first MERL Tech Johannesburg taking place August 1-2, 2018.

 

Evaluating ICT4D projects against the Digital Principles

By Laura Walker McDonald,  This post was originally published on the Digital Impact Alliance’s Blog on March 29, 2018.

As I have written about elsewhere, we need more evidence of what works and what doesn’t in the ICT4D and tech for social change spaces – and we need to hold ourselves to account more thoroughly and share what we know so that all of our work improves. We should be examining how well a particular channel, tool or platform works in a given scenario or domain; how it contributes to development goals in combination with other channels and tools; how the team selected and deployed it; whether it is a better choice than not using technology or using a different sort of technology; and whether or not it is sustainable.

At SIMLab, we developed our Framework for Monitoring and Evaluation of Technology in Social Change projects to help implementers to better measure the impact of their work. It offers resources towards a minimum standard of best practice which implementers can use or work toward, including on how to design and conduct evaluations. With the support of the Digital Impact Alliance (DIAL), the resource is now finalized and we have added new evaluation criteria based on the Principles for Digital Development.

Last week at MERL Tech London, DIAL was able to formally launch this product by sharing a 2-page summary available at the event and engaging attendees in a conversation about how it could be used. At the event, we joined over 100 organizations to discuss Monitoring, Evaluation, Research and Learning related to technology used for social good.

Why evaluate?

Evaluations provide snapshots of ongoing activity and the progress of a project at a specific point in time, based on systematic and objective review against certain criteria. They may inform future funding and program design, adjust current program design, or gather evidence to establish whether a particular approach is useful. They can be used to examine how, and how far, technology contributes to wider programmatic goals. If set up well, your program should already have evaluation criteria and research questions defined, well before it’s time to commission the evaluation.

Evaluation criteria provide a useful frame for an evaluation, bringing in an external logic that might go beyond the questions that implementers and their management have about the project (such as ‘did our partnerships on the ground work effectively?’ or ‘how did this specific event in the host country affect operations?’) to incorporate policy and best practice questions about, for example, protection of target populations, risk management, and sustainability. The criteria for an evaluation could be any set of questions that draw on an organization’s mission, values, principles for action; industry standards or other best practice guidance; or other thoughtful ideas of what ‘good’ looks like for that project or organization. Efforts like the Principles for Digital Development can set useful standards for good practice, and could be used as evaluation criteria.

Evaluating our work, and sharing learning, is radical – and critically important

While the potential for technology to improve the lives of vulnerable people around the world is clear, it is also evident that these improvements are not keeping pace with the advances in the sector. Understanding why requires looking critically at our work and holding ourselves to account. There is still insufficient evidence of the contribution technology makes to social change work. What evidence there is often is not shared or the analysis doesn’t get to the core issues. Even more important, the learnings from what has not worked and why have not been documented and absorbed.

Technology-enabled interventions succeed or fail based on their sustainability, business models, data practices, choice of communications channel and technology platform; organizational change, risk models, and user support – among many other factors. We need to build and examine evidence that considers these issues and that tells us what has been successful, what has failed, and why. Holding ourselves to account against standards like the Principles is a great way to improve our practice, and honor our commitment to the people we seek to help through our work.

Using the Digital Principles as evaluation criteria

The Principles for Digital Development are a set of living guidance intended to help practitioners succeed in applying technology to development programs. They were developed, based on some pre-existing frameworks, by a working group of practitioners and are now hosted by the Digital Impact Alliance.

These nine principles could also form a useful set of evaluation criteria, not unlike the OECD evaluation criteria or the Sphere standards. The Principles overlap, so data can be used to examine more than one criterion, and not every evaluation would need to consider all of the Digital Principles.

Below are some examples of Digital Principles and sample questions that could initiate, or contribute to, an evaluation.

Design with the User: Great projects are designed with input from the stakeholders and users who are central to the intended change. How far did the team design the project with its users, based on their current tools, workflows, needs and habits, and work from clear theories of change and adaptive processes?

Understand the Existing Ecosystem: Great projects and programs are built, managed, and owned with consideration given to the local ecosystem. How far did the project work to understand the local, technology and broader global ecosystem in which the project is situated? Did it build on existing projects and platforms rather than duplicating effort? Did the project work sensitively within its ecosystem, being conscious of its potential influence and sharing information and learning?

Build for Sustainability: Great projects factor in the physical, human, and financial resources that will be necessary for long-term sustainability. How far did the project: 1) think through the business model, ensuring that the value for money and incentives are in place not only during the funded period but afterwards, and 2) ensure that long-term financial investments in critical elements like system maintenance and support, capacity building, and monitoring and evaluation are in place? Did the team consider whether there was an appropriate local partner to work through, hand over to, or support the development of, such as a local business or government department?

Be Data Driven: Great projects fully leverage data, where appropriate, to support project planning and decision-making. How far did the project use real-time data to make decisions, use open data standards wherever possible, and collect and use data responsibly according to international norms and standards?

Use Open Standards, Open Data, Open Source, and Open Innovation: Great projects make appropriate choices, based on the circumstances and the sensitivity of their project and its data, about how far to use open standards, open the project’s data, use open source tools and share new innovations openly. How far did the project: 1) take an informed and thoughtful approach to openness, thinking it through in the context of the theory of change and considering risk and reward, 2) communicate about what being open means for the project, and 3) use and manage data responsibly according to international norms and standards?

For a more complete set of guidance, see the complete Framework for Monitoring and Evaluating Technology, and the more nuanced and in-depth guidance on the Principles, available on the Digital Principles website.

Technologies in monitoring and evaluation | 5 takeaways

Bloggers: Martijn Marijnis and Leonard Zijlstra. This post originally appeared on the ICCO blog on April 3, 2018.

On March 19 and 20 ICCO participated in MERL Tech 2018 in London. The conference explores the possibilities of technology for monitoring, evaluation, research and learning in development. About 200 like-minded participants from various countries attended. Key issues on the agenda were data privacy, data literacy within and beyond your organization, human-centred monitoring design, and user-driven technologies. Interesting practices were shared, among others in using blockchain technologies and machine learning. Here are our most important takeaways:

1)  In many NGOs data gathering still takes place in silos

Oxfam UK shared valuable insights and practical tips for putting in place an infrastructure that combines data: start small and test, e.g. by building up a strong country use case; discuss with and learn from others; ensure privacy by design; and make sure senior leadership is involved. ICCO Cooperation currently faces a similar challenge, in particular in combining our household data with our global result indicators.

2)  Machine learning has potential for NGOs

While ICCO recently started to test machine learning in the food security field (see this blog), other organisations showcased interesting examples. The Wellcome Trust shared a case where they tried to answer the question: is the organization informing and influencing policy, and if so, how? Wellcome teamed up its data lab and insight & analysis teams and used open APIs to pull data, combined with natural language processing, to identify relevant cases of research supported by the organization. With 8,000 publications a year, this would be a daunting task for a human team. First, publications linked to Wellcome funding were extracted from a European database (EPMC), in combination with end-of-grant reports. Then the reference sections of WHO policy documents were scraped to see if and to what extent WHO policy was influenced, and to identify potentially interesting cases for Wellcome’s policy team.
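To make this concrete, the core of such a pipeline is fuzzy matching of funded publication titles against scraped reference strings. The sketch below (with hypothetical titles and references, not Wellcome’s actual pipeline or data) shows one simple way to do it with Python’s standard library:

```python
import difflib

# Titles of publications linked to the organisation's funding
# (hypothetical examples, standing in for records pulled from EPMC)
funded_titles = [
    "Malaria vector control in sub-Saharan Africa",
    "Antimicrobial resistance surveillance in low-income settings",
]

# Reference strings scraped from a policy document's bibliography
policy_references = [
    "Smith J et al. Malaria vector control in sub-saharan africa. Lancet, 2016.",
    "WHO. Guidelines for tuberculosis care. 2015.",
]

def normalise(text):
    """Lowercase and strip punctuation so near-identical titles compare equal."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def cites(funded_title, reference, threshold=0.9):
    """Fuzzy check: does the reference string contain the funded title?"""
    title = normalise(funded_title)
    ref = normalise(reference)
    # Slide the title across the reference and keep the best similarity ratio
    best = max(
        difflib.SequenceMatcher(None, title, ref[i:i + len(title)]).ratio()
        for i in range(max(1, len(ref) - len(title) + 1))
    )
    return best >= threshold

matches = [t for t in funded_titles
           if any(cites(t, r) for r in policy_references)]
print(matches)  # only the malaria title is cited in this bibliography
```

A production system would add entity resolution, DOI lookups, and trained language models, but the principle is the same: normalise, compare, and flag likely citations for human review.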

3)  Use a standardized framework for digital development

See digitalprinciples.org. It gives – among other things – practical guidelines on how to use open standards and open data, how data can be reused, how privacy and security can be addressed, and how users can and should be involved when using technologies in development projects. It is a useful framework for evaluating your design.

4)  Many INGOs get nervous these days about blockchain technology

What is it, a new hype or a real game changer? For many it is just untested technology with high risks and little upside for the developing world. But for INGOs working in, for example, agricultural value chains or humanitarian relief operations, its potential is definitely consequential enough to merit a closer look. It starts with the underlying principle that users of a so-called blockchain can transfer value, or assets, between each other without the need for a trusted intermediary. Each batch of transactions is recorded in a block, and the running history of those blocks is called the blockchain. All transactions are recorded in a ledger that is shared by all users of a blockchain.
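The core mechanism, in which each block commits to the previous block’s hash, can be sketched in a few lines of Python. This is an illustrative toy only; it omits the consensus, digital signatures, and peer-to-peer replication that real blockchains rely on:

```python
import hashlib
import json

def block_hash(contents):
    """Hash a block's contents, which include the previous block's hash."""
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the hash of the block before it."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    contents = {"prev": prev, "transactions": transactions}
    chain.append({**contents, "hash": block_hash(contents)})

def is_valid(chain):
    """Recompute every hash; any tampering breaks the chain from that point on."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev:
            return False
        recomputed = block_hash({"prev": block["prev"],
                                 "transactions": block["transactions"]})
        if block["hash"] != recomputed:
            return False
    return True

chain = []
add_block(chain, [{"from": "donor", "to": "farmer", "amount": 50}])
add_block(chain, [{"from": "farmer", "to": "supplier", "amount": 20}])
print(is_valid(chain))   # True

chain[0]["transactions"][0]["amount"] = 5000   # tamper with history
print(is_valid(chain))   # False
```

Because every copy of the ledger can run this same validity check, rewriting history on one machine is immediately detectable by everyone else, which is what makes the shared ledger trustworthy without an intermediary.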

A major upside of blockchain applications is the considerable time and money they can save. Users rely on the shared ledger for a transparent view into the details of the assets or values, including who owns them, as well as descriptive information such as quality or location. Smallholder farmers could benefit (e.g. real-time payment on delivery, access to credit), as could international sourcing companies (e.g. traceability of produce without certification), banks (e.g. cost and risk reduction), and refugees and the landless (e.g. registration, identification). Although we haven't yet seen large-scale adoption of blockchain technology in the development sector, investors like the Bill and Melinda Gates Foundation and various venture capitalists are paying attention to this space.

But one of the main downsides, or challenges, for blockchain, as with agricultural technology at large, is connecting the technology to viable business models and compelling use cases. Even with proven technology this is hard enough, and it requires innovation, perseverance and a focus on real value for the end user; ICCO's G4AW projects are gaining experience with blockchain.

5)  Start thinking about data-use incentives

Over the years, ICCO has made significant investments in monitoring & evaluation and in data skills training. Yet, as in many other organizations, measurable results in terms of increased data use remain limited. US-based development consultant Cooper&Smith shared revealing insights into data-use incentives. The study covered three INGOs working across five regions globally. The hypothesis was that better alignment of incentives in data-use training leads to increased data use later on. It looked at both financial and non-financial rewards that motivate individuals to behave in a particular way. Incentives included different training formats (e.g. individual, blended), different hardware (e.g. desktop, laptop, mobile phone), recognition (e.g. a certificate, a presentation at a conference), forms of feedback and support (e.g. one-on-one, peer group) and leisure time during the training (e.g. 2 hours/week, 12 hours/week). Data use was defined as the practice of collecting, managing, analyzing and interpreting data to make program policy and management decisions.

They found considerable differences in how these attributes were valued. For instance, respondents overwhelmingly preferred a certificate in data management, yet currently receive mostly no recognition, or only recognition from their supervisor. One region preferred a certificate while another preferred attending an international conference as a reward. Respondents preferred one-on-one feedback but received only peer-to-peer support. The lesson is that while most organizations apply a 'one-size-fits-all' reward system (or have no reward system at all), this study points to the need for a culturally sensitive and geographically smart reward system if we want to see a real increase in data use.

For many NGOs the data revolution has only just begun, but we are underway!