All posts by Guest Post

How to Create a MERL Culture within Your Organization

Written by Jana Melpolder, MERL Tech DC Volunteer and former ICT Works Editor. Find Jana on Twitter:  @JanaMelpolder

As organizations grow, they become increasingly aware of how important MERL (Monitoring, Evaluation, Research, and Learning) is to their international development programs. To meet this challenge, new hires need to be brought on board, but more importantly, changes need to happen in the organization’s culture.

How can nonprofits and organizations change to include more MERL? Friday afternoon’s MERL Tech DC  session “Creating a MERL Culture at Your Nonprofit” set out to answer that question. Representatives from Salesforce.org and Samaschool.org were part of the discussion.

Salesforce.org staff members Eric Barela and Morgan Buras-Finlay emphasized that their organization has set aside resources (financial and otherwise) for internal and external M&E. “A MERL culture is the foundation for the effective use of technology!” shared Eric Barela.

Data is a vital part of MERL, but those providing it to organizations often need to “hold the hands” of those on the receiving end. What is especially vital is helping people understand this data and gain deeper insight from it. It’s not just about the numbers – it’s about what is meant by those numbers and how people can learn and improve using the data.

According to Salesforce.org, an organization’s MERL culture comprises its understanding of the value of defining, measuring, understanding, and learning for social impact with rigor. And building or maintaining a MERL culture doesn’t just mean letting the data team do whatever they like or putting them in charge. Instead, it’s vital to focus on outcomes. Salesforce.org discussed how its MERL staff prioritize keeping a foot in the door in many places and meeting often with people from different departments.

Where does technology fit into all of this? According to Salesforce.org, the push is on to keep the technology ethical. Morgan Buras-Finlay described it well, saying “technology goes from building a useful tool to a tool that will actually be used.”

Another participant on Friday’s panel was Samaschool’s Director of Impact, Kosar Jahani. Samaschool describes itself as a San Francisco-based nonprofit focused on preparing low-income populations to succeed as independent workers. The organization has “brought together a passionate group of social entrepreneurs and educators who are reimagining workforce development for the 21st century.”

Samaschool creates a MERL culture through Learning Calls for their different audiences and funders. These Learning Calls are done regularly, they have a clear agenda, and sometimes they even happen openly on Facebook LIVE.

By ensuring a high level of transparency, Samaschool is also aiming to create a culture of accountability where it can learn from failures as well as successes. By using social media, doors are opened and people have an easier time gaining access to information that would otherwise have been difficult to obtain.

Kosar explained a few negative aspects of this kind of transparency, saying that there is a risk to putting information in such a public place to view. It can lead to lost future investment. However, the organization feels this has helped build relationships and enhanced interactions.

Sadly, flight delays prevented a third organization, Big Elephant Studios and its founder Andrew Means, from attending MERL Tech. Luckily, his slides were presented by Eric Barela. Andrew’s slides highlighted the following three things that are needed to create a MERL Culture:

  • Tools – Investments in tools that help an organization acquire, access, and analyze the data it needs to make informed decisions
  • Processes – Investments in time to focus on utilizing data and supporting decision making
  • Culture – Organizational values that ensure that data is invested in, utilized, and listened to

One of Andrew’s main points was that generally, people really do want to gain insight and learn from data. The other members of the panel reiterated this as well.

A few lingering questions from the audience included:

  • How do you measure how culture is changing within an organization?
  • How does one determine if an organization’s culture is more focused on MERL than previously?
  • Which social media platforms and strategies can be used to create a MERL culture that provides transparency to clients, funders, and other stakeholders?

What about you? How do you create and measure the “MERL Culture” in your organization?

Reflecting on MERL Tech 2018 and the Blockchain

by Michael Cooper (emergence.consultant.com), Founder at Emergence; Shailee Adinolfi (shailee.adinolfi@consensys.net), Director of Blockchain Solutions at ConsenSys; and Valentine J Gandhi (v.gandhi@dev-cafe.org), Founder at the Development Café.

Mike and Val at the Blockchain Pre-Workshop at MERL Tech DC.

MERL Tech DC kicked off with a pre-conference workshop on September 5th that focused on what the Blockchain is and how it could influence MEL.

The workshop was broken into four parts: 1) blockchain 101, 2) how the blockchain is influencing and could influence MEL, 3) case studies to demonstrate early lessons learned, and 4) outstanding issues and emerging themes.

This blog focuses and builds on the fourth area. At the end, we provide additional resources that will be helpful to all interested in exploring how the blockchain could disrupt and impact international development at large.  

Workshop Takeaways and Afterthoughts

For our purposes here, we have distilled some of the key takeaways from the workshop. This section includes a series of questions that we will respond to and link to various related reference materials.  

Who are the main blockchain providers and what are they offering?

Any time a new “innovation” is introduced into the international development space, potential users lack knowledge about what the innovation is, the value it can add, and the costs of implementing it. This lack of knowledge opens the door for “snake oil salesmen” who engage in predatory attempts to sell their services to users who don’t have the knowledge to make informed decisions.

We’ve seen this phenomenon play out with blockchain. Take, for example, the numerous Initial Coin Offerings (ICOs) that defrauded their investors, or the many instances of service providers offering low-quality blockchain education trainings and/or project solutions.

Education is the best defense against being taken advantage of by snake oil salesmen. If you’re looking for general education about blockchain, we’ve included a collection of helpful tools in the table below. If your group is working to determine whether a blockchain solution is right for the problem at hand, the USAID Blockchain Primer offers easy-to-use decision trees that can help you. Beyond these, Mercy Corps has just published Block by Block, which compares the attributes of various distributed ledgers along several dimensions that are helpful when deciding which distributed ledger technology to use.

Words of warning aside, there are agencies that provide genuine blockchain solutions. For a full list of providers please visit www.blockchainomics.tech, an information database run by The Development CAFE on all things blockchain.

Bottom Line: Beware the snake oil salesmen preaching the benefits of blockchain but silent on the feasibility of their solution. Unless the service provider is just as focused on your problem as you are, be wary that they are trying to pitch a solution (viable or not) rather than solve the problem. Before approaching companies or service providers, always identify your problem and see if blockchain is indeed a viable solution.

How does governance of the blockchain influence its sustainability?

In the past, we’ve seen technology-led social impact solutions make initial gains that diminish over time until there is no sustained impact. Current evidence shows that many solutions of this sort fail because they are not designed to solve a specific problem in a relevant ecosystem. This insight has given rise to the Digital Development Principles and the Ethical Considerations that should be taken into account for blockchain solutions.

Bottom Line: Impact is achieved and sustained by the people who use a tool. Thus, blockchain, as a tool, does not sustain impacts on its own. People do so by applying knowledge about the principles and ethics needed for impact. Understanding this, our next step is to generate more customized principles and ethical considerations for blockchain solutions through case studies and other desperately needed research.  

How do the blockchain, big data, and Artificial Intelligence influence each other?

The blockchain is a new type of distributed ledger system that could have massive social implications. Big Data refers to the exponential increase in data we experience through the Internet of Things (IoT) and other data sources (Smart Infrastructure, etc.). Artificial Intelligence (AI) assists in identifying and analyzing this new data at exponentially faster rates than is currently the case.

Blockchain is a distributed ledger: in essence, a database of transactions. Like any other database, it is a repository, and it is contributing to the growth of Big Data. AI can be used to automate the process of entering data into the blockchain. This is how the three are connected.

The blockchain is considered a leading contender as the ledger of choice for big data because: 1) its distributed nature lets it handle much larger amounts of data in a more secure fashion than is currently possible with cloud computing, and 2) the way big data is uploaded to the blockchain can be automated. AI tools are easily integrated into blockchain functions to run searches and analyze data, and this opens up the capacity to collect, analyze, and report findings on big data in a transparent and secure manner more efficiently than ever before.

Bit by Bit is a very readable and innovative overview of how to conduct social science research in the digital age of big data, artificial intelligence, and the blockchain. It gives the reader a quality introduction to some of the dominant themes and issues to consider when attempting to evaluate a technology-led solution or to use technology to conduct social research.

Given its immutability, how can an adaptive management system work with the blockchain?

This is a critical point. The blockchain is an immutable record: it is almost impossible to hijack, hack, or alter (meaning it has never been done, and there are no simulations in which current technology is able to take control of a properly designed blockchain). Thus the blockchain provides the security needed to mitigate corruption and facilitate audits.

This immutability does not preclude an adaptive management approach, however. Adaptive management requires small, iterative course corrections informed by quality data about what is and is not working. This data record and the course corrections provide a rich data set that is extremely valuable to replication efforts, because it removes the main barrier to replication — lack of data on what does and does not work. Hence, in this case, the immutability of the blockchain is a value add to adaptive management. This is more a question of good adaptive management practice than of whether the blockchain is a viable tool for these purposes.

It is important to note that you can append information on blocks (not amend), so there will always be a record of previous mistakes (auditability), but the most recent layer of truth is what’s being viewed/queried/verified, etc. Hence, immutability is not a hurdle but a help.
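To make the append-not-amend idea concrete, below is a minimal Python sketch of an append-only record store (illustrative only; the class and field names are ours, not any particular blockchain’s API). Corrections are new entries that reference the entry they supersede, a query returns the most recent layer of truth, and the full trail of mistakes stays auditable:

```python
import hashlib
import json
import time

class AppendOnlyLedger:
    """Illustrative append-only store: corrections are new entries that
    reference what they supersede; nothing is ever deleted or overwritten."""

    def __init__(self):
        self.entries = []  # the ever-growing, never-edited record

    def append(self, key, value, supersedes=None):
        entry = {
            "key": key,
            "value": value,
            "supersedes": supersedes,  # hash of the corrected entry, if any
            "timestamp": time.time(),
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry["hash"]

    def latest(self, key):
        """The most recent 'layer of truth' for a key."""
        superseded = {e["supersedes"] for e in self.entries if e["supersedes"]}
        for entry in reversed(self.entries):
            if entry["key"] == key and entry["hash"] not in superseded:
                return entry
        return None

    def history(self, key):
        """The full audit trail, mistakes included."""
        return [e for e in self.entries if e["key"] == key]

ledger = AppendOnlyLedger()
first = ledger.append("participant_42_status", "enrolled")
ledger.append("participant_42_status", "graduated", supersedes=first)
print(ledger.latest("participant_42_status")["value"])  # graduated
print(len(ledger.history("participant_42_status")))     # 2 -- the mistake trail remains
```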

What are the first steps an organization should take when deciding on whether to adopt a blockchain solution?

Each problem that an organization faces is unique, but the following simple steps can help one make a decision (a rough pre-screening sketch follows the list):

  • Identify your problem (using tools such as Developmental Evaluation or Principles of Digital Development)
  • Understand the blockchain technology, concepts, functionality, requirements and cost
  • See if your problem can be solved by blockchain rather than a centralized database
  • Consider the advantages and disadvantages
  • Identify the right provider and work with them in developing the blockchain
  • Consider ethical principles and privacy concerns as well as other social inequalities
  • Deploy in pilot phases and evaluate the results using an agile approach
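As a starting point for the first three steps, here is a rough pre-screening sketch in Python. The three criteria loosely follow the spirit of the decision trees in the USAID Blockchain Primer, but this particular distillation is ours, not the Primer’s:

```python
def blockchain_fit_check(problem: dict) -> str:
    """Illustrative pre-screen for 'is a blockchain even relevant here?'
    The criteria paraphrase common decision-tree questions; they are
    not an official checklist."""
    criteria = {
        "needs_shared_database": "Do multiple parties need to write to a shared record?",
        "writers_lack_trust": "Do those writers lack a trusted central authority?",
        "needs_immutability": "Must records be tamper-evident and auditable over time?",
    }
    unmet = [q for key, q in criteria.items() if not problem.get(key, False)]
    if unmet:
        return ("A centralized database is probably simpler. Unmet criteria:\n- "
                + "\n- ".join(unmet))
    return "A blockchain may be worth piloting; weigh cost, ethics, and privacy next."

# Example: a transfer registry shared by several NGOs with no common authority
print(blockchain_fit_check({
    "needs_shared_database": True,
    "writers_lack_trust": True,
    "needs_immutability": True,
}))
```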

What can be done to protect PII and other sensitive information on a blockchain?

Blockchain uses cryptography to store its data, so PII and other information cannot be viewed by anyone who does not hold the relevant keys. While developing a blockchain, it’s important to ensure that what goes in is protected and that access to it is regulated. Another critical step is promoting literacy on the use of blockchain and its features among stakeholders.
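One common pattern, sketched below, is to keep PII off-chain entirely and anchor only a salted hash of each record on-chain, so the chain can verify a record without revealing it. This is a minimal illustration of the pattern, not a production design; the function names and storage choices are ours:

```python
import hashlib
import os

def register_record(pii: str, off_chain_store: dict, chain: list) -> str:
    """Keep the PII in an access-controlled store; anchor only a salted
    hash on-chain. Chain readers see no PII, but records stay verifiable."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + pii.encode()).hexdigest()
    off_chain_store[digest] = {"pii": pii, "salt": salt}  # regulated access
    chain.append(digest)                                  # public, immutable anchor
    return digest

def verify(pii: str, digest: str, off_chain_store: dict) -> bool:
    """Recompute the salted hash and compare it to the on-chain anchor."""
    record = off_chain_store.get(digest)
    return (record is not None and
            hashlib.sha256(record["salt"] + pii.encode()).hexdigest() == digest)

store, chain = {}, []
anchor = register_record("Jane Doe, 1984-03-02, Nairobi", store, chain)
print(verify("Jane Doe, 1984-03-02, Nairobi", anchor, store))  # True
print(chain[0][:16], "... (no PII visible on-chain)")
```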

References Correlated to Takeaways

This table organizes current reference materials as related to the main questions we discussed in the workshop. Each question below is followed by the relevant reference material, with a brief explanation and hyperlink.

Question: Who are the main blockchain platforms? Who are the providers and what are they offering?

Platforms

Stellar: https://www.stellar.org/

Ethereum: https://www.ethereum.org/

Hyperledger: https://www.hyperledger.org/

R3: Corda: https://www.r3.com/corda-platform/

EOS: https://explorer.eos-classic.io/home

Providers

IBM, ConsenSys, Microsoft, AWS, Cognizant, R3, and others, are offering products and enterprise solutions.

Block by Block is a valuable comparison tool for assessing various platforms. 

Question: How does governance of the blockchain influence its sustainability?

See the Beeck Center’s Blockchain Ethical Design Framework. Decentralization (how many nodes), equity amongst nodes, rules, and transparency are all factors in long-term sustainability. Likewise, the Principles for Digital Development have a lot of evidence behind them for their contributions to sustainability.

Question: How do the blockchain, big data, and Artificial Intelligence influence each other?

They can be combined in various ways to strengthen a particular service or product. There is no blanket approach, just as there is no blanket solution to any social impact problem. The key is to know the root cause of the problem at hand and how each tool, used separately and in conjunction, can address those root causes.

Question: Given its immutability, how can an adaptive management system work with the blockchain?

Ask how mistakes are corrected when creating a customized solution or purchasing a product. Usually there will be a way to do that through an easy-to-use interface.

Question: What are the first steps an organization should take when deciding on whether to adopt a blockchain solution?

Participate in demos, and test some of the solutions for your own purposes or use cases. Use the USAID Blockchain Primer and reach out to trusted experts for advice. Given that blockchain is primarily open source code, once you have decided that a blockchain is a viable solution for your problem, GitHub is full of open source code that you can modify for your own purposes.

 

How I Learned to Stop Worrying and Love Big Data

by Zach Tilton, a Peacebuilding Evaluation Consultant and a Doctoral Research Associate at the Interdisciplinary PhD in Evaluation program at Western Michigan University. 
 
In 2013 Dan Ariely quipped “Big data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it….” In 2015 the metaphor was imported into the international development sector by Ben Ramalingam; in 2016 it became a MERL Tech DC lightning talk, and it has been ringing in our ears ever since. So, what about 2018? Well, unlike US national trends in teenage sex, there are some signals that big, or at least ‘bigger’, data is continuing to make its way not only into the realm of digital development, but also into evaluation. I recently attended the 2018 MERL Tech DC pre-conference workshop Big Data and Evaluation, where participants were introduced to real ways practitioners are putting this trope to bed (sorry, not sorry). In this blog post I share some key conversations from the workshop framed against the ethics of using this new technology, but to do that let me first provide some background.
 
I entered the workshop on my heels. Given the recent spate of security breaches and revelations about micro-targeting, ‘Big Data’ has been something of a bogeyman for me and others. I have taken some pains to limit my digital data-footprint, have written passionately about big data and surveillance capitalism, and have long been skeptical of big data applications for serving marginalized populations in digital development and peacebuilding. As I found my seat before the workshop started, I thought, “Is it appropriate or ethical to use big data for development evaluation?” My mind caught hold of a 2008 Evaluation Café debate between evaluation giants Michael Scriven and Tom Cook on causal inference in evaluation and the ethics of Randomized Control Trials. After hearing Scriven’s concerns about the ethics of withholding interventions from control groups, Cook asks, “But what about the ethics of not doing randomized experiments?” He continues, “What about the ethics of having causal information that is in fact based on weaker evidence and is wrong? When this happens, you carry on for years and years with practices that don’t work whose warrant lies in studies that are logically weaker than experiments provide.”
 
While I sided with Scriven for most of that debate, this question haunted me. It reminded me of an explanation of structural violence by peace researcher Johan Galtung, who writes, “If a person died from tuberculosis in the eighteenth century it would be hard to conceive of this as violence since it might have been quite unavoidable, but if he dies from it today, despite all the medical resources in the world, then violence is present according to our definition.” Galtung’s intellectual work on violence deals with the difference between the potential and the actual, and with what increases that difference. While there are real issues with data responsibility, algorithmic biases, and automated discrimination that need to be addressed, if there are actually existing technologies and resources not being used to address social and material inequities in the world today, is this unethical, even violent? “What about the ethics of not using big data?” I asked myself back. The following are highlights of the actually existing resources for using big data in the evaluation of social amelioration.
 

Actually Existing Data

 
During the workshop, Kerry Bruce from Social Impact shared with participants her personal mantra, “We need to do a better job of secondary data analysis before we collect any more primary data.” She challenged us to consider how to make use of the secondary data available to our organizations. She gave examples of potential big data sources such as satellite images, remote sensors, GPS location data, social media, internet searches, call-in radio programs, biometrics, administrative data, and integrated data platforms that merge many secondary data files such as public records and social service agency and client files. The key here is that there is a ton of actually existing data, much of it collected passively, digitally, and longitudinally. She noted real limitations to accessing existing secondary data, including donor reluctance to fund such work, limited training in appropriate methodologies within research teams, and differences in data availability between contexts. Still, to underscore the potential of using secondary data, she shared a case study in which she led a team that used large amounts of secondary indirect data to identify ecosystems of modern-day slavery at a significantly lower cost than collecting the data first-hand. The outputs of this work will help pinpoint interventions and guide further research into the factors that may lead to predicting and prescribing what works well for stopping people from becoming victims of slavery.
 

Actually Existing Tech (and math)

 
Peter York from BCT Partners provided a primer on big data and data science, including the reality check that most of the work is the unsexy “ETL”: the extraction, transformation, and loading of data. He contextualized the potential of the so-called big data revolution by reminding participants that the V’s of big data (Velocity, Volume, and Variety) are made possible by the technological and social infrastructure of increasingly networked populations, whose digital connections enable the monitoring, capturing, and tracking of ever-increasing aspects of our lives in an unprecedented way. He shared, “A lot of what we’ve done in research were hacks because we couldn’t reach entire populations.” With advances in the tech stacks and infrastructure that connect people and their internet-connected devices with each other and the cloud, the utility of inferential statistics and experimental design lessens when entire populations of users are producing observational behavior data. When this occurs, evaluators can apply machine learning to discover the naturally occurring experiments in big data sets, what Peter terms ‘Data-driven Quasi-Experimental Design.’ This is exactly what Peter does when he builds causal models to predict and prescribe better programs for child welfare and juvenile justice to automate outcome evaluation, taking cues from precision medicine.
 
One example of a naturally occurring experiment was the 1854 Broad Street cholera outbreak, in which physician John Snow used a dot map to identify a pattern that revealed the source of the outbreak, the Broad Street water pump. By finding patterns in the data, John Snow was able to lay the groundwork for rejecting the false Miasma Theory and replacing it with a prototypical Germ Theory. And although he was already skeptical of miasma theory, by using the data to inform his theory-building he was also practicing a form of prototypical Grounded Theory. Grounded theory is simply building theory inductively, after data collection and analysis, not before, resulting in theory that is grounded in data. Peter explained, “Machine learning is Grounded Theory on steroids. Once we’ve built the theory, found the pattern by machine learning, we can go back and let the machine learning test the theory.” In effect, machine learning is like having a million John Snows to pore over data to find the naturally occurring experiments or patterns in the maps of reality that are big data.
 
A key aspect of the value of applying machine learning to big data is that patterns more readily present themselves in datasets that are ‘wide’ as opposed to ‘tall.’ Peter continued, “If you are used to datasets you are thinking in rows. However, traditional statistical models break down with more features, or more columns.” So, Peter and evaluators like him who are applying data science to their evaluative practice are evolving from traditional Frequentist to Bayesian statistical approaches. While there is more to the distinction, the latter uses prior knowledge, or degrees of belief, to determine the probability of success, where the former does not. This distinction is significant for evaluators who want to move beyond predictive correlation to prescriptive evaluation. Peter expounded, “Prescriptive analytics is figuring out what will best work for each case or situation.” For example, with prediction, we can make statements that a foster child with certain attributes has a 70 percent chance of not finding a home. Using the same data points with prescriptive analytics, we can find 30 children who are similar to that foster child and find out what they did to find a permanent home. In a way, using only predictive analytics can cause us to surrender, while including prescriptive analytics can cause us to endeavor.
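To make the prescriptive step concrete, here is a toy Python sketch of the ‘find similar cases and see what worked’ idea, assuming NumPy and scikit-learn are available. Every detail (the synthetic data, the feature names, the nearest-neighbor notion of similarity) is an assumption for illustration, not BCT Partners’ actual model:

```python
# Toy sketch of prescriptive analytics: for one focal case, find the 30 most
# similar historical cases and tabulate which actions co-occurred with success.
# All data is synthetic and all names are invented for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 500
features = rng.normal(size=(n, 4))        # e.g., age, placements, school moves, ...
actions = rng.choice(["kinship_search", "mentoring", "standard"], size=n)
found_home = rng.random(n) < 0.5          # historical outcomes (synthetic)

focal_case = rng.normal(size=(1, 4))      # the child we want a recommendation for

# Mirror the '30 similar children' example with a nearest-neighbor search
nn = NearestNeighbors(n_neighbors=30).fit(features)
_, idx = nn.kneighbors(focal_case)
neighbors = idx[0]

# Among similar cases, which action went with finding a permanent home?
for action in np.unique(actions[neighbors]):
    mask = actions[neighbors] == action
    rate = found_home[neighbors][mask].mean()
    print(f"{action:15s} success rate among similar cases: {rate:.0%}")
```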
 

Existing Capacity

The last category of existing resources for applying big data to evaluation was mostly captured by the comments of independent evaluation consultant Michael Bamberger. He spoke of the latent capacity that exists in evaluation professionals and teams, but noted that we are not taking full advantage of big data: “Big data is being used by development agencies, but less by evaluators in these agencies. Evaluators don’t use big data, so there is a big gap.”

He outlined two scenarios for the future of evaluation in this new wave of data analytics: a state of divergence, where evaluators are replaced by big data analysts, and a state of convergence, where evaluators develop a literacy with the principles of big data for their evaluative practice. One problematic consideration with this hypothetical is that many data scientists are not interested in causation, as Peter York noted. To move toward the future of convergence, he shared how big data can enhance the evaluation cycle from appraisal and planning through monitoring, reporting, and evaluating sustainability. Michael went on to share a series of caveats, including issues with extractive versus inclusive uses of big data, the fallacy of large numbers, data quality control, and different perspectives on theory, each of which could warrant its own blog post for development evaluation.

While I deepened my basic understanding of data analytics, including the tools and techniques, benefits and challenges, and guidelines for big data and evaluation, my biggest takeaway is reconsidering big data for social good by weighing the ethical dilemma of not using existing data, tech, and capacity to improve development programs, possibly even prescribing specific interventions by identifying their probable efficacy through predictive models before they are deployed.

(Slides from the Big Data and Evaluation workshop are available here).

Do you use or have strong feelings about big data for evaluation? Please continue the conversation below.

 

 

MERL on the Money: Are we getting funding for data right?

By Paige Kirby, Senior Policy Advisor at Development Gateway

Time for a MERL pop quiz: Out of US $142.6 billion spent in ODA each year, how much goes to M&E?

A)  $14.1-17.3 billion
B)  $8.6-10 billion
C)  $2.9-4.3 billion

It turns out, the correct answer is C. An average of only $2.9-$4.3 billion — or just 2-3% of all ODA spending — goes towards M&E.

That’s all we get. And despite the growing breadth of logframes and depths of donor reporting requirements, our MERL budgets are likely not going to suddenly scale up.

So, how can we use our drop in the bucket better, to get more results for the same amount of money?

At Development Gateway, we’ve been doing some thinking and applied research on this topic, and have three key recommendations for making the most of MERL funding.

Teamwork

Image Credit: Kjetil Korslien CC BY NC 2.0

When seeking information for a project baseline, midline, endline, or anything in between, it has become second nature to budget for collecting (or commissioning) primary data ourselves.

Really, it would be more cost- and time-effective for all involved if we got better at asking peers in the space for already-existing reports or datasets. This is also an area where our donors – particularly those with large country portfolios – could help with introductions and matchmaking.

Consider the Public Option

Image Credit: Development Gateway

And speaking of donors as a second point – why are we implementers responsible for collecting MERL relevant data in the first place?

If partner governments and donors invested in country statistical and administrative data systems, we implementers would not have such incentive or need to conduct one-off data collection.

For example, one DFID Country Office we worked with noted that a lack of solid population and demographic data limited their ability to monitor all DFID country programming. As a result, DFID decided to co-fund the country’s first census in 30 years – which benefited DFID and non-DFID programs.

The term “country systems” can sound a bit esoteric, pretty OECD-like – but it really can be a cost-effective public good, if properly resourced by governments (or donor agencies), and made available.

Flip the Paradigm

Image Credit: Rafael J M Souza CC BY 2.0

And finally, a third way to get more bang for our buck is – ready or not – Results Based Financing, or RBF. RBF is coming (and, for folks in health, it’s probably arrived). In an RBF program, payment is made only when pre-determined results have been achieved and verified.

But another way to think about RBF is as an extreme paradigm shift of putting M&E first in program design. RBF may be the shake-up we need, in order to move from monitoring what already happened, to monitoring events in real-time. And in some cases – based on evidence from World Bank and other programming – RBF can also incentivize data sharing and investment in country systems.

Ultimately, the goal of MERL should be using data to improve decisions today. Through better sharing, systems thinking, and (maybe) a paradigm shake-up, we stand to gain a lot more mileage with our 3%.

 

The Art and Necessity of Building a Data Culture

By Ben Mann, Policy & engineering nerd. Technology & data evangelist. Working for @DAIGlobal. The original appears here.

We live in the digital era. And the digital era is built on data. Everyone in your business, organization, agency, family, and friend group needs data. We don’t always realize it. Some won’t acknowledge it. But everyone needs and uses data every day to make decisions. One of my colleagues constantly reminds me that we are all data junkies who need that fix to “get sh*t done.”

 

So we all agree that we need data, right? Right.

Now comes the hard part: how do we actually use data? And not just to inform what we should buy on Amazon or who we should follow on Twitter, but how do we do the impossible and (to use the over-used buzzword of the century) “make data-driven decisions?” As I often hear from frustrated friends at conferences or over coffee, there is a collectively identified need to improve data literacy and, at the same time, collective angst over actually improving the who/what/where/when/why/how of data at our companies or organizations.

The short answer: We need to build our own data culture.

It needs to be inclusive and participatory for all levels of data users. It needs to leverage appropriate technology that is paired with responsible processes. It needs champions and data evangelists. It needs to be deep and wide and complex and welcoming where there are no stupid questions.

The long answer: We need to build our own data cultures. And it’s going to be hard. And expensive. And it’s an unreachable destination.

I was blessed to hear Shash Hegde (Microsoft Data Guru extraordinaire) talk about modern data strategies for organizations. He lays out six core elements of a data strategy that any team needs to address to build a culture that is data-friendly and data-engaged:

Vision: Does your organization know their current state of data? Is there a vision for how it can be used and put to work?

People: Maybe more important than anything else on this list, people matter. They are the core of your user group, the ones who will generate most of your data, manage the systems, and consume the insights. Do you know their habits, needs, and desires?

Structure: Not to be confused with stars or snowflakes — we mean the structure of your organization. How business units are formed, who manages what, who controls what resources, and how the pieces fit together.

Process: As a systems-thinking person, I know that there is always a process in play. Even the absence of process is a process in and of itself. Knowing the process and workflow of your data is critical to the flow and use of data in your culture.

Rules: They govern us. They set boundaries and guiding rails, defining our workspaces and playing fields.

Tools+Tech: We almost always start here, but I’d argue it is the least important. With the cloud and modern data platforms, with a sprinkling of AI and ML, it is rarely the bottleneck anymore. It’s important, but should never be the priority.

Building data culture is a journey. It can be endless. You may never achieve it. And unlike the Merry Pranksters, we need a destination to drive towards in building data literacy, use, and acceptance. And if anyone tells you that they can do it cheap or free, please show them the exit ASAP.

Starting your data adventure

At MERLTech DC, we recently hosted a panel on organizational data literacy and our desperate need for more of it. Experts (smarter than me) weighed in on how the heck we get ourselves, our teams, and our companies onto the path to data literacy and a data loving culture.

Three tangible things we agreed on:

💪🏼Be the champion.

Because someone has to, why not you?

👩🏾‍💼Get a senior sponsor.

Unless you are the CEO, you need someone with executive level weight behind you. Trust us (& learn from our own failures).

🧗🏽‍♂️Keep marching on. And invite everyone to join you.

You will face obstacles. You’ll face failures. You may feel like you’re alone. But helping lead organizational change is a rewarding experience — especially with something as awesome as data. It’s a journey everyone should be on and I encourage you to bring along as many coworkers/coconspirators/collaborators as possible. Preferably everyone.

So don’t wait any longer. Start your adventure in your organization today!!

Integrating big data into program evaluation: An invitation to participate in a short survey

As we all know, big data and data science are becoming increasingly important in all aspects of our lives. There is a similar rapid growth in the applications of big data in the design and implementation of development programs. Examples range from the use of satellite images and remote sensors in emergency relief and the identification of poverty hotspots, through the use of mobile phones to track migration and to estimate changes in income (by tracking airtime purchases), to social media analysis to track sentiment and predict increases in ethnic tension, and the use of smartphones and the Internet of Things (IoT) to monitor health through biometric indicators.

Despite the rapidly increasing role of big data in development programs, there is speculation that evaluators have been slower to adopt big data than have colleagues working in other areas of development programs. Some of the evidence for the slow take-up of big data by evaluators is summarized in “The future of development evaluation in the age of big data”.  However, there is currently very limited empirical evidence to test these concerns.

To try to fill this gap, my colleagues Rick Davies and Linda Raftree and I would like to invite those of you who are interested in big data and/or the future of evaluation to complete the attached survey. This survey, which takes about 10 minutes to complete, asks evaluators to report on the data collection and data analysis techniques they use in the evaluations they design, manage or analyze, while asking data scientists how familiar they are with evaluation tools and techniques.

The survey was originally designed to obtain feedback from participants in the MERL Tech conferences on “Exploring the Role of Technology in Monitoring, Evaluation, Research and Learning in Development” that are held annually in London and Washington, DC, but we would now like to broaden the focus to include a wider range of evaluators and data scientists.

One of the ways in which the findings will be used is to help build bridges between evaluators and data scientists by designing integrated training programs for both professions that introduce the tools and techniques of both conventional evaluation practice and data science, and show how they can be combined to strengthen both evaluations and data science research. “Building bridges between evaluators and big data analysts” summarizes some of the elements of a strategy to bring the two fields closer together.

The findings of the survey will be shared through this and other sites, and we hope this will stimulate a follow-up discussion. Thank you for your cooperation and we hope that the survey and the follow-up discussions will provide you with new ways of thinking about the present and potential role of big data and data science in program evaluation.

Here’s the link to the survey – please take a few minutes to fill it out!

You can also join me, Kerry Bruce and Pete York on September 5th for a full day workshop on Big Data and Evaluation in Washington DC.

Using WhatsApp to improve family health

Guest post from ​Yolandi Janse van Rensburg, Head of Content & Communities at Every1Mobile. This post first appeared here.

I recently gave a talk at the MERL Tech 2018 conference in Johannesburg about the effectiveness of WhatsApp as a communication channel to reach low-income communities in the urban slums of Nairobi, Kenya, and to understand their health behaviours and needs.

As the Mobile Economy Report 2018 makes clear, communicating more effectively with a larger audience in hard-to-reach areas has never been easier. Instead of relying on paper questionnaires or instructing field workers to knock on doors, you can now communicate directly with your users, no matter where you are in the world.

With this in mind, some may choose to create a WhatsApp group, send a batch of questions, and wait for quality insights to stream in, but in reality they receive little to no participation from their users.

Why, you ask? WhatsApp can be a useful tool to engage your users, but there are a few lessons we’ve learnt along the way to encourage high levels of participation and generate important insights.

Building trust comes first

Establishing a relationship with the communities you’re targeting can easily be overlooked. Between project deadlines, coordination and insight gathering, it can be easy to neglect forging a connection with our users, offering a window into our thinking, so they can learn more about who we are and what we’re trying to achieve. This is the first step in building trust and acquiring your users’ buy-in to your programme. This lies at the core of Every1Mobile’s programming. The relationship you build with your users can unlock honest feedback that is crucial to the success of your programme going forward.

In late 2017, Every1Mobile ran a six-week WhatsApp pilot with young mothers and mothers-to-be in Kibera and Kawangware, Nairobi, to better understand their hygiene and nutrition practices in terms of handwashing and preparing a healthy breakfast for their families. The U Afya pilot kicked off with a series of on-the-ground breakfast clubs, where we invited community members to join. It was an opportunity for the mothers to meet us, as well as one another, which made them feel more comfortable participating in the WhatsApp groups.

Having our users meet beforehand and become acquainted with our local project team ensured that they felt confident enough to share honest feedback, talk amongst themselves, and enjoy the WhatsApp chats. As a result, 60% of our users attended every WhatsApp session and 84% attended more than half of the sessions.

Design content using SBCC

At Every1Mobile, we do not simply create engaging copy; our content design is based on research into user behaviour, analytics and feedback, tailored with a human-centric approach to inspire creative content strategies and solutions that nurture an understanding of our users.

When we talk about content design, we mean taking a user need and presenting it in the best way possible. Applying content design principles means we do the hard work for the user. And the reward is communication that is simpler, clearer and faster for our communities.

For the U Afya pilot, we incorporated our partner Unilever’s behaviour change approach, the Five Levers for Change, to influence attitudes and behaviours and improve family health and nutrition. The approach aims to create sustainable habits using social and behaviour change communication (SBCC) techniques like signposting, pledging, prompts and cues, and peer support. Each week covered a different topic, including pregnancy, a balanced diet, an affordable and healthy breakfast, breastfeeding, hygiene, and weaning for infants.

Localisation means more than translating words

Low adult literacy in emerging markets can have a negative impact on the outcomes of your behaviour change campaigns. In Kenya, roughly 38.5% of the adult population is illiterate, with bottom-of-the-pyramid communities having little formal education. This means translating your content into a local language may not be enough.

To address this challenge for the U Afya pilot, our Content Designers worked closely with our in-country Community Managers to localise the WhatsApp scripts so they are applicable to the daily lives of our users. We translated our WhatsApp scripts into Sheng, even though English and Kiswahili are the official languages in Kenya. Sheng is a local slang blend of English, Kiswahili, and ethnic words from other cultures. It is widely spoken by urban communities, with over 3,900 words, idioms and phrases. It’s a language that changes and evolves constantly, which means we needed a translator with street knowledge of urban life in Nairobi.

Beyond translating our scripts, we integrated real-life references applicable to our target audience. We worked with our project team to find out what the daily lives of the young mothers in Kibera and Kawangware looked like. What products are affordable and accessible? Do they have running water? What do they cook for their families and what time is supper served? Answers to these questions had a direct impact on our use of emojis, recipes and advice in our scripts. For example, we integrated local foods into the content like uji and mandazi for breakfast and indigenous vegetables including ndengu, ngwashi and nduma.

Can WhatsApp drive behaviour change?

The answer is ‘yes’: mobile has the potential to drive SBCC. We observed an interesting link between shifts in attitude and engagement, with increased self-reported assimilation of new behaviour from women who actively posted during the WhatsApp sessions.

To measure the impact of our pilot on user knowledge, attitudes and behaviours, we designed interactive pre- and post-surveys, which triggered airtime incentives once completed. Surprisingly, the results showed little impact on knowledge, with pre-scores registering higher than anticipated; however, we saw a notable decrease in perceived barriers to adopting these new behaviours and a positive impact on self-efficacy and confidence.

WhatsApp can inform the programme design

Your audience can become collaborators and help you design your programme. We used the insights gathered through the U Afya WhatsApp pilot to create a brand new online community platform that offers young mothers in Nairobi a series of online courses called Tunza Class.

We built the community platform based on the three key life stages identified within the motherhood journey, namely pregnancy and birth, newborn care, and mothers with children under five. The platform includes an interactive space called Sistaz Corner where users can share their views, experiences and advice with other mothers in their community.

With a range of SBCC techniques built into the platform, users can get peer support anonymously, and engage field experts on key health issues. Our Responsible Social Network functionality allows users to make friends, build their profile and show off their community activity which further drives overall user engagement on the site. The Every1Mobile platform is built in a way that enables users to access the online community using the most basic web-enabled feature phone, at the lowest cost for our end user, with fast loading and minimal data usage.

Following the site launch in early August 2018, we are now continuing to use our WhatsApp groups to gather real-time feedback on site navigation, design, functionality, labelling and content, in order to apply iterative design and ensure the mobile platform is exactly what our users want it to be.

 

How MERL Tech Jozi helped me bridge my own data gap

Guest post from Praekelt.org. The original post appeared on August 15 here.

Our team had the opportunity to enjoy a range of talks at the first ever MERL Tech in Johannesburg. Here are some of their key learnings:

During “Designing the Next Generation of MERL Tech Software” by Mobenzi’s CEO Andi Friedman, we were challenged to apply design thinking techniques to critique both our own and our partners’ current projects. I have previously worked on an educational tool aimed at improving the quality of learning for students based in a disadvantaged community in the Eastern Cape, South Africa. I learned that language barriers are a serious concern when it comes to effectively implementing a new tool.

We mapped out a visual representation of solving a communication issue that one of the partners had for an educational programme implemented in rural Eastern Cape, which included drawing various shapes on paper. What we came up with was to replace the posters that had instructions in words with clear visuals that the students were familiar with. This was inspired by the idea that visuals resonate with people more than words.

-Perez Mnkile, Project Manager

Amy Green Presenting on Video Metrics

I really enjoyed the presentation on video metrics from Girl Effect’s Amy Green. She spoke to us about video engagement on Hara Huru Dara, a vlog series featuring social media influencers. What I found really interesting is how hard it is to measure impact or engagement. Different platforms (YouTube vs Facebook) have different definitions for various measurements (e.g. views) and also use a range of algorithms to reach these measurements. Her talk really helped me understand just how hard MERL can be in a digital age! As our projects expand into new technologies, I’ll definitely be more aware of how complicated seemingly simple metrics (for example, views on a video) may be.

-Jessica Manim, Project Manager

Get it right by getting it wrong: embracing failure as a tool for learning and improvement was a theme visible throughout the two-day MERL Tech conference, and one session highlighting this theme was conducted by Annie Martin, a Research Associate at Akros, who explored challenges in offline data capture.

She referenced a project that took place in Zambia to track participants in an HIV prevention program, highlighting some of the technical challenges the project faced along the way. The project involved equipping field workers with an Android tablet and an application developed to capture data offline and sync it when connectivity was available. A number of bugs, due to insufficient user testing of the system, along with server hosting issues, meant that field workers often could not successfully send data or create user IDs.

The lesson, which I believe we strive to include in our development processes, is to focus on iterative piloting, testing, and learning before deployment. This doesn’t necessarily guarantee a bug-free system or service, but it does encourage us to focus our attention on end-users’ and stakeholders’ needs, expectations, and requirements.

-Neville Tietz, Service Designer

Slide from Panel on WhatsApp and engagement

Sometimes, we don’t fully understand the problems that we are trying to solve. Siziwe Ngcwabe from the African Evidence Network gave the opening talk on evidence-based work. It showed me the importance of fully understanding the problem we are solving, and of identifying the markers of success or failure, before we start rolling out solutions. Rachel Sibande from DIAL gave a talk on how her organisation is now using data from mobile network providers to anticipate how a disease outbreak will spread, based on the movement patterns of the network’s subscribers. Using this data they can advise ministries to run campaigns in certain areas and increase medical supplies in others. Rachel’s talk really showed me how easy it is to create an effective solution once you fully understand the problem.

-Katlego Maakane, Project Manager

I really enjoyed the panel discussion on Datafication Discrimination with William Bird, Director of Media Monitoring Africa; Richard Gevers, Director of Open Data Durban; and Koketso Moeti, Executive Director of amandla.mobi, moderated by Siphokazi Mthathi, Executive Director of Oxfam South Africa. Mass collection of data can potentially be used to further discriminate against communities, especially when they are not aware of what their data will be used for. For example, information around sexuality can be used to target individuals at a time when many countries are rapidly reversing anti-discrimination laws.

I also thought it was interesting how projection models for population movement and for the planning of new residential areas and public infrastructure in South African cities are flawed, since the development of these models is outsourced by government to the private sector and different government departments often use different forecasts. Essentially, the various government departments are all planning cities with different projections, further preventing the poorest people from accessing quality services and infrastructure.

For me this session really highlighted the responsibility we have when collecting data in our projects from vulnerable individuals and that we have to ensure that we interrogate what we intend to use this data for. As part of our process, we must investigate how the data could potentially be exploited. We need to empower people to take control of the information they share and be able to make decisions in their best interest.

-Benjamin Vermeulen, Project Manager

Evaluating the money saved by digitizing salary payments

By Zach Andersson, Acting Project Director, LIFT 2. This post originally ran here on the mSTAR blog.

Funded by USAID and led by FHI 360, mSTAR/Liberia ended activities in May 2018 after enrolling 4,870 civil servants across Liberia into mobile salary payments and successfully handing the mobile salary payment program over to the government. This post is part of a summer blog series on mSTAR/Liberia: what went well and why, how we overcame challenges, and lessons for the future.  

Exactly how much could government ministries save by digitizing salary payments? In Liberia, we do the math to find out.

The Government of Liberia, like many governments in developing economies, faces resource constraints which affect public service delivery. With an annual budget of $526 million, the lack of capital is evident across the country, from poor road quality to broken down ambulances.

Difficulties like bad roads, a lack of banks and low liquidity lead health and education workers to leave their shifts for hours and even days to pick up their salaries. From 2016 to 2018, mSTAR worked with the Ministry of Health (MOH) and the Ministry of Education (MOE) to digitize their workers’ salaries. We believed digital payments could not only save staff the time and money spent traveling to a brick-and-mortar bank each month, but could also keep workers from leaving work to do so. We collected data over the course of the two-year project to assess our progress and pivot, as required, to achieve high satisfaction among salary recipients and create a successful, sustainable system.

Standard survey tools developed and implemented by the mSTAR team demonstrated that when picking up salaries from mobile money agents instead of banks, health and education workers reduced the amount of money they spent by 58 percent and decreased the amount of time missed from their jobs by an average of 12 hours per month.

Going beyond the benefit of mobile money salaries for individuals, mSTAR sought to estimate the monetary value of the productivity lost when staff left work to collect their salaries each month.

How did we do this?

1. The first step was calculating the total self-reported hours missed away from work when collecting salaries via mobile money and via direct deposit at the bank by the 194 MOE and 222 MOH staff surveyed. Using assumptions for both ministries of an eight-hour workday, five-day work week and four-week work month, the total number of possible hours per month each staff member could spend on the job was also calculated (160). Keep in mind that banks have restrictive hours (normally 9am-2pm), whereas with mobile money there is more flexibility. Mobile money agents set their own schedules and usually are available after work, which allows health and education staff to remain on the job longer rather than leave their work to collect their pay.

2. Using the ministry survey samples, mSTAR calculated the proportion of all possible work time missed for both mobile money and direct deposit. To do this, mSTAR aggregated the time missed away from work reported by surveyed staff and converted it to hours, then divided by the total number of hours all those staff together could have worked. For the education sample, this came to 0.6 percent through mobile money and 10.9 percent through direct deposit, whereas for health, the figures were 0.7 percent and 5.9 percent respectively.

3. Referencing a UNCDF High Volume Payments Mapping presentation given June 28, 2017, we calculated the net salary paid for all staff each day.

4. We then estimated the “cost” to both Ministries in terms of lost productivity resulting from staff absence from work to collect their salaries. We multiplied the net salary totals for the entire population by the proportions of all possible work time missed by staff surveyed (step #2 above).

5. Subtracting the mobile money estimated costs of work time missed from the direct deposit estimated costs, the total estimated savings for both Ministries are presented below by workday, work month and work year.

(Table: total estimated savings for both Ministries by workday, work month, and work year.)
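The arithmetic behind these steps fits in a few lines. In the sketch below, the proportions of work time missed are the survey figures reported above; the daily net-salary totals are hypothetical placeholders, since the actual figures came from the UNCDF presentation and appear only in the original tables:

```python
# Reproducing steps 2, 4, and 5 above. The shares of work time missed are the
# survey figures reported in the post; the daily net-salary totals are
# HYPOTHETICAL placeholders standing in for the UNCDF figures.
WORK_DAYS_PER_MONTH = 20    # 5-day week x 4-week month, as assumed above
WORK_MONTHS_PER_YEAR = 12

ministries = {
    #        share of work time missed          placeholder net salary
    #        via mobile money ("mm") / bank ("dd")  paid per workday, US$
    "MOE": {"mm": 0.006, "dd": 0.109, "daily_net_salary": 100_000},
    "MOH": {"mm": 0.007, "dd": 0.059, "daily_net_salary": 100_000},
}

for name, m in ministries.items():
    lost_dd = m["daily_net_salary"] * m["dd"]  # productivity lost, direct deposit
    lost_mm = m["daily_net_salary"] * m["mm"]  # productivity lost, mobile money
    saving = lost_dd - lost_mm                 # step 5: the difference is the saving
    print(f"{name}: saves ${saving:,.0f}/day, "
          f"${saving * WORK_DAYS_PER_MONTH:,.0f}/month, "
          f"${saving * WORK_DAYS_PER_MONTH * WORK_MONTHS_PER_YEAR:,.0f}/year")
```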

So, what does this mean?

If all MOH and MOE staff transitioned to mobile salary payments, the government could potentially save around $4 million – the estimated value of the productive work time lost due to staff leaving their jobs to collect their pay.

While the findings provide food for thought for Government, there are a few critical limitations to keep in mind. The proportion of all staff surveyed by mSTAR from both ministries is small – only 3.1 percent of the MOH and 1.1 percent of the MOE. mSTAR cannot say with certainty that the staff surveyed are representative of the wider MOE and MOH populations. Staff surveyed by mSTAR opted into the mobile money salary payment – many other staff chose not to join. Therefore, surveyed staff may be predisposed to missing less time from work than others who have not yet joined (i.e. the rest of the ministry populations), which may reduce the utility of the multipliers.

The Government of Liberia seemed to take these results seriously. At the project closeout event in front of a large audience and media, the Director of Pay, Benefits and Pension at the Civil Service Agency, Roland Kallon, spoke of mobile money as the way of the future. Referencing the savings, he said, “mobile money process is what everyone should be gearing toward because it makes a lot sense…if you compare mobile money with any other mode of payment, anyone will choose mobile money.”

Zach Andersson is a Monitoring and Evaluation Advisor at FHI 360 and an Acting Project Director for FHI 360’s Livelihood and Food Security Technical Assistance (LIFT II) project. Zach has over 10 years of experience in disaster relief and global development program design, operational oversight, research, M&E and information management, proposal development, and technical assistance with successful, extended field deployments to Uganda, Ghana, Haiti, India, Lesotho, Liberia, Malawi and Tanzania. He has produced and administered surveys, facilitated training and workshops, created assessment tools, edited and published guidance documents, built and maintained relationships with government and bilateral stakeholders, led research activities, and collaborated in the development of M&E indicators for disaster relief, economic strengthening and HIV work in several separate roles. 

Evaluating for Trust in Blockchain Applications

by Mike Cooper

This is the fourth in a series of blogs aimed at discussing and soliciting feedback on how the blockchain can benefit MEL practitioners in their work. The series includes: What does Blockchain Offer to MERL, Blockchain as an M&E Tool, How Can MERL Inform Maturation of the Blockchain, this post, and future posts on integrating blockchain into MEL practices. The series leads into a MERL Tech Pre-Workshop on September 5th, 2018 in Washington D.C. that will go into depth on possibilities and examples of MEL blockchain applications. Register here!

Enabling trust in an efficient manner is the primary innovation that the blockchain delivers, through the use of cryptography and consensus algorithms. Trust is usually built through painstaking relationship-building and iterative interactions. The blockchain alleviates the need for much of the resources required to build this trust, but that does not mean that stakeholders will automatically trust a blockchain application. Any blockchain application will still need trust-building mechanisms, and MEL practitioners are uniquely situated to inform how these trust relationships can mature.

Function of trust in the blockchain

Trust is expensive. You pay fees to banks, who provide confidence to sellers who take your debit card as payment and trust that they will receive funds for the transaction. Agriculture buyers pay fees to third parties (who can certify that the produce is organic, etc.) to validate quality control on products coming through the value chain. Often sellers do not see the money from debit card transactions in their accounts automatically, and agriculture actors perpetually face the pressures that result from being paid for goods and/or services they provided weeks previously. The blockchain could alleviate many of these harmful effects by substituting trust in math for trust in humans.

We pay these third parties because they are trusted agents, and these trusted agents can be destructive rent seekers at times, creating profits that do not add value to the goods and services they work with. End users in these transactions are used to standard payment services for utility bills, school fees, etc. This history of iterative transactions has resulted in a level of trust in these processes. It may not be equitable, but it is what many are used to, and introducing an innovation like blockchain will require an understanding of how these processes influence stakeholders, what their needs are, and how they might be nudged to trust something different like a blockchain application.

How MEL can help understand and build trust

Just as microfinance had to pilot different possible solutions to build understanding of its new methods of sending and receiving money and its new financial services, so will blockchain applications. This is an area where MEL can add value to achieving mass impact: by designing the methods to iteratively build this understanding and test solutions.

MEL has done this before.  Any project that requires relationship building should be based on understanding the mindset and incentives for relevant actions (behavior) amongst stakeholders to inform the design of the “nudge” (the treatment) intended to shift behavior.

Many of the programs we work on as MEL practitioners involve various forms and levels of relationship building, which is essentially “trust”. There have been many evaluations of relationship building, whether in microfinance, agriculture value chains, or policy reform. In each case, “trust” must be defined as a behavior change outcome that is “nudged” based on the framing (mindset) of the stakeholder. This means that each stakeholder, depending on their mindset and the behavior required to facilitate blockchain uptake, will require a customized nudge.

The role of trust in project selection and design: What does that mean for MEL

Defining “trust” should begin during project selection and design. Project selection and design criteria and due diligence are invaluable for MEL. Many of the dimensions of evaluability assessments refer back to the work that is done in the project selection/design phase (which is why some argue evaluability assessments are essentially project design tools). When it comes to blockchain, the USAID Blockchain Primer provides some of the earliest thinking on how to select and design blockchain projects, hence it is a valuable resource for MEL practitioners who want to start thinking about how they will evaluate blockchain applications.

What should we be thinking about?

Relationship building and trust are behaviors, so blockchain theories of change should have outcomes stated as behavior changes by specific stakeholders (hence the value add of tools like stakeholder analysis and outcome mapping). However, these Theories of Change (TOC) are only as good as what informs them, so building a knowledge base of blockchain applications, as well as previous lessons learned from evidence on relationship building and trust, will be critical to developing a MEL strategy for blockchain applications.

If you’d like to discuss this and related aspects, join us on September 5th in Washington, DC, for a one-day workshop on “What can the blockchain offer MERL?”

Michael Cooper is a former Associate Director at Millennium Challenge Corporation and the U.S. State Dept in Policy and Evaluation.  He now heads Emergence, a firm that specializes in MEL and Blockchain services. He can be reached at emergence.cooper@gmail.com or through the Emergence website.