The four panelists described the kinds of administrative or “routine” data they are using in their work. For example, in Kenya, educational records, client information from financial institutions, hospital records, and health outcomes data are being used to plan and implement COVID-19 responses and to evaluate the impact of different COVID-related policies that governments have put in place or are considering. In Malawi, administrative data is combined with other sources, such as Google mobility data, to understand how migration might be affecting the virus’s spread. COVID-19 is putting a spotlight on weaknesses and gaps in existing administrative data systems.
Benefits of administrative data include:
Data is generated through normal operations and does not require an additional survey to create it
It can be more relevant than a survey because it covers a large swath of the population
It is an existing data source during COVID when it’s difficult to collect new data
It can be used to create dashboards for decision-makers at various levels
Challenges of administrative data include:
Data sits in silos and the systems are not designed to be interoperable
Administrative data may leave out those who are not participating in a government program
Data sets are time-bound to the life of the program
Some administrative data systems are outdated and have poor quality data that is not useful for decision-making or analysis
There is demand for beautiful dashboards and maps, but insufficient attention to the underlying data processes needed to produce information that can actually be used
Real-time data is not possible when there is no Internet connectivity
There is insufficient attention to data privacy and protection, especially for sensitive data
Institutions may resist providing data if weaknesses are highlighted through the data or they think it will make them look bad
Recommendations for better use of administrative data in the public sector:
Understand the data needs of decision-makers and build capacity to understand and use data systems.
Map the data that exists, assess its quality, and identify gaps.
Design and enact policies and institutional arrangements, tools, and processes to make sure that data is organized and interoperable.
Automate processes with digital tools to make them more seamless.
Focus on enhancing underlying data collection processes to improve the quality of administrative data; this includes making it useful for those who provide the data so that it is not yet another administrative burden with no local value.
Assign accountability for data quality across the entire system.
Learn from the private sector, but remember that the public sector has different incentives and goals.
Rather than fund more research on administrative data, donors should put funds into training on data quality, data visualization, and other skills related to data use and data literacy at different levels of government.
Determine how to improve data quality and use of existing administrative data systems rather than building new ones.
Make administrative data useful to those who are inputting it to improve data quality.
Data is not always available, and it can be costly to produce. One challenge is generating data cheaply and quickly to meet the needs of decision-makers within the operational constraints that enumerators face. Another is ensuring that the process is high quality and human-centered, so that we are not simply extracting data. This can be a challenge where connectivity and reach are low, network capacity and access are poor, and smartphone access is limited. Enumerator training is also difficult when it must be done remotely, especially if enumerators are new to technology and more accustomed to paper-based surveys.
Some recommendations arising from the session included:
Learn and experiment as you try new things. For example, tracking when and why people are dropping off a survey and finding ways to improve the design and approach. This might be related to the time of the call or length of the survey.
It’s not only about phone surveys. There are other tools. For example, WhatsApp has been used successfully during COVID-19 for collecting health data.
Don’t just put your paper processes onto a digital device. Instead, consider how to take greater advantage of digital devices and tools to find better ways of monitoring. For example, could we incorporate sensors into the monitoring from the start? At the same time, be careful not to introduce technologies that are overly complex.
Think about exclusion and access. Who are we excluding when we move to remote monitoring? Children? Women? Elderly people? We might be introducing bias if we are going remote. We also cannot observe if vulnerable people are in a safe place to talk if we are doing remote monitoring. So, we might be exposing people to harm or they could be slipping through the cracks. Also, people self-select for phone surveys. Who is not answering the phone and thus left out of the survey?
Consider providing airtime but make sure this doesn’t create perverse incentives.
Ethics and doing no harm are key principles. If we are forced to deliver programs remotely, this involves experimentation. And we are experimenting with people’s lives during a health crisis. Consider including a complaints channel where people can report any issues.
Ensure data is providing value at the local level, and help teams see what the whole data process is and how their data feeds into it. That will help improve data quality and reduce the tendency to ‘tick the box’ for data collection or find workarounds.
Design systems for interoperability so that data can be integrated with other data for better insights or updated automatically. Data standards need to be established so that different systems capture data in the same way or the same format.
Create a well-designed change management program to bring people on board and support them. Role modeling by leaders can help to promote new behaviors.
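One of the recommendations above, tracking when and why people drop off a survey, can be sketched in a few lines of Python. This is an illustrative example only: the call log, field names, and evening cut-off hour are all invented.

```python
from collections import Counter

# Hypothetical call log: each record notes the last question a respondent
# answered before the call ended, and the hour the call was placed.
call_log = [
    {"respondent": "A", "last_question": 12, "call_hour": 9},
    {"respondent": "B", "last_question": 30, "call_hour": 10},
    {"respondent": "C", "last_question": 12, "call_hour": 18},
    {"respondent": "D", "last_question": 30, "call_hour": 11},
    {"respondent": "E", "last_question": 5,  "call_hour": 19},
]
TOTAL_QUESTIONS = 30  # survey length (invented)

# Where do people most often drop off?
dropoffs = Counter(
    r["last_question"] for r in call_log if r["last_question"] < TOTAL_QUESTIONS
)
print(dropoffs.most_common(1))  # question 12 is the most common exit point

# Completion rate by time of call: are evening calls cut short?
evening = [r for r in call_log if r["call_hour"] >= 17]
evening_completion = sum(
    r["last_question"] == TOTAL_QUESTIONS for r in evening
) / len(evening)
print(evening_completion)  # 0.0 in this toy data
```

Even a crude analysis like this can point to design fixes, such as shortening the questionnaire around a common exit point or shifting call times.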
Further questions to explore:
How can we design monitoring to be remote from the very start? What new gaps could we fill and what kinds of mixed methods could we use?
What two-way platforms are most useful and how can they be used effectively and ethically?
Can we create a simple overview of opportunities and threats of remote monitoring?
How can we collect qualitative data, e.g., focus groups and in-depth interviews?
How can we keep respondents safe? What are the repercussions of asking sensitive questions?
How can we create data continuity plans during the pandemic?
Over the past decade, monitoring, evaluation, research and learning (MERL) practices have become increasingly digitalized. The COVID-19 pandemic has accelerated this digitalization, due to travel restrictions, quarantines, and social distancing orders from governments desperate to slow the spread of the virus and lessen its impact.
Data is a necessary and critical part of COVID-19 prevention and response efforts: to understand where the virus might appear next, who is most at risk, and where resources should be directed. However, we need to be sure that we are not putting people at risk of privacy violations or misuse of personal data, and we need to manage that data responsibly so that we don’t unnecessarily create fear or panic.
MERL practitioners have clear responsibilities when sharing, presenting, consuming, and interpreting data. Individuals and institutions may use data to gain prestige or to justify government decisions, and this can allow bias to creep in. Data quality is critical for informing decisions, and information gaps create the risk of misinformation and flawed understanding. We need to embrace uncertainty and the limitations of the science, provide context and definitions so that our sources are clear, and ensure transparency around the numbers and the assumptions that underpin our work.
MERL practitioners should provide contextual information and guidance on how to interpret the data so that people can make sense of it in the right way. We should avoid cherry-picking data to prove a point, and we should be aware that data visualization carries power to sway opinions and decisions. It can also influence behavior change in individuals, so we need to take responsibility for that. We also need to find ways to visualize data for lay people and non-technical sectors.
Critical data is needed, yet it might be used in negative or harmful ways, for example, COVID-related stigmatization that can affect human dignity. We must not override ethical and legal principles in our rush to collect data. Transparency around data collection processes and use is also needed, as well as data minimization. Some might be taking advantage of the situation to amass large amounts of data for alternative purposes, which is unethical. Large amounts of data also bring increased risk of data breaches. When people are scared, as in COVID times, they will be willing to hand over data. We need to provide oversight and keep watch over government entities, health facilities, and third-party data processors to ensure data is protected and not misused.
MERL practitioners are seeking more guidance and support on: aspects of consent and confidentiality; bias and interference in data collection by governments and community leaders; overcollection of data leading to fatigue; misuse of sensitive data such as location data; potential for re-identification of individuals; data integrity issues; lack of encryption; and some capacity issues.
Good practices and recommendations include ethical clearance of data and data assurance structures; rigorous methods to reduce bias; third party audits of data and data protection processes; localization and contextualization of data processes and interpretation; and “do no harm” framing.
By Mala Kumar, GitHub Social Impact, Open Source for Good
I lead a program on the GitHub Social Impact team called Open Source for Good — detailed in a previous MERL Tech post and (back when mass gatherings in large rooms were routine) at a lightning talk at the MERL Tech DC conference last year.
Before joining GitHub, I spent a decade wandering around the world designing, managing, implementing, and deploying tech for international development (ICT4D) software products. In my career, I found open source in ICT4D tends to be a polarizing topic, and often devoid of specific arguments. To advance conversations on the challenges, barriers, and opportunities of open source for social good, my program at GitHub led a year-long research project and produced a culminating report, which you can download here.
One of the hypotheses I posed at the MERL Tech conference last year, and that our research subsequently confirmed, is that IT departments and ICT4D practitioners in the social sector* have relatively less budgetary decision-making power than their counterparts at corporate IT companies. This makes it hard for IT and ICT4D staff to justify the use of open source in their work.
In the past year, Open Source for Good has solidified its strategy around helping the social sector more effectively engage with open source. To that end, we started the MERL Center, which brings together open source experts and MERL practitioners to create resources to help medium and large social sector organizations understand if, how, and when to use open source in their MERL solutions.**
With the world heading into unprecedented economic and social change and uncertainty, we’re more committed than ever at GitHub Social Impact to helping the social sector effectively use open source and to build on a digital ecosystem that already exists.
Thanks to our wonderful working group members, the MERL Center has identified its target audiences, fleshed out the goals of the Center, set up a basic content production process, and is working on a few initial contributions to its two working groups: Case Studies and Beginner’s Guides. I’ll announce more details in the coming months, but I am also excited to announce that we’re committing funds to get a MERL Center public-facing website live to properly showcase the materials the MERL Center produces and how open source can support technology-enabled MERL activities and approaches.
As we ramp up, we’re now inviting more people to join the MERL Center working groups! If you are a MERL practitioner with an interest in or knowledge of open source, or you’re an open source expert with an interest in and knowledge of MERL, we’d love to have you! Please feel free to reach out to me with a brief introduction to you and your work, and I’ll help you get on-boarded. We’re excited to have you work with us!
*We define the “social sector” as any organization or company that primarily focuses on social good causes.
Guest post from Janna Rous. Originally published here.
So, all of a sudden you’re stuck at home because of the new coronavirus. You’re looking at your M&E commitments and your program commitments. Do you put them all on hold and postpone them until the coronavirus threat has passed and everything goes back to normal? Or is there a way to still get things done? This article reviews 8 ways you can adapt your M&E during the pandemic.
Here are a few ideas that you and your team might consider doing to make sure you can stay on track (and maybe even IMPROVE your MEAL practices) even if you might currently be in the middle of a lockdown, or if you think you might be going into a lockdown soon:
1. Phone Call Interviews instead of In-Person Interviews
Do you have any household assessments or baseline surveys or post-distribution monitoring that you had planned in the next 1 to 3 months? Is there a way that you can carry out these interviews by phone or WhatsApp calls? This is the easiest and most direct way to carry on with your current M&E plan. Instead of doing these interviews face-to-face, just get them on a call. I’ve created a checklist to help you prepare for doing phone call interviews – click here to get the “Humanitarian’s Phone Call Interview Checklist”. Here are a few things you need to think through to transition to a phone-call methodology:
You need phone numbers and names of people that need to be surveyed. Do you have these? Or is there a community leader who might be able to help you get these?
You also need to expect that a LOT of people may not answer their phone. So instead of “sampling” people for a survey, you might want to just plan on calling almost everyone on that list.
Just like for a face-to-face interview, you need to know what you’re going to say. So you need to have a script ready for how you introduce yourself and ask for consent to do a phone questionnaire. It’s best to have a structured interview questionnaire that you follow for every phone call, just like you would in a face-to-face assessment.
You also need to have a way to enter data as you ask the questions. This usually depends on what you’re most comfortable with – but I recommend preparing an ODK or KoboToolbox questionnaire, just like you would for an in-person survey, and filling it out as you do the interview over the phone. I find it easiest to enter the data into KoboToolbox “Webform” instead of the mobile app, because I can type information faster into my laptop rather than thumb-type it into a mobile device. But use what you have!
If you’re not comfortable in KoboToolbox, you could also prepare an Excel sheet for directly entering answers – but this will probably require a lot more data cleaning later on.
When you’re interviewing, it’s usually faster to type down the answers in the language you’re interviewing in. If you need your final data collection to be in English, go back and do the translation after you’ve hung up the phone.
If you want a record of the interview, ask if you can record the phone call. If the person says yes, record it so you can go back and double-check an answer if you need to.
Very practically – if you’re doing lots of phone calls in a day, it is easier on your arm and your neck if you use a headset instead of holding your phone to your ear all day!
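To make the structured questionnaire concrete, here is a minimal sketch of what a KoboToolbox/ODK form might look like as an XLSForm. All question names, labels, and the consent wording below are placeholders, not a recommended script:

```
survey sheet
type               name       label
start              start
note               intro      [introduction and consent script goes here]
select_one yes_no  consent    Do you agree to take part in this phone survey?
text               resp_name  Respondent name
integer            hh_size    How many people live in your household?

choices sheet
list_name  name  label
yes_no     yes   Yes
yes_no     no    No
```

The same form definition works for in-person and phone-based data collection, which makes it easy to switch between the KoboToolbox mobile app and the webform.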
2. Collect Videos & Photos Directly from Households and Communities
When you’re doing any in-person MEAL activities, you’re always able to observe evidence. You can look around and SEE impact, you don’t just hear it through an interview or group discussion. But when you’re doing M&E remotely, you can’t double-check to see what impact really looks like. So I recommend:
Connect with as many beneficiaries and team members as possible through WhatsApp or another communication app and collect photos and videos of evidence directly from them.
Video – Maybe someone has a story of impact they can share with you through video. Or if you’re overseeing a Primary Health Care clinic, perhaps you can have a staff member walk you through the clinic with a video so you can do a remote assessment.
Pictures – Maybe you can ask everyone to send you a picture of (for example) their “hand washing station with soap and water” (if you’re monitoring a WASH program). Or perhaps you want evidence that the local water point is functioning.
3. Programme Final Evaluation
It’s a good practice to do a final evaluation review when you reach the end of a program. If you have a program finishing in the next 1-3 months, and you want to do a final review to assess lessons learned overall, then you can also do this remotely!
Make a list of all the stakeholders that would be great to talk to: staff members, a few beneficiaries, government authorities (local and/or national), other NGOs, coordination groups, partner organizations, local community leaders.
Then go in search of either their phone numbers, their email addresses, their Skype accounts, or their WhatsApp numbers and get in touch.
It’s best if you can get on a video chat with as many of them as possible – because it’s much more personal and easy to communicate if you can see one another’s faces! But if you can just talk with audio – that’s okay too.
Prepare a semi-structured interview, a list of questions you want to talk through about the impact, what went well, what could have gone better. And if there’s anything interesting that comes up, don’t worry about coming up with some new questions on the spot or skipping questions that don’t make sense in the context.
You can also gather together any monitoring reports/analysis that was done on the project throughout its implementation period, plus pictures of the interventions.
Use all this information to create a final “lessons learned” evaluation document. This is a fantastic way to continually improve the way you do humanitarian programming.
4. Adapt Your Focus Group Discussion Plan
If everyone is at home because your country has imposed a lockdown, it will be very difficult to do a focus group discussion because… you can’t be in groups! So, decide with your team whether it might be better to switch your monitoring activity from collecting qualitative data in group discussions to one-on-one phone interviews with several people to collect the same information.
There are some dynamics that you will miss in one-to-one interviews, information that may only come out during group discussions. (Especially where you’re collecting sensitive or “taboo” data.) Identify what that type of information might be – and either skip those types of questions for now, or brainstorm how else you could collect the information through phone-calls.
5. Adapt Your Key Informant Interviews
If you normally carry out Key Informant Interviews, it would be a great idea to think about what “extra” questions you need to ask this month in the midst of the coronavirus pandemic.
If you normally ask questions around your program sector areas, think about just collecting a few extra data points about feelings, needs, fears, and challenges that are a reality in light of Covid-19. Are people facing any additional pressures due to the epidemic? Or are there any new humanitarian needs right now? Are there any upcoming needs that people are anticipating?
It goes without saying that if your Key Informant Interviews are normally in person, you’ll want to carry these out by phone for the foreseeable future!
6. What To Do About Third Party Monitoring
Some programs and donors use Third Party Monitors to assess their program results independently. If you normally hire third party monitors, and you’ve got some third party monitoring planned for the next 1-3 months, you need to get on the phone with this team and make a new plan. Here are a few things you might want to think through with your third party monitors:
First, think through whether it is worth it for a third party monitor to assess results remotely. Is it better to postpone their monitoring? Or is it worth it to carry on regardless?
What is the budget implication? If cars won’t be used, is there any cost-savings? Is there any additional budget they’ll need for air-time costs for their phones?
Make sure there is a plan to gather as much photo and video evidence as possible (see point 2 above!)
If they’re carrying out phone call interviews it would also be a good recommendation to record phone calls if possible and with consent, so you have the records if needed.
7. Manage Expectations – The Coronavirus Pandemic May Impact Your Program Results.
You probably didn’t predict that a global pandemic would occur in the middle of your project cycle and throw your entire plan off. Go easy on yourself and your team! It is most likely that the results you’d planned for might not end up being achieved this year. Your donors know this (because they’re probably also on lockdown). You can’t control the pandemic, but you can control your response. So proactively manage your own expectations, your manager’s expectations and your donor’s expectations.
Get on a Skype or Zoom call with the project managers and review each indicator of your M&E plan. In light of the pandemic, what indicator targets will most likely change?
Look through the baseline numbers in your M&E plan – is it possible that the results at the END of your project might be worse than even your baseline numbers? For example, if you have a livelihoods project, it is possible that income and livelihoods will be drastically reduced by a country-wide lockdown. Or are you running an education program? If schools have been closed, then will a comparison to the baseline be possible?
Once you’ve done a review of your M&E plan, create a very simple revised plan that can be talked through with your program donor.
8. Talk To Your Donors About What You Can Do Remotely
When you’re on the phone with your donors, don’t only talk about revised program indicators.
Also talk about a revised timeframe – is there any flexibility on the program timeframe, or deadlines for interim reporting on indicators? What are their expectations?
Also talk about what you CAN do remotely. Discuss with them the plan you have for carrying on everything possible that can be done remotely.
And don’t forget to discuss financial implications of changes to timeframe.
Just about everyone I know in the ICT4D and MERL communities has interacted with, presented, or created a chart, dashboard, infographic, or other data visualization. We’ve also all seen charts that mislead, confuse, or otherwise fall short of making information more accessible.
The goal of the Data Visualization Society is to collect and establish best practices in data viz, fostering a community that supports members as they grow and develop data visualization skills. With more than 11.5K members from 123 countries on our first birthday, the society has grown faster than any of the founders imagined.
There are three reasons you should join the Data Visualization Society to improve your data visualizations in international development.
Self-service data visualization tools are everywhere, but that doesn’t mean we’re always building usable charts and graphs.
Just about anyone can make a chart if they have a table of data, thanks to the wide range of tools out there (Flourish, RAWgraphs, Datawrapper, Tableau, PowerBI…to name a few). Without a knowledge of data viz fundamentals though, it’s easy to use these tools to create confusing and misleading graphs.
A recent study on user-designed dashboards in DHIS2 (a commonly used data management and analysis platform in global health) found that “while the technical flexibility of [DHIS2] has been taken advantage of by providing platform customization training…the quality of the dashboards created face numerous challenges.” (Aprisa & Sebo, 2020).
The researchers used a framework from Stephen Few to evaluate the frequency of five different kinds of ‘dashboard problems’ on 80 user-designed sample dashboards. The five problem ‘types’ included: context, dashboard layout, visualization technique, logical, and data quality.
Of the 80 dashboards evaluated, 69 (83.1%) had at least one visualization technique problem (Aprisa & Sebo, 2020). Many of the examples shared in the paper could be easily addressed, like transforming the pie chart made of slices representing points in time into a line graph.
With so many tools at our fingertips, how can we use them to develop meaningful, impactful charts and interactive dashboards? Learning the fundamentals of data visualization is an excellent place to start, and DVS offers a free-to-join professional home to learn those fundamentals.
Many of the communities that exist around data visualization are focused on specific tools, which may not be relevant or accessible for your organization.
In ICT4D, we often have to be scrappy and flexible. That means learning how to work with open source tools, hack charts in Excel, and often make decisions about what tool to use driven as much by resource availability as functionality.
There are many great tool-specific communities out there: TUGs, PUGs, RLadies, Stack Overflow, and more. DVS emerged out of a need to connect people looking to share best practices across the many disciplines doing data viz: journalists, evaluators, developers, graphic designers, and more. That means we’re not limited to one tool or platform, so we can look for what fits a given project or audience.
After joining DVS, you’ll receive an invite to the Society’s Slack, a community “workspace” with channels on different topics and for connecting different groups of people within the community. You can ask questions about any data viz tool in the #topic-tools channel, and explore emerging and established platforms with honest feedback on how other members have used them in their work.
Data visualization training often means one-off workshops. Attendees leave enthusiastic, but then don’t have colleagues to rely on when they run into new questions or get stuck.
Data visualization isn’t consistently taught as a foundation skill for public health or development professionals.
In university, there may be a few modules within a statistics or evaluation class, but seldom are there dedicated, semester long classes on visualization; those are reserved for computer science and analytics programs (though this seems to be slowing changing!). Continuing education in data viz is usually short workshops, not long-term mentoring relationships.
So what happens when people are asked to “figure it out” on the job? Or attend a two-day workshop and come away as a resident data viz expert?
Within DVS, our leadership and our members step up to answer questions and be that coach for people at all stages of learning data visualization. We even have a dedicated feedback space within Slack to share examples of data viz work in progress and get feedback.
DVS also enables informal connections on a wide range of topics. Go to #share-critique to post work-in-progress visualizations and seek feedback from the community. We also host quarterly challenges where you can get hands-on practice with provided data sets to develop your data viz skills, and we have plans for a formal mentorship program to launch in 2020.
Join DVS today to get its benefits – members from Africa, Asia, and other underrepresented areas are especially encouraged to join us now!
Have any questions? Or ideas on ways DVS can support our global membership base? Find me on Twitter – my DMs are open.
By Alexis Banks, Jennifer Himmelstein, and Rachel Dickinson
Social network analysis (SNA) is a powerful tool for understanding the systems of organizations and institutions in which your development work is embedded. It can be used to create interventions that are responsive to local needs and to measure systems change over time. But, what does SNA really look like in practice? In what ways could it be used to improve your work? Those are the questions we tackled in our recent MERL Tech session, Visualizing Your Network for Adaptive Program Decision Making. ACDI/VOCA and Root Change teamed up to introduce SNA, highlight examples from our work, and share some basic questions to help you get started with this approach.
SNA is the process of mapping and measuring relationships and information flows between people, groups, organizations, and more. Using key SNA metrics enables us to answer important questions about the systems where we work. Common SNA metrics include (learn more here):
Reachability, which helps us determine if one actor, perhaps a local NGO, can access another actor, such as a local government;
Distance, which is used to determine how many steps, or relationships, there are separating two actors;
Degree centrality, which is used to understand the role that a single actor, such as an international NGO, plays in a system by looking at the number of connections with that organization;
Betweenness, which enables us to identify brokers or “bridges” within networks by identifying actors that lie on the shortest path between others; and
Change Over Time, which allows us to see how organizations and relationships within a system have evolved.
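To make these metrics concrete, here is a small, self-contained sketch (in Python, without any SNA library) that computes reachability, distance, and degree on a toy partnership network. The organization names and relationships are invented for illustration:

```python
from collections import deque

# Toy directed partnership network (organization names are invented):
# an edge (A, B) means A reports a relationship with B.
edges = [
    ("Local NGO", "INGO"),
    ("INGO", "Local Government"),
    ("Farmer Co-op", "INGO"),
    ("Local NGO", "Farmer Co-op"),
]

# Build an adjacency list.
graph = {}
for src, dst in edges:
    graph.setdefault(src, set()).add(dst)
    graph.setdefault(dst, set())

def distance(graph, start, goal):
    """Shortest number of relationship 'steps' from start to goal (BFS),
    or None if goal is not reachable from start."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, steps = queue.popleft()
        if node == goal:
            return steps
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, steps + 1))
    return None

# Reachability and distance: the local NGO can reach the local
# government in 2 steps, via the INGO.
print(distance(graph, "Local NGO", "Local Government"))  # 2

# Degree centrality (out-degree here): number of direct partners.
degree = {org: len(partners) for org, partners in graph.items()}
print(degree["Local NGO"])  # 2
```

In practice, dedicated tools compute these metrics (plus betweenness and change over time) at scale, but the underlying ideas are as simple as the sketch above.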
SNA in the Program Cycle
SNA can be used throughout the design, implementation, and evaluation phases of the program cycle.
Design: Teams at Root Change and ACDI/VOCA use SNA in the design phase of a program to identify initial partners and develop an early understanding of a system–how organizations do or do not work together, what barriers are preventing collaboration, and what strategies can be used to strengthen the system.
As part of the USAID Local Works program, Root Change worked with the USAID mission in Bosnia and Herzegovina (BiH) to launch a participatory network map that identified over 1,000 organizations working in community development in BiH, many of which had been previously unknown to the mission. It also provided the foundation for a dialogue with system actors about the challenges facing BiH civil society.
To inform project design, ACDI/VOCA’s Feed the Future Tanzania NAFAKA II Activity, funded by USAID, conducted a network analysis to understand the networks associated with village-based agricultural advisors (VBAAs): what services they were already offering to farmers, which had the most linkages to rural actors, which actors were serving as bottlenecks, and more. This helped the project identify which VBAAs to work with through small grants and technical assistance (e.g., key actors), and what additional linkages needed to be built between VBAAs and other types of actors.
Implementation: We also use SNA throughout program implementation to monitor system growth, increase collaboration, and inform learning and program design adaptation. ACDI/VOCA’s USAID/Honduras Transforming Market Systems Activity uses network analysis as a tool to track business relationships created through primary partners. For example, one such primary partner is the Honduran chamber of tourism, which facilitates business relationships through group training workshops and other types of technical assistance. The project can then follow up on these new relationships to gather data on indirect outcomes (e.g., jobs created, sales, and more).
Root Change used SNA throughout implementation of the USAID funded Strengthening Advocacy and Civic Engagement (SACE) program in Nigeria. Over five years, more than 1,300 organizations and 2,000 relationships across 17 advocacy issue areas were identified and tracked. Nigerian organizations came together every six months to update the map and use it to form meaningful partnerships, coordinate advocacy strategies, and hold the government accountable.
Evaluating Impact: Finally, our organizations use SNA to measure results at the mid-term or end of project implementation. In Kenya, Root Change developed the capacity of Aga Khan Foundation (AKF) staff to carry out a baseline, and later an end-of-project network analysis of the relationships between youth and organizations providing employment, education, and entrepreneurship support. The latter analysis enabled AKF to evaluate growth in the network and the extent to which gaps identified in the baseline had been addressed.
The Feed the Future Ghana Agricultural Development and Value Chain Enhancement II (ADVANCE II) Project, implemented by ACDI/VOCA and funded by USAID, leveraged existing database data to demonstrate the outgrower business networks established as a result of the project. This was an important way of demonstrating one of ADVANCE II’s major outcomes: creating a network of private service providers that serve as resources for inputs, financing, and training, as well as hubs for aggregating crops for sale.
Approaches to SNA
There is a plethora of tools to help you incorporate SNA into your work. These range from bespoke software custom-built for each organization to free, open source applications.
Root Change uses Pando, a web-based, participatory tool that uses relationship surveys to generate real-time network maps that use basic SNA metrics. ACDI/VOCA, on the other hand, uses unique identifiers for individuals and organizations in its routine monitoring and evaluation processes to track relational information for these actors (e.g. cascaded trainings, financing given, farmers’ sales to a buyer, etc.) and an in-house SNA tool.
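To make the idea of “basic SNA metrics” concrete, here is a minimal sketch that computes normalized degree centrality from a simple relationship edge list in plain Python. The actor names and edges are hypothetical illustrations; real tools like Pando work from much richer survey data.

```python
from collections import defaultdict

def degree_centrality(edges):
    """Compute normalized degree centrality for each actor in an
    undirected relationship network given as a list of actor pairs."""
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    n = len(neighbors)
    # Normalize by the maximum possible number of ties (n - 1).
    return {actor: len(links) / (n - 1) for actor, links in neighbors.items()}

# Hypothetical relationship survey data: who reported working with whom.
edges = [
    ("VBAA_1", "Farmer_A"), ("VBAA_1", "Farmer_B"),
    ("VBAA_1", "Input_Supplier"), ("VBAA_2", "Farmer_C"),
]
scores = degree_centrality(edges)
most_connected = max(scores, key=scores.get)  # the most central actor
```

Actors with unusually high centrality are candidates for the “key actor” role discussed above, while sparsely connected clusters can flag where new linkages need to be built.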
Applying SNA to Your Work
What do you think? We hope we’ve piqued your interest! Using the examples above, take some time to consider ways that SNA could be embedded into the design, implementation, or evaluation stage of your work using this worksheet. If you get stuck, feel free to reach out (Alexis Banks, firstname.lastname@example.org; Rachel Dickinson, email@example.com; Jennifer Himmelstein, JHimmelstein@acdivoca.org)!
by Mala Kumar, GitHub Open Source for Good program
My name is Mala, and I lead a program at GitHub called Open Source for Good under our Social Impact team. Before joining GitHub, I spent the better part of a decade wandering around the world designing, managing, implementing and deploying tech for international development (ICT4D) software products. Throughout my career, I was told repeatedly that open source (OS) would revolutionize the ICT4D industry. While I have indeed worked on a few interesting OS products, I began suspecting that statement was more complicated than had been presented.
Indeed, after joining GitHub this past April, I confirmed my suspicion. Overall, the adoption of OS in the social sector – defined as the collection of entities that positively advance or promote human rights – lags far behind the commercial, private sector. Why, you may ask?
Here’s one hypothesis we have at GitHub:
After our team’s many years of experience working in the social sector and through the hundreds of conversations we’ve had with fellow social sector actors, we’ve come to believe that IT teams in the social sector have significantly less decision-making power and autonomy than commercial, private sector IT teams. This is irrespective of the size, geographic location, or even core mission of the organization or company.
In other words, decision-making power in the social sector does not lie with the techies who typically have the best understanding of the technology landscape. Rather, it’s non-techies who tend to make an organization’s IT budgetary decisions. Consequently, when budgetary decision-makers come to GitHub to assess OS tools and they see something like the below, a GitHub repo, they have no idea what they’re seeing. And this is a problem for the sector at large.
We want to help bridge that gap between private sector and social sector tech development. The social sector is quite large, however, so we’ve had to narrow our focus. We’ve decided to target the social sector’s M&E vertical. This is for several reasons:
M&E as a discipline is growing in the social sector
Increasingly more M&E data is being collected digitally
It’s easy to identify a target audience
Linda is great. ☺
How We Hope to Help
Our basic idea is to build a middle “layer” between a GitHub repo and a decision maker’s final budget. I’m calling that a MERL GitHub “Center” until I can come up with a better name.
As a sponsor of MERL Tech DC 2019, we set up our booth smack dab in front of the food and near the coffee, and we took advantage of this prime real estate to learn more about what our potential users would find valuable.
We spent two days talking with as many MERL conference attendees as we could and asked them to complete some exercises. One such exercise was to prioritize the possible features a MERL GitHub Center might have. We’ve summarized the results in the chart below: the top right quadrant contains the features most commonly sorted as both helpful in using open source and ones potential Center users would actually use. From this exercise, we’ve learned that our minimum viable product (MVP) should include all or some of the following:
Use case studies of open source tools
Description of listed tools
A way to search in the Center
Security assessments of the tools
Beginner’s Guide to Open Source for the Social Sector
Installation guides for listed tools
Chart: Aggregation of the prioritization exercise from ~10 participants
We also spoke to an additional 30+ attendees about the OS tools they currently use. Anecdotally, mobile data collection, GIS, and data visualization were the most common use cases. A few tools are built on or with DHIS2. Many attendees we spoke with are data scientists using R and Python notebooks. DFID and GIZ were mentioned as two large donor organizations that are thinking about OS for MERL funding.
In the coming weeks, we’re going to reach out to many of the attendees we spoke to at MERL Tech to conduct user testing for our forthcoming Center prototype. In the spirit of open source and not duplicating work, we are also speaking with a few potential partners working on different angles to our problem to align our efforts. It’s our hope to build out new tools and product features that will help the MERL community better use and develop OS tools.
How can you get Involved?
Email firstname.lastname@example.org with a brief intro to you and your work in OS for social good.
Guest post from Jo Kaybryn, an international development consultant currently directing evaluation frameworks, evaluation quality assurance services, and leading evaluations for UN agencies and INGOs.
“Upping the Ex Ante” is a series of articles aimed at evaluators in international development exploring how our work is affected by – and affects – digital data and technology. I’ve been having lots of exciting conversations with people from all corners of the universe about our brave new world. But I’ve also been conscious that for those who have not engaged a lot with the rapid changes in technologies around us, it can be a bit daunting to know where to start. These articles explore a range of technologies and innovations against the backdrop of international development and the particular context of evaluation. For readers not yet well versed in technology there are lots of sources to do further research on areas of interest.
The series is halfway through, with 4 articles published.
In Part 1 the series has gone back to the olden days (1948!) to consider the origin story of cybernetics and the influences that are present right now in algorithms and big data. The philosophical and ethical dilemmas are a recurring theme in later articles.
Part 2 examines the problem of distance, something that technology offers huge strides forward on and yet never fully solves, with a discussion of what blockchains mean for the veracity of data.
Part 3 considers qualitative data and shines a light on the gulf between our digital data-centric and analogue-centric worlds and the need for data scientists and social scientists to cooperate to make sense of it.
Part 4 looks at quantitative data and the implications for better decision making, why evaluators really don’t like an algorithmic “black box”, and reflections on how humans’ assumptions and biases leak into our technologies, whether digital or analogue.
The next few articles will see a focus on ethics, psychology and bias; a case study on a hypothetical machine learning intervention to identify children at risk of maltreatment (lots more risk and ethical considerations); and some thoughts about putting it all in perspective (i.e. Don’t …).
There is no real evidence base about what does and does not work when applying blockchain technology to interventions seeking social impacts. Most current blockchain interventions are driven by developers (programmers) and visionary entrepreneurs. There is little thinking in current blockchain interventions around designing for “social” impact (there is an overabundant trust in technology to achieve the outcomes and little focus on the humans interacting with the technology) and integrating relevant evidence from behavioral economics, behavior change design, human-centered design, etc.
To build the needed evidence base, Monitoring, Evaluation, Research and Learning (MERL) practitioners will have to not only get to know the broad strokes of blockchain technology but the specifics of token design and tokenomics (the political economics of tokenized ecosystems). Token design could become the focal point for MERL on blockchain interventions since:
The vast majority of blockchain interventions, if not all, will involve some type of desired behavior change
The token provides the link between the ledger (which is the blockchain) and the social ecosystem created by the token in which the behavior change is meant to happen
Hence the token is the “nudge” meant to leverage behavior change in the social ecosystem while governing the transactions on the blockchain ledger.
(While this blog will focus on these points, it will not go into a full discussion of what tokens are and how they create ecosystems. There are some very good resources out there that do this, which you can review at your leisure and to the degree that works for you. The Complexity Institute has published a book exploring the various attributes of complexity and main themes involved with tokenomics, while Outlier Ventures has published what I consider to be the best guidance on token design. The Outlier Ventures guidance contains many of the tools MERL practitioners will be familiar with (problem analysis, stakeholder mapping, etc.) and should be consulted.)
Hence it could be that by understanding token design and its requirements and mapping it against our current MERL thinking, tools and practices, we can develop new thinking and tools that could be the beginning point in building our much-needed evidence base.
What is a “blockchain intervention”?
As MERL practitioners, we roughly define an “intervention” as a group of inputs and activities meant to leverage outcomes within a given eco-system. “Interventions” are what we are usually mandated to assess and evaluate.
When thinking about MERL and blockchain, it is useful to think of two categories of “blockchain interventions”.
1) Integrating the blockchain into MERL data collection, entry, management, analysis or dissemination practices and
2) MERL strategies for interventions using the blockchain in some way, shape, or form.
Here we will focus on #2, and in so doing demonstrate that while the blockchain is an innovative, potentially disruptive technology, evaluating its applications on social outcomes is still a matter of assessing behavior change against dimensions of intervention design.
Designing for Behavior Change
We generally design interventions (programs, projects, activities) to “nudge” a certain type of behavior (stated as outcomes in a theory of change) amongst a certain population (beneficiaries, stakeholders, etc.). We should integrate mechanisms of change into our intervention design, but often do not for a variety of reasons (lack of understanding, lack of resources, lack of political will, etc.). This lack of due diligence in design is partly responsible for the lack of evidence around what works and what does not work in our current universe of interventions.
Enter blockchain technology, which as MERL practitioners, we will be responsible for assessing in the foreseeable future. Hence, we will need to determine how interventions using the blockchain attempt to nudge behavior, what behaviors they seek to nudge, amongst whom, when and how well the design of the intervention accomplishes these functions. In order to do that we will need to better understand how blockchains use tokens to nudge behavior.
The Centrality of the Token
We have all used tokens before. Stores issue coupons that can only be used at those stores, we get receipts for groceries as soon as we pay, arcades make you buy tokens instead of just using quarters. The coupons and arcade tokens can be considered utility tokens, meaning that they can only be used in a specific “ecosystem” which in this case is a store and arcade respectively. The grocery store receipt is a token because it demonstrates ownership, if you are stopped on the way out the store and you show your receipt you are demonstrating that you now have rights to ownership over the foodstuffs in your bag.
Whether you realize it or not at the time, these tokens are trying to nudge your behavior. The store gives you the coupon because the more time you spend in their store trying to redeem coupons, the greater the likelihood you will spend additional money there. The grocery store wants you to pay for all your groceries, while the arcade wants you to buy more tokens than you end up using.
If needed, we could design MERL strategies to assess how well these different tokens nudged the desired behaviors. We would do this, in part, by thinking about how each token is designed relative to the behavior it wants (i.e. the value, frequency and duration of coupons, etc.).
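A MERL strategy of that kind could be sketched as a small analysis of observed shopping visits. Everything below (field names, amounts) is an invented illustration of the approach, not real data or a standard method:

```python
# Hypothetical observations: each visit records whether a coupon (token)
# was redeemed and the total amount spent on that visit.
visits = [
    {"coupon_redeemed": True,  "spend": 42.0},
    {"coupon_redeemed": True,  "spend": 55.0},
    {"coupon_redeemed": False, "spend": 20.0},
    {"coupon_redeemed": False, "spend": 25.0},
]

def assess_nudge(visits):
    """Compare average spend on coupon vs. non-coupon visits as a rough
    proxy for how well the token nudged the desired spending behavior."""
    with_c = [v["spend"] for v in visits if v["coupon_redeemed"]]
    without = [v["spend"] for v in visits if not v["coupon_redeemed"]]
    return {
        "redemption_rate": len(with_c) / len(visits),
        "avg_spend_with_coupon": sum(with_c) / len(with_c),
        "avg_spend_without": sum(without) / len(without),
    }

results = assess_nudge(visits)
```

A real assessment would of course need a credible counterfactual; the point here is only that the token’s design parameters (value, frequency, duration) become measurable levers.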
Thinking about these ecosystems and their respective tokens will help us understand the interdependence between 1) the blockchain as a ledger that records transactions, 2) the token that captures the governance structures for how transactions are stored on the blockchain ledger as well as the incentive models for 3) the mechanisms of change in the social eco-system created by the token.
Figure #1: The inter-relationship between the blockchain (ledger), token and social eco-system
Token Design as Intervention Design
Just as we assess theories of change and their mechanisms against intervention design, we will assess blockchain-based interventions against their token design in much the same way. This is because blockchain tokens capture all the design dimensions of an intervention; namely the problem to be solved, stakeholders and how they influence the problem (and thus the solution), stakeholder attributes (as mapped out in something like a stakeholder analysis), the beneficiary population, assumptions/risks, etc.
Outlier Ventures has adapted what they call a Token Utility Canvas as a milestone in their token design process. The canvas can be correlated to the various dimensions of an evaluability assessment tool (I am using the evaluability assessment tool as a demonstration of the necessary dimensions of an intervention’s design, meaning that the evaluability assessment tool assesses the health of all the components of an intervention design). The Token Utility Canvas captures many of the problem diagnostic, stakeholder assessment and other due diligence tools that are familiar to MERL practitioners who have seen them used in intervention design. Hence token design could be largely thought of as intervention design and evaluated as such.
Comparing Token Design with Dimensions of Program Design (as represented in an evaluability assessment tool)
This table is not meant to be exhaustive and not all of the fields will be explained here but in general, it could be a useful starting point in developing our own thinking and tools for this emerging space.
The Token as a Tool for Behavior Change
Coming up with a taxonomy of blockchain interventions and relevant tokens is a necessary task; the key point, however, is that any blockchain intervention that needs to nudge behavior will have to have a token.
Consider supply chain management. Blockchains are increasingly being used as the ledger system for supply chain management. Supply chains typically comprise numerous actors packaging, shipping, receiving, and applying quality control protocols to various goods, each with their own ledgers of the relevant goods as they snake their way through the supply chain. This creates ample opportunities for fraud and theft, along with high costs associated with reconciling the different ledgers of the different actors at different points in the supply chain. Using the blockchain as the common ledger system, many of these costs are diminished: a single ledger with trusted data is used, so transactions (shipping, receiving, repackaging, etc.) can happen more seamlessly and reconciliation costs drop.
However, even in “simple” applications such as this there are behavior change implications. We still want the supply chain actors to perform their functions in a manner that adds value to the supply chain ecosystem as a whole, rewarding them for good behavior within the ecosystem and punishing them for bad.
What if those shippers trying to pass on a faulty product had already deposited a certain value of currency in an escrow account (housed in a contract on the blockchain)? If they are found to be attempting a prohibited behavior (passing on faulty products), they automatically surrender a certain amount from the escrow account in the blockchain smart contract. How much should be deposited in the escrow account? What is the ratio between the degree of punishment and the undesired action? These are behavior questions around a mechanism of change that are dimensions of current intervention designs and will be increasingly relevant in token design.
The point of this is to demonstrate that even “benign” applications of the blockchain, like supply chain management, have behavior change implications and thus require good due diligence in token design.
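The escrow mechanism described above can be sketched as a toy class. The class name, penalty ratio, and severity scale are all assumptions made for illustration; a real implementation would be a smart contract on the chain itself, not Python:

```python
class EscrowContract:
    """Toy sketch of the escrow idea: a shipper deposits a stake, and a
    validator can slash it in proportion to the severity of a prohibited
    behavior. The penalty ratio is itself a token design question."""

    PENALTY_RATIO = 0.25  # design question: how harsh should slashing be?

    def __init__(self, deposit):
        self.balance = deposit

    def report_violation(self, severity):
        """severity in [0, 1]; slash a proportional share of the stake,
        never more than what remains in escrow."""
        penalty = min(self.balance, self.balance * self.PENALTY_RATIO * severity)
        self.balance -= penalty
        return penalty

escrow = EscrowContract(deposit=1000.0)
penalty = escrow.report_violation(severity=0.8)  # e.g. faulty product passed on
```

Choosing `PENALTY_RATIO` and the deposit size is exactly the punishment-to-behavior ratio question raised above, and it is where MERL evidence on behavior change should feed into token design.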
There is a lot that could be said about the validation function of this process: who validates that the bad behavior has taken place and should be punished, or that good behavior should be rewarded? There are lessons to be learned from results-based contracting and the role of the validator in such a contracting vehicle. This “validating” function will need to be thought out in terms of what can be automated and what needs a “human touch” (and who is responsible, what methods they should use, etc.).
Implications for MERL
If tokens are fundamental to MERL strategies for blockchain interventions, there are several critical implications:
MERL practitioners will need to be heavily integrated into the due diligence processes and tools for token design
MERL strategies will need to be highly formative, if not developmental, in facilitating the timeliness and overall effectiveness of the feedback loops informing token design
New thinking and tools will need to be developed to assess the relationships between blockchain governance, token design and mechanisms of change in the resulting social ecosystem.
The opportunity cost for impact and “learning” could go up the less MERL practitioners are integrated into the due diligence of token design. This is because the costs to adapt token design are relatively low compared to current social interventions, partly due to the ability to integrate automated feedback.
Blockchain-based interventions present us with significant learning opportunities due to our ability to use the technology itself as a data collection/management tool in learning about what does and does not work. Feedback from an appropriate MERL strategy could inform decision making around token design that could be coded into the token on an iterative basis. For example, as stakeholders’ incentives shift (i.e. supply chain shippers incur new costs and their value proposition changes), token adaptation can respond in a timely fashion so long as the MERL feedback that informs the token design is accurate.
There is a need to determine which components of these feedback loops can be completed by automated functions and which require a “human touch”. For example, what dimensions of token design can be informed by smart infrastructure (i.e. temperature gauges on shipping containers in the supply chain) versus household surveys completed by enumerators? This will be a task to complete and iteratively improve, starting with initial token design and lasting through the lifecycle of the intervention.
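One simple way to begin that task is a routing table mapping each MERL question to an automated or human collection strategy. The question names and the mapping below are hypothetical examples, not a standard taxonomy:

```python
# Illustrative mapping of MERL questions to data collection strategies.
DATA_SOURCES = {
    "shipment_temperature": "automated",  # smart sensors on containers
    "transaction_volume":   "automated",  # read directly from the ledger
    "household_income":     "human",      # enumerator-led surveys
    "trust_in_ecosystem":   "human",      # qualitative interviews
}

def collection_plan(questions):
    """Group MERL questions by whether an automated feed or a 'human
    touch' is the appropriate collection strategy; unknown questions
    default to human review."""
    plan = {"automated": [], "human": []}
    for q in questions:
        plan[DATA_SOURCES.get(q, "human")].append(q)
    return plan

plan = collection_plan(["shipment_temperature", "household_income", "new_question"])
```

The useful habit is the explicit mapping itself: each token design dimension gets an assigned collection strategy that can be revisited as the intervention evolves.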
Token design dimensions, outlined in the Token Utility Canvas, and related decision-making will need to result in MERL questions that are correlated to the best strategy to answer them, automated or human, much the same as we do now in current MERL practice.
Many of our current due diligence tools used in both intervention and evaluation design (things like stakeholder mapping, problem analysis, cost-benefit analysis, value propositions, etc.) will need to be adapted to the types of relationships found within tokenized eco-systems. These include the relationships of influence between the social eco-system and the blockchain ledger itself (or more specifically the governance of that ledger), as demonstrated in figure #1.
This could be our biggest priority as MERL practitioners. While blockchain interventions could create incredible opportunities for social experimentation, the need for human-centered due diligence (incentivizing humans for positive behavior change) in token design is critical. Over-reliance on the technology to drive social outcomes is already a well-evidenced opportunity cost that could be avoided with blockchain-based solutions if the gap between technologists, social scientists and practitioners can be bridged.