MERL Tech News

Chain Reaction: How Does Blockchain Fit, if at All, into Assessments of Value for Money of Education Projects?

by Cathy Richards

In this panel, “Chain Reaction: How does blockchain fit, if at all, into assessments of value for money of education projects,” hosted by Christine Harris-Van Keuren of Salt Analytics, panelists gave examples of how they’ve used blockchain to store activity and outcomes data and to track the flow of finances. Valentine Gandhi from The Development Café served as the discussant.

Value for money analysis (also known as benefit-cost, cost-economy, cost-effectiveness, cost-efficiency, or cost-feasibility analysis) is defined as an evaluation of the best use of scarce resources to achieve a desired outcome. In this panel, participants examined the value for money of blockchain, each taking on one aspect of an adapted value-for-money framework. The framework takes into account resources, activities, outputs, and outcomes. Panel members were specifically asked to explain what they gained and lost by using blockchain, and whether they needed to use blockchain at all.

Ben Joakim is the founder and CEO of Disberse, a new financial institution built on distributed ledger technology. Disberse aims to ensure greater privacy and security for the aid sector, which serves some of the most vulnerable communities in the world. Joakim notes that in the aid sector, traditional banks are often slow and expensive, which can be detrimental during a humanitarian crisis. In addition, traditional banks can lack transparency, which increases the potential for the mismanagement and misappropriation of funds. Disberse works to tackle those problems by creating a financial institution that is not only efficient but also transparent and decentralized, thus allowing for greater impact with available resources. Additionally, Disberse allows for multi-currency accounts, foreign currency exchanges, instant fund transfers, end-to-end traceability, donation capabilities, regulatory compliance, and cash transfer systems. Since inception, Disberse has delivered pilots in several countries, including Swaziland, Rwanda, Ukraine, and Australia.

David Mikhail of UNCDF discussed the organization's use of blockchain technologies in the Nepal remittance corridor. In 2017 alone, Nepal received $6.9 billion in remittances, funds responsible for 28.4% of the country's GDP. One of the main challenges for Nepali migrant families is a lack of financial inclusion, characterized by credit interest rates as high as 30%, a lack of documented credit history, and insufficient collateral. Families also have a difficult time building capital once they migrate: the high costs of migration, high-interest loans, non-stimulative spending that limits their ability to save and invest, and the lack of a credit history together make it difficult for migrants to break free of the poverty cycle. In response, the organization asked itself whether it could create a new credit product tied to remittances to provide capital and fuel domestic economic development. In theory, this solution would drive financial inclusion by channeling remittances through the formal sector. The product would not only leverage blockchain to create a documented credit history, but would also direct the flow of remittances into short- and long-term savings or credit products that help migrants generate income and assets.

Tara Vassefi presented on her experience at Truepic, a photo and video verification platform that aims to foster a healthy civil society by pushing back against disinformation. Truepic does this by bolstering the value of authentic photos through verified pixel data from the time of capture and through independent verification of time and location metadata. Hashed references to the time, date, location, and exact pixel data are stored on the blockchain. The benefits of this technology are that the data is immutable and that it adds a layer of privacy and security to media. The downsides include the marginal costs involved and the general availability of alternative technologies. Truepic has been used for monitoring and evaluation purposes in Syria, Jordan, Uganda, China, and Latin America to remotely monitor government activities and provide increased oversight at a lower cost. The team has found that this human-centric approach, which embeds technology into existing systems, can help close the trust gap currently found in society.
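To make the mechanism concrete, here is a minimal sketch in Python of how capture metadata and pixel data can be hashed so that only the fingerprint, rather than the media itself, needs to be anchored to a blockchain. This is an illustration of the general technique, not Truepic's actual implementation; the function name, fields, and values are hypothetical.

```python
import hashlib
import json

def capture_fingerprint(pixels: bytes, timestamp: str, lat: float, lon: float) -> str:
    """Combine pixel data with time and location metadata into one hash.

    Only this fingerprint (not the media itself) would be anchored to a
    blockchain; verification later recomputes the hash and checks that
    nothing has changed since capture.
    """
    record = {
        "pixel_hash": hashlib.sha256(pixels).hexdigest(),
        "timestamp": timestamp,
        "lat": lat,
        "lon": lon,
    }
    # Serialize deterministically so the same capture always hashes the same way.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# Example: fingerprint a photo at the moment of capture.
digest = capture_fingerprint(b"<raw image bytes>", "2019-09-05T14:32:00Z", 38.907, -77.037)
print(digest)  # 64-character hex string suitable for anchoring on-chain
```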

Smartcards for MERL: Worth the Money?

by Jonathan Tan

In 2014, ACDI/VOCA ran into a common problem: their beneficiaries – smallholder farmers in Ghana – had been attending trainings for several agricultural programs, but monitoring workshop attendance and verifying the identity of each attendee was slow, inaccurate, and labor-intensive. There were opportunities for transcription and data-entry errors at several points in the reporting process, each causing downstream delays for analysis and decision-making. So they turned to a technological solution: contactless smartcards.

At MERL Tech DC, Nirinjaka Ramasinjatovo and Nicole Chao ran a session called “Smartcards for MERL: Worth the Money?” to share ACDI/VOCA’s experiences.

The system was fairly straightforward: after a one-time intake session at each village, beneficiaries were registered in a central database, and a smartcard with each beneficiary's name and photo was printed and issued. ACDI/VOCA hired developers to build a simple graphical interface to the database for trainers to use. At each training, trainers brought laptops equipped with card readers to take attendance, and the attendance data was synchronized with the database upon return to an internet-connected office.
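The offline-capture, later-sync pattern described above can be sketched in a few lines. This is an illustrative reconstruction, not ACDI/VOCA's actual code; the table layout and the `upload` transport are assumptions.

```python
import json
import sqlite3
from datetime import datetime, timezone

# Local store on the trainer's laptop: scans are captured offline and
# flushed to the central database once connectivity is available.
conn = sqlite3.connect("attendance_queue.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS queue (
           card_id TEXT, session_id TEXT, scanned_at TEXT, synced INTEGER DEFAULT 0
       )"""
)

def record_scan(card_id: str, session_id: str) -> None:
    """Called whenever a smartcard is tapped on the reader."""
    conn.execute(
        "INSERT INTO queue (card_id, session_id, scanned_at) VALUES (?, ?, ?)",
        (card_id, session_id, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

def sync_pending(upload) -> int:
    """Push unsynced rows to the central database. `upload` is any callable
    that accepts a JSON payload (a hypothetical transport)."""
    rows = conn.execute(
        "SELECT rowid, card_id, session_id, scanned_at FROM queue WHERE synced = 0"
    ).fetchall()
    for rowid, card_id, session_id, scanned_at in rows:
        upload(json.dumps({"card": card_id, "session": session_id, "at": scanned_at}))
        conn.execute("UPDATE queue SET synced = 1 WHERE rowid = ?", (rowid,))
    conn.commit()
    return len(rows)
```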

The speakers discussed several expected and unexpected benefits from introducing the smartcards. Registration was streamlined at trainings, and data collection became faster and more accurate. Attendance and engagement at training sessions also increased. ACDI/VOCA hypothesized that possessing a physical token associated with the otherwise knowledge-based program reminded beneficiaries of its impact; one of the speakers recounted seeing some farmers proudly wearing their smartcards on lanyards at non-training social events in the community. Finally, improved data tracking enabled analysts at ACDI/VOCA to compare farmers' attendance rates at training sessions with their reported agricultural yield increases, and thus measure the program's impact more effectively.

Process durations for developing the 2014 smart card system in Ghana (left), vs. the 2018 smart tags in Colombia (right).

Then came the perennial questions: what did it cost, and was it worth it? For the 2014 Feed the Future program in Ghana, the smartcard system took six months of preparation to deploy (including requirements gathering, software development, hardware procurement, and training). While the cards were fairly inexpensive at 50 to 60 US cents apiece, the system had significant fixed costs: card printers were $1,500 each, and total software development cost between $15,000 and $25,000.

ACDI/VOCA sought to improve on this system in a subsequent 2018 emergency response program in Colombia. Instead of smartcards, beneficiaries were issued small contactless tags, and enumerators used tablets instead of laptops to administer surveys and track attendance. Crucially, rather than hiring developers to write new software from scratch, the team used Microsoft PowerApps, which was more straightforward to deploy; the PowerApps-based system took far less time to test and to train enumerators on. It also had the benefit of being easily modifiable post-deployment (which had not been the case with the smartcards). The contactless tags were also less costly at $0.10 to $0.15 apiece, with readers in the $15 to $20 range. All in all, the contactless tag system deployed in Colombia proved far more cost-effective for ACDI/VOCA than the smartcards had been in the earlier Ghana project.
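A rough back-of-envelope comparison illustrates the cost difference using the figures above. The 5,000-beneficiary count, single printer/reader, and midpoint software cost are illustrative assumptions; tablet and laptop hardware and PowerApps licensing are ignored.

```python
def total_cost(fixed: float, per_beneficiary: float, n: int) -> float:
    """Total cost = fixed setup costs + per-beneficiary token costs."""
    return fixed + per_beneficiary * n

n = 5_000  # assumed beneficiary count, for illustration only

# Ghana (2014): software ($15k-$25k, midpoint $20k) + one $1,500 printer + $0.55/card
ghana = total_cost(fixed=20_000 + 1_500, per_beneficiary=0.55, n=n)

# Colombia (2018): PowerApps-based system (custom development cost assumed
# negligible) + one $17.50 reader + $0.125/tag
colombia = total_cost(fixed=17.50, per_beneficiary=0.125, n=n)

print(f"Ghana:    ${ghana:,.0f}")     # ~$24,250
print(f"Colombia: ${colombia:,.0f}")  # ~$643
```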

Based on the two projects discussed, the speakers proposed the following set of questions to consider for future projects:

  1. Is there a high number of beneficiaries in your program?
  2. Does each beneficiary have the potential to receive multiple benefits/programs?
  3. Is there a strong need for identity authentication?
  4. Do you have access to software developers?

If you answered “yes” to all four questions, then a smart identification system based on cards, tags, or similar tokens is likely to be worth the upfront investment and maintenance costs. If, however, your answer to some or all of them was “no”, there are intermediate solutions that may still be worth implementing, such as tokens with QR codes or barcodes, which provide a weaker proof of identity.

No Data, No Problem: Extracting Insights from Data-Poor Environments

by Jonathan Tan

In data-poor environments, what can you do to get what you need? For Arpitha Peteru and Bob Lamb of the Foundation for Inclusion, the answer lies at the intersection of science, story, and simulation. 

The session, “No Data, No Problem: Extracting Insights from Data-Poor Environments,” began with a philosophical assertion: all data is qualitative, but some can be quantified. The speakers argued that the processes we use to extract insights from data are fundamentally influenced by our personal assumptions, interpretations, and biases, and that using data without considering those fundamentals can produce unhelpful insights. As an example, they cited an unnamed cross-national study of fragile states that committed several egregious data sins:

  1. It assumed that household data aggregated at the national level was reliable.
  2. It used an incoherent unit of analysis. Using a country-level metric in Somalia, for example, makes no sense because it ignores the qualitative differences between Somaliland and the rest of the country.
  3. It ignored the complex web of interactions among several independent variables to produce pairwise correlation metrics that themselves made no sense. 

For Peteru and Lamb, the indiscriminate application of data analysis methods without understanding the forces behind the data is a failure of imagination. They described the Foundation for Inclusion's approach to social issues, which is grounded in an appreciation for complex systems, and illustrated the point with a demonstration: when you pour water from a pitcher onto a table, the rate of water leaving the pitcher exactly matches the rate of water hitting the table. If you measured both and looked only at the data, the correlation would be 1, and you could conclude that the working mechanism was that the table was getting wet because water was leaving the pitcher. But what happens when there are unobserved intermediate steps? What if, for instance, the water was flowing into a cup on the table, which had to overflow before any water hit the table? Or what if the water was being poured into a balloon, which had to cross a certain threshold before bursting and wetting the table? The data in isolation would tell you very little about how the system actually worked.
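A toy simulation, written for this summary rather than presented in the session, makes the point concrete: pour water at a random rate into an unseen balloon that bursts at a threshold. With no threshold, the two measured series correlate perfectly; a hidden threshold drives the correlation toward zero even though the causal chain is identical.

```python
import random

def simulate(threshold: float, steps: int = 200):
    """Pour water into an unseen balloon; nothing reaches the table until
    the balloon bursts, releasing everything at once."""
    poured, hit_table = [], []
    level = 0.0
    for _ in range(steps):
        rate = random.uniform(0.0, 1.0)   # water leaving the pitcher
        level += rate
        if level >= threshold:            # balloon bursts
            hit_table.append(level)
            level = 0.0
        else:
            hit_table.append(0.0)
        poured.append(rate)
    return poured, hit_table

def corr(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

p0, t0 = simulate(threshold=0.0)  # no balloon: series match, correlation 1.0
p1, t1 = simulate(threshold=5.0)  # hidden balloon: correlation collapses
print(round(corr(p0, t0), 3), round(corr(p1, t1), 3))
```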

What can you do in the absence of good data? Here, the Foundation for Inclusion turns to stories as a source of information. They argue that talking to domain experts, reviewing local media, and gathering individual viewpoints can help by revealing patterns and allowing researchers to formulate potential causal structures. Of course, the further one gets from the empirics, the more uncertainty there must be, and that uncertainty can be quantified and mitigated with sensitivity tests and the like. Peteru and Lamb's point here was that even anecdotal information can give you enough to assemble a hypothesized system, or set of systems, that can then be explored and validated by way of simulation.

Simulations were the final piece of the puzzle. With researchers gaining increasing access to the hardware and computing knowledge needed to create simulations of complex systems (systems based on information from the aforementioned stories), the speakers argued that simulations are an increasingly viable method of exploring stories and validating hypothesized causal systems. Of course, there is no one-size-fits-all: they discussed several types of simulation, from agent-based models to Monte Carlo models, as well as when each might be appropriate. For instance, health agencies today already use sophisticated simulations to forecast the spread of epidemics, a situation in which collecting sufficient data would simply take too long to act upon. By varying key parameters across thousands of simulated runs, and systematically eliminating the models that produced undesirable outcomes or relied on data with high uncertainty, one could, in theory, be left with a handful of simulations whose parameters would be instructive.
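As a sketch of that sweep-and-filter idea (a generic illustration, not a model the speakers presented), the snippet below runs a toy epidemic model thousands of times with randomly drawn parameters and keeps only the parameter sets whose simulated outcome stays within an acceptable bound.

```python
import random

def run_model(transmission_rate: float, recovery_rate: float, steps: int = 52) -> float:
    """A toy discrete-time SIR-style epidemic model; returns peak share infected."""
    s, i, r = 0.99, 0.01, 0.0
    peak = i
    for _ in range(steps):
        new_infections = min(transmission_rate * s * i, s)  # never infect more than remain
        new_recoveries = recovery_rate * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Monte Carlo sweep: sample parameter combinations, keep only those whose
# simulated outcome meets the target (here, peak infections under 10%).
acceptable = []
for _ in range(10_000):
    beta = random.uniform(0.1, 2.0)
    gamma = random.uniform(0.05, 0.5)
    if run_model(beta, gamma) < 0.10:
        acceptable.append((beta, gamma))

print(f"{len(acceptable)} of 10,000 parameter sets met the target")
```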

The purpose of data collection is to produce useful, actionable insights. Thus, in its absence, the Foundation for Inclusion argues that the careful application of science, story, and simulation can pave the way forward.

Collecting Data in Hard-to-Reach Places

Written by Stephanie Jamilla

By virtue of operating in the international development sphere, we oftentimes work in areas that are remote, isolated, and have little or no internet connection. However, as the presenters from Medic Mobile and Vitamin Angels (VA) argued in their talk, “Data Approaches in Hard-to-Reach Places,” it is possible to overcome these barriers and use technology to collect much-needed program data. The session was split neatly into three parts: a presentation by Mourice Barasa, the Impact Lead of Medic Mobile in Kenya, a presentation by Jamie Frederick, M&E Manager, and Samantha Serrano, M&E Specialist, from Vitamin Angels, and an activity for attendees. 

While both presentations discussed data collection in a global health context and used phone applications as the means of data collection, they illustrated two different situations. Barasa focused on the community health app that Medic Mobile is implementing. It is used by community health teams to better manage their health workers and to ease the process of providing care. The app serves many purposes: for example, it is a communication tool that connects managers and health workers, as well as a performance management tool that tracks the progress of health workers and the types of cases they have worked on. The overall idea is to provide near real-time (NRT) data so that health teams have up-to-date information about who has been seen, what patients need, whether patients need to be seen in a health facility, and so on. Medic Mobile implemented the app with the Ministry of Health in Siaya, Kenya, and currently has 1,700 health workers using the tools. While the scale of the deployment is impressive, Barasa explained various barriers that keep the app from producing true NRT data. Health teams rely on the timestamp sent with every entry to know when a household was visited by a health worker; however, a health worker may wait to upload an entry, so the default time on the phone may not reflect the actual time of the visit. Poor connectivity, short battery life, and internet subscription costs are also concerns. Medic Mobile is working on improvements, such as exploring offline servers and alternatives to phone charging; centralizing the billing of mobile users has already decreased billing costs from $2,000 per month to around $100.
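One common mitigation for the timestamp problem is to record two clocks: the device time when an entry is made and the server time when it is received, so that upload delays become visible and suspect entries can be flagged. The sketch below is a generic illustration; the field names and the 24-hour threshold are assumptions, not Medic Mobile's design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class VisitEntry:
    """A household-visit record that keeps two timestamps, so upload delays
    (and suspect device clocks) are visible to the health team."""
    household_id: str
    notes: str
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)  # device clock at entry
    )
    received_at: Optional[datetime] = None  # set by the server at sync time

def mark_received(entry: VisitEntry) -> float:
    """Stamp the server receipt time and return the upload delay in hours."""
    entry.received_at = datetime.now(timezone.utc)
    return (entry.received_at - entry.recorded_at).total_seconds() / 3600

entry = VisitEntry("HH-0042", "Follow-up visit completed")
delay_hours = mark_received(entry)
if delay_hours > 24:
    print("Flag: entry uploaded more than a day after it was recorded")
```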

Frederick and Serrano expressed similar difficulties in their presentation, particularly about the timeliness of data upload. However, their situation was different: VA used their app specifically for M&E purposes. The organization wanted to validate the extent to which it was reaching its target population, delivering services at the best-practice standard, and truly filling the 30% gap in coverage that national health services miss. Their monitoring design consisted of taking a random sample of 20% of their field partners and using ODK Collect with a cloud-based, ONA-programmed survey on Android devices. VA trained 30 monitors to cover the countries in Latin America and the Caribbean, Africa, and Asia in which it had partners. While the VA Home Office was able to move through the full cycle from data collection to action, field partners were having trouble with the data in the analysis, reporting, and action stages. Hence, a potential solution was piloted with three partners in Latin America: VA adjusted the surveys in ONA to display a simple report with internal calculations based on the survey data. The report was generated in NRT, allowing partners to access the data quickly, and VA formatted it so that the data was easily consumable. VA also made sure to gather feedback from partners about the usefulness of the monitoring results, to ensure that partners also valued collecting this data.
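The kind of "internal calculations" described can be sketched as a simple roll-up from survey rows into a handful of indicators a field partner can act on. The records, field names, and figures below are invented for illustration and are not VA's actual survey schema.

```python
# Hypothetical monitoring records, one per observed distribution site.
records = [
    {"site": "A", "children_observed": 40, "received_dose": 36, "correct_dosage": 35},
    {"site": "B", "children_observed": 25, "received_dose": 20, "correct_dosage": 19},
    {"site": "C", "children_observed": 30, "received_dose": 29, "correct_dosage": 27},
]

def coverage_report(rows):
    """Roll survey rows up into simple indicators for a near-real-time report."""
    total = sum(r["children_observed"] for r in rows)
    dosed = sum(r["received_dose"] for r in rows)
    correct = sum(r["correct_dosage"] for r in rows)
    return {
        "sites_monitored": len(rows),
        "coverage_pct": round(100 * dosed / total, 1),
        "protocol_adherence_pct": round(100 * correct / dosed, 1),
    }

print(coverage_report(records))
# {'sites_monitored': 3, 'coverage_pct': 89.5, 'protocol_adherence_pct': 95.3}
```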

These two presentations reinforced that while there is the ability to collect data in difficult places, there will always be barriers as well, whether they are technical or human-related. The group discussion activity revealed other challenges. The presenters prompted the audience with four questions:

  1. What are data collection approaches you have used in hard-to-reach places?
  2. What have been some challenges with these approaches?
  3. How have the data been used?
  4. What have been some challenges with use of these data?

In my group of five, we talked mainly about hindrances to data collection in our own work, such as the cost of some technologies. Another challenge that came up was the gap between having data and visualizing it well, and ensuring that the data we do collect actually translates into action.

Overall, the session helped me think through how important it is to consider potential challenges in the initial design of the data collection and analysis process. The experiences of Medic Mobile and Vitamin Angels demonstrated the difficulties we will all face when collecting data in hard-to-reach places, but also showed that those difficulties can ultimately be overcome.

Using Cutting-Edge Technology to Measure Global Perceptions of Gender-Based Violence Online

By Jonathan Tan, Data Science Intern at the World Bank

Online communities have emerged as a powerful tool for giving a voice to the marginalized. However, they also open up opportunities for harmful behaviors that exist offline to be amplified in scale and reach. Online violence, in particular, is disproportionately targeted at women and minority groups. How do we measure these behaviors and their impact on online communities? And how can donors and implementers use that information to develop programs addressing this violence? In response to these questions, Paulina Rudnicka of the American Bar Association Rule of Law Initiative (ABA ROLI), Chai Senoy of the United States Agency for International Development (USAID), and Mercedes Fogarassy of RIWI Corp. entered into a public-private partnership to administer a large-scale online survey. Using RIWI's global trend-tracking technology, the survey received over 40,000 complete responses from respondents in 15 countries and featured 17 questions on the “nature, prevalence, impacts, and responses to GBV online”.

What is GBV online? The speakers define gender-based violence (GBV) online as “the use of the Internet to engage in activities that result in harm or suffering to a person or group of people online or offline because of their gender.” They noted that the definition draws heavily on the text of the 1993 UN Declaration on the Elimination of Violence Against Women, and that the declaration, and the human rights standards around it, predate the emergence of GBV online. Many online spaces have been designed with male users in mind by default; consequently, the needs of female users have been systematically ignored.

Why is it important? GBV online is often an extension of the offline GBV that has been prevalent throughout history: it has roots in sexist behavior, reinforces existing gender inequalities, and is often trivialized by law enforcement officials. However, the online medium allows GBV to be easily scalable (online GBV exists wherever the internet reaches) and replicable, leading to disproportionately large impacts on targeted individuals. Beyond the direct impacts (e.g., cyberbullying, blackmail, extortion, doxing), it often has persistent emotional and psychological impacts on its victims. Further, GBV online often has a chilling effect on freedom of expression in the form of silencing and self-censorship, making its prevalence and impact particularly difficult to measure.

What can we do? In order to formulate an effective response to GBV online, we need good data on people’s experiences online. It needs to be comprehensive, gender-disaggregated, and collected at national, regional, and global levels. On a broader level, states and firms can proactively prevent GBV online through human rights due diligence. 

Why was the survey special? The survey, with over 40,000 completed responses and 170,000 unique engagements in 15 countries, was the largest study on GBV online to date. The online-only survey was administered to any respondent with internet access; whereas most prior surveys focused primarily on respondents from developed countries, this survey focused on respondents from developing countries. Speed was a notable factor – the entire survey was completed within a week. Further, given the sensitive nature of the subject matter, respondents’ data privacy was prioritized: personal identifying information (PII) was not captured, and no history of having answered the anonymous survey was accessible to respondents after submission. 

How was this accomplished? RIWI was used to conduct the survey. RIWI takes advantage of inactive or abandoned registered, non-trademarked domains. When a user inadvertently lands on one of these domains, they have a random chance of stumbling into a RIWI survey. The user can choose to participate, while remaining anonymous. The respondent’s country, region, or sub-city level is auto-detected with precision through RIWI to deliver the survey in the appropriate language. RIWI provided the research team with correlations of significance and all unweighted and weighted data for validation.

What did the survey find? Among the most salient findings: 

  • 40% of respondents reported not feeling personally safe from harassment and violence while online; of these, 44% had experienced online violence due to their gender.
  • Of the surveyed countries, India and Uganda reported the highest rates of GBV online (13% of all respondents), while Kazakhstan reported the lowest rates (6%). 
  • 42% of respondents reported not taking safety precautions online, such as customizing privacy settings in apps, turning off features like “share my location”, and being careful not to share personally identifiable information online.
  • 85% of respondents that had experienced GBV online reported subsequently experiencing fear for their own safety, fear for someone close to them, feeling anxiety or depression, or reducing time online. 

What’s next? Subsequent rounds of the survey will include more than the original 15 countries. Further, since the original survey did not collect personal identifying information from respondents, subsequent rounds will supplement the original questions by collecting additional qualitative data.

Living Our Vision: Applying the Principles of Digital Development as an Evaluative Methodology

by: Sylvia Otieno, MPA candidate at George Washington University and Consultant at the World Bank’s IEG; and Allana Nelson, Senior Manager for the Digital Principles at DIAL

For nearly a decade, the Principles for Digital Development (Digital Principles) have served to guide practitioners in developing and implementing digital tools in their programming. The plenary session at MERL Tech DC 2019 titled “Living Our Vision: Applying the Principles of Digital Development as an Evaluative Methodology” introduced attendees to four evaluation tools that have been developed to help organizations incorporate the Digital Principles into their design, planning, and assessments.

Laura Walker McDonald explaining the Monitoring and Evaluation Framework. (Photo by Christopher Neu)

This panel – organized and moderated by Allana Nelson, Senior Manager for Digital Principles stewardship at the Digital Impact Alliance (DIAL) – highlighted digital development frameworks and tools developed by SIMLab; USAID in collaboration with John Snow Inc.; DIAL in collaboration with TechChange; and the Response Innovation Lab. These frameworks and toolkits were built on the good-practice guidance provided by the Principles for Digital Development. They are intended to help development practitioners be more thoughtful about how they use technology and digital innovations in their programs and organizations. Furthermore, the toolkits help organizations build evidence to inform program development.

Laura Walker McDonald, Senior Director for Insights and Impact at DIAL, presented the Monitoring and Evaluation Framework (developed during her time at SIMLab), which assists practitioners in measuring the impact of their work and the contribution of inclusive technologies to their impact and outcomes. This Monitoring and Evaluation Framework was developed out of the need for more evidence of the successes and failures of technology for social change. “We have almost no evidence of how innovation is brought to scale. This work is trying to reflect publicly the practice of sharing learnings and evaluations. Technology and development isn’t as good as it could be because of this lack of evidence,” McDonald said. The Principles for Digital Development provide the Framework’s benchmarks. McDonald continues to refine this Framework based on feedback from community experts, and she welcomes input that can be shared through this document.

Christopher Neu, COO of TechChange, introduced the new, cross-sector Digital Principles Maturity Matrix Tool for Proposal Evaluation that his team developed on behalf of DIAL. The Maturity Matrix tool helps donors and implementers assess how the Digital Principles are planned for use, starting from the proposal creation process. Donors may use the tool to evaluate proposal responses to their funding opportunities, and implementers may use the tool as they write their proposals. “This is a tool to give donors and implementers a way to talk about the Digital Principles in their work. This is the beginning of the process, not the end,” Neu said during the session. Users of the Maturity Matrix Tool score themselves on a rating between one and three against metrics that span each of the nine Digital Principles and the four stages of the Digital Principles project lifecycle. A program is scored one when it loosely incorporates the identified activity or action into proposals and implementation. A score of two indicates that the program is clearly in line with best practices or that the proposal's writers have at least thought considerably about them. Those who incorporate the Digital Principles on a deeper level and provide an action plan to increase engagement earn a score of three. It is important to note that not every project will require the same level of Digital Principles maturity, and not every Digital Principle need be used in every program. The scores are intended to give donors and organizations evidence that they are making the best and most responsible investment in technology.
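As a data structure, the Maturity Matrix can be sketched as nine principles scored from one to three at each of four lifecycle stages. The principle names below follow the published Digital Principles, but the stage labels, scores, and averaging are illustrative assumptions rather than the tool's actual scoring logic.

```python
# Hypothetical stage labels for the Digital Principles project lifecycle.
STAGES = ["analyze_plan", "design_develop", "deploy_implement", "evaluate_evolve"]

# The nine Digital Principles.
PRINCIPLES = [
    "Design with the User", "Understand the Existing Ecosystem",
    "Design for Scale", "Build for Sustainability", "Be Data Driven",
    "Use Open Standards, Open Data, Open Source, and Open Innovation",
    "Reuse and Improve", "Address Privacy and Security", "Be Collaborative",
]

def summarize(matrix: dict) -> dict:
    """Average each principle's 1-3 scores across the four lifecycle stages."""
    return {
        principle: round(sum(scores.values()) / len(scores), 2)
        for principle, scores in matrix.items()
    }

# Example scores: 1 = loosely incorporated, 2 = in line with best practice,
# 3 = deep engagement with an action plan.
matrix = {
    "Design with the User": dict(zip(STAGES, [3, 2, 2, 1])),
    "Address Privacy and Security": dict(zip(STAGES, [2, 2, 3, 2])),
}
print(summarize(matrix))
# {'Design with the User': 2.0, 'Address Privacy and Security': 2.25}
```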

Steve Ollis, Senior Digital Health Advisor at John Snow Inc., presented the Digital Health Investment Review Tool (DHIRT), which helps donors investing in digital health programs make informed decisions about their funding. The tool asks donors to adhere to the Digital Principles and to the Principles of Donor Alignment for Digital Health (Digital Investment Principles), which are also based on the Digital Principles. When implementing this tool, practitioners can assess implementer proposals across 12 criteria. After receiving a score between one and five (one being nascent and five being optimized), organizations can better assess how effectively they incorporate the Digital Principles and other best practices (including change management) into their project proposals.

Max Vielle, Global Director of Response Innovation Lab, introduced the Innovation Evidence Toolkit, which helps technology innovators in the humanitarian sector build evidence to thoughtfully develop and assess their prototypes and pilots. “We wanted to build a range of tools for implementers to assess their ability to scale the project,” Vielle said of the toolkit. Additionally, the tool assists innovators in determining the scalability of their technologies. The Innovation Evidence Toolkit helps humanitarian innovators and social entrepreneurs think through how they use technology when developing, piloting, and scaling their projects. “We want to remove the barriers for non-humanitarian actors to act in humanitarian responses to get services to people who need them,” Vielle said. This accessible toolkit can be used by organizations with varying levels of capacity and is available offline for those working in low-connectivity environments.

Participants discuss the use of different tools for evaluating the Principles. (Photo by Christopher Neu)

Evidence-based decision making is key to improving the use of technologies in the development industry. The coupling of the Principles of Digital Development and evaluation methodologies will assist development practitioners, donors, and innovators not only in building evidence, but also in effectively implementing programs that align with the Digital Principles.

Big Data, Big Responsibilities

by Catherine Gwin

Big data comes with big responsibilities: both the funder and the recipient organization have ethical and data security obligations.

Big data allows organizations to count and bring visibility to marginalized populations and to improve decision-making. However, concerns about data privacy, security, and integrity pose challenges for data collection and data preservation. What does informed consent look like in data collection? What are the potential risks we bring to populations? What are the risks of compliance?

Throughout the MERL Tech DC panel, “Big Data, Big Responsibilities,” Mollie Woods, Senior Monitoring, Evaluation and Learning (MEL) Advisor of ChildFund International and Michael Roytman, Founder and Board Director of Dharma Platform, unpacked some of the challenges based on their experiences. Sam Scarpino, Dharma’s Chief Strategy Officer, served as the session moderator, posing important questions about this area.

The session highlighted three takeaways organizations should consider when approaching data security.

1) Language Barriers between Evaluators and Data Scientists

Both Roytman and Woods agreed that the divide between evaluators and data scientists stems from each group's lack of knowledge of the other's field. How do you ask a question when you don't know you need to ask it?

In Woods' experience, the monitoring and evaluation team and the IT team each have a role in data security, but they work independently. The rapid evolution of the M&E field leaves little time for staying attuned to data security needs. Additionally, an organization's limited resources can keep the IT team from supporting programmatic data security.

A potential solution ChildFund has considered is investing in an IT person with a focus on MERL who has experience and knowledge in the international or humanitarian sphere. However, many organizations fall short when it comes to financing data security. In addition, identifying an individual with these skills can be challenging.

2) Data Collection

Data breaches expose confidential information, which puts vulnerable populations at risk of exploitative use of their data and potential harm. As we gather data, we must ask what informed consent looks like. Are we communicating to beneficiaries the risks of releasing their personal information?

In Woods' experience, ChildFund approaches data security through a child-safeguarding lens across stakeholders and program participants, where all are responsible for data security. Its child safeguarding policy entails data security protocols and privacy; however, Woods noted that dissemination and implementation across countries remain a lingering question. Many in-country civil society organizations lack the capacity, knowledge, and resources to implement data security protocols, especially if they are working in a country context that has no laws, regulations, or frameworks related to data security and privacy. Currently, ChildFund is advocating for refresher trainings on the policy so that everyone involved in its global partnerships stays updated on organizational changes.

3) Data Preservation

The issue of data breaches is a privacy concern when organizations' data includes sensitive information about individuals, putting beneficiaries at risk of exploitation by bad actors. Roytman explained that specific actors, risks, and threats affect specific kinds of data, though humanitarian aid organizations are not always a primary target. Nonetheless, this shouldn't distract organizations from potential risks; rather, it should open discussion around how to identify and mitigate them.

Protecting sensitive data requires a proper security system, something that not all platforms provide, especially if they are free. Ultimately, security is a financial investment that requires time in order to avoid and mitigate risks and potential threats. To increase support and investment in security, ChildFund is working with Dharma to pilot a small program demonstrating the use of big data analytics with a built-in data security system.

Roytman suggested approaching ethical concerns by applying the CIA triad: Confidentiality, Integrity, and Availability. There will always be tradeoffs, he said. If we don't properly invest in data security and mitigate potential risks, there will be additional challenges to data collection. If we don't understand data security, how can we ensure informed consent?

Many organizations find themselves doing more harm than good due to lack of funding. Big data can be an inexpensive approach to collecting large quantities of data, but if it leads to harm, there is a problem. This is a complex issue to resolve; however, as Roytman concluded, the opposite of complexity is not simplicity, but rather transparency.

See Dharma’s blog post about this session here.

Messaging Platforms: Best Practices, Costs, Security, and Privacy

Session by: Maurice Sayinzoga (Digital Impact Alliance), Boris Maguire (Echo Mobile), Christoph Pimmer (Learning Across Frontiers), and Charles Copley (Praekelt.Org)

Written by: Cathy Richards

“By 2019, an estimated 3.9 billion people will be using messaging apps (Activate & WSJ Tech). NGOs and the development community have already begun to embrace the opportunity afforded by these platforms to reach more people and track progress and success of their programs.” Led by Maurice Sayinzoga (Digital Impact Alliance), Boris Maguire (Echo Mobile), Christoph Pimmer (Learning Across Frontiers), and Charles Copley (Praekelt.Org), this panel session provided an overview of the current landscape of messaging platforms, described how these platforms are being leveraged for development and relief work, and discussed the related opportunities and challenges of implementation.

Dr. Christoph Pimmer spoke about his study of nurses and nursing students who participated in WhatsApp-moderated professional groups during placements and school-to-work transitions. He found that these groups generally enhanced participants' knowledge and resilience and reduced professional isolation and stress. The project involved not only formal training and education but also informal learning and problem solving, such as knowledge transfers. The challenges Dr. Pimmer encountered mainly stemmed from the unregulated nature of these groups: risks of patient privacy breaches, boundaries blurred relative to the traditional healthcare provider/client relationship, increased proliferation of misinformation, and occasional inappropriate use of the chat groups at the bedside. His recommendations included leveraging pre-existing social capital by encouraging local leaders to initiate the groups themselves and act as moderators. Similarly, these local leaders can develop ground rules on the scope and accepted behavior of these groups.

Maurice Sayinzoga listed affordability as the top concern around the use of messaging platforms. His recommendations include:

  1. Go where people’s attention already lies: Try to not only understand regional communication preferences (do people prefer SMS over WhatsApp?) but also the community’s communication behaviors and preferences among different demographics. 
  2. Prioritize user needs over implementer needs: Conduct user research on the various platforms and select based on user appeal not necessarily ease of integration. Understand the costs that users will pay to use your system along with their willingness to pay. Remember that SMS is still an option.
  3. Partner for scale and technical expertise: Make sure you have enough resources to grow. Governments can help overcome the challenges to scale while third party developers can fill technical gaps. Similarly, messaging app providers can help overcome limitations of features and policies while partners can help provide content. 
  4. Prioritize content and personnel: Systems are only as good as the content they provide. Make sure to develop and maintain sectoral expertise, and make plans to handle user feedback and inquiries.
  5. When possible, engage more users through multiple channels: Make sure to assess who and how many people can access each messaging platform, the cost savings for users when provided with multiple channels, and the costs and potential complications that multiple channels can add to a project. Prepare to manage these parallel systems when possible.
  6. Take into account the gender gap: The gap in internet use in Africa has widened since 2013. Women in low- and middle-income countries are 10% less likely than men to own a mobile phone.

Charles Copley spoke about how his organization is partnering with WhatsApp to programmatically deliver messages in development contexts. They are conducting research studies on the use of this tool to improve outcomes, using methods such as 2x2x2 factorials, sequential multiphase adaptive randomized trials, experiments as markets, and natural language processing, specifically in the maternal health context with the MomConnect project and the related Turn application. Copley's tips include:

  1. Balance individual privacy with overall good: Aim for a consent-driven model. While certain data would be useful for developing improved health systems, this data still has to be used responsibly.
  2. Consider anonymous groups: Develop the capacity to host anonymous chat groups in which individuals do not know each other’s names or numbers. 
  3. Remember that websites are still foreign in certain contexts: It is much more common to interact with a Facebook page or WhatsApp contact than to visit an actual website. Additionally, mobile surveys can be data-heavy.

5 tips for operationalizing Responsible Data policy

By Alexandra Robinson and Linda Raftree

MERL and development practitioners have long wrestled with complex ethical, regulatory, and technical aspects of adopting new data approaches and technologies. The topic of responsible data (RD) has gained traction over the past five years or so, and a handful of early adopters have developed and begun to operationalize institutional RD policies. Translating policy into practical action, however, can feel daunting to organizations. Constrained budgets, complex internal bureaucracies, and ever-evolving technology and regulatory landscapes make it hard to even know where to start.

The Principles for Digital Development provide helpful high level standards, and donor guidelines (such as USAID’s Responsible Data Considerations) offer additional framing. But there’s no one-size-fits-all policy or implementation plan that organizations can simply copy and paste in order to tick all the responsible data boxes. 

We don’t think organizations should do that anyway, given that each organization’s context and operating approach is different, and policy means nothing if it’s not rolled out through actual practice and behavior change!

In September, we hosted a MERL Tech pre-workshop on Operationalizing Responsible Data to discuss and share different ways of turning responsible data policy into practice. Below we’ve summarized some tips shared at the workshop. RD champions in organizations of any size can consider these when developing and implementing RD policy.

1. Understand Your Context & Extend Empathy

  • Before developing policy, conduct a non-punitive assessment (a.k.a. a landscape assessment, self-assessment, or staff research process) of existing data practices, norms, and decision-making structures. This should engage everyone who will be using or be affected by the new policies and practices. Help everyone relax and feel comfortable sharing how they've been managing data up to now, so that the organization can then improve. (Hint: avoid the term ‘audit’, which makes everyone nervous.)
  • Create ‘safe space’ to share and learn through the assessment process:
    • Allow staff to speak anonymously about their challenges and concerns whenever possible
    • Highlight and reinforce promising existing practices
    • Involve people in a ‘self-assessment’
    • Use participatory workshops (e.g. work with a team to map a project's data flows, or conduct a Privacy Impact Assessment or a Risk-Benefits Assessment). This allows everyone who participates to gain RD awareness while learning new practical tools, and it highlights any areas that need attention. The workshop lead or “RD champion” can also then get a better sense of the wider organization's knowledge, attitudes, and practices related to RD.
    • Acknowledge (and encourage institutional leaders to affirm) that most staff don't have “RD expert” written into their job descriptions; reinforce that staff will not be ‘graded’ or evaluated on skills they weren't hired for.
  • Identify organizational stakeholders likely to shape, implement, or own aspects of RD policy, and tailor your engagement strategies to their perspectives, motivations, and concerns. Some may feel motivated financially (avoiding fines or the cost of a data breach); others may be motivated by human rights or ethics; others still might be most concerned with RD as it relates to reputation, trust, funding, and PR.
  • Map organizational policies, major processes (like procurement, due diligence, grants management), and decision making structures to assess how RD policy can be integrated into these existing activities.

2. Consider Alternative Models to Develop RD Policy 

  • There is no ‘one size fits all’ approach to developing RD policy. As the (still small, but promising) number of organizations adopting policy grows, different approaches are emerging. Here are some that we’ve seen:
    • Top-down: An institutional-level policy is developed, normally at the request of someone on the leadership team/senior management. It is then adapted and applied across projects, offices, etc. 
      • Works best when there is strong leadership buy-in for RD policy and a focal point (e.g. an ‘Executive Sponsor’) coordinating policy formation and navigating stakeholders
    • Bottom-up: A group of staff are concerned about RD but do not have support or interest from senior leadership, so they ‘self-start’ the learning process and begin shaping their own practices, joining together, meeting, and communicating regularly until they have wider buy-in and can approach leadership with a use case and budget request for an organization-wide approach.
      • Good option if there is little buy-in at the top and you need to build a case for why RD matters.
    • Project- or Team-Generated: Development and application of RD policies are piloted within a targeted project or projects or on one team. Based on this smaller slice of the organization, the project or team documents its challenges, process, and lessons learned to build momentum for and inform the development of future organization-wide policy. 
      • Promising option when organizational awareness and buy-in for RD is still nascent and/or resources to support RD policy formation and adoption (staff, financial, etc.) are limited.
    • Hybrid approach: Organizational policy/policies are developed through pilot testing across a reasonably representative sample of projects or contexts. For example, an organization with diverse programmatic and geographical scope develops and pilots policies in a select set of country offices that offer different learning and experiences: e.g., a humanitarian-focused setting, a development-focused setting, and a mixed setting; a small office, a medium-sized office, and a large office; three or four offices in different regions; offices that are funded in various ways; etc.
      • Promising option when an organization is highly decentralized and works across diverse country contexts and settings. Supports the development of approaches that are relevant and responsive to diverse capacities and data contexts.

3. Couple Policy with Practical Tools, and Pilot Tools Early and Often

  • In order to translate policy into action, couple it with practical tools that support existing organizational practices. 
  • Make sure tools and processes empower staff to make decisions and relate clearly to policy standards or components; for example:
    • If the RD policy includes a high-level standard such as, “We ensure that our partnerships with technology companies align with our RD values,” give staff tools and guidance to assess that alignment. 
  • When developing tools and processes, involve target users early and iteratively. Don’t worry if draft tools aren’t perfectly formatted. Design with users to ensure tools are actually useful before you sink time into tools that will sit on a shelf at best, and confuse or overburden staff at worst. 

4. Integrate and “Right-Size” Solutions 

  • As RD champions, it can be tempting to approach RD policy in a silo, forgetting it is one of many organizational priorities. Be careful to integrate RD into existing processes, align RD with decision-making structures and internal culture, and do not place unrealistic burdens on staff.
  • When building tools and processes, work with stakeholders to develop responsibility assignment charts (e.g. RACI, MOCHA) and determine decision makers.
  • When developing responsibility matrices, estimate the hours each stakeholder (including partners, vendors, and grantees) will dedicate to a particular tool or process. Work with anticipated end users to ensure that processes:
    • Can realistically be carried out within a normal workload
    • Will not excessively burden staff and partners
    • Are realistically proportionate to the size, complexity, and risk involved in a particular investment or project

5. Bridge Policy and Behavior Change through Accompaniment & Capacity Building 

  • Integrating RD policy and practices requires behavior change and can feel technically intimidating to staff. Remember to reassure staff that no one (not even the best-resourced technology firms!) has responsible data mastered, and that perfection is not the goal.
  • In order to feel confident using new tools and approaches to make decisions, staff need knowledge to analyze information. Skills and knowledge required will be different according to role, so training should be adapted accordingly. While IT staff may need to know the ins and outs of network security, general program officers certainly do not. 
  • Accompany staff as they integrate RD processes into their work. Walk alongside them, answering questions along the way, but more importantly, helping staff build confidence to develop their own internal RD compass. That way the pool of RD champions will grow!

What approaches have you seen work in your organization?

MERL Tech DC 2019 Feedback Report

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of the conference and related community are to:

  • Improve development, tech, data & MERL literacy
  • Help people find and use evidence & good practices
  • Promote ethical and appropriate use of technology
  • Build and strengthen a “MERL Tech community”
  • Spot trends and future-scope for the sector
  • Transform and modernize MERL in an intentionally responsible and inclusive way

Our sixth MERL Tech DC conference took place on September 5-6, 2019, and we held four pre-workshops on September 4. Some 350 people from 194 organizations joined us for the two days, and another 100 people attended the pre-workshops. About 56% of participants attended for the first time, whereas 44% were returnees.

Who attended?

Attendees came from a wide range of organization types and professions.

Conference Themes

The theme for this year’s conference was “Taking Stock” and we had 4 sub-themes:

  1. Tech and Traditional MERL
  2. Data, Data, Data
  3. Emerging Approaches to MERL
  4. The Future of MERL

State of the Field Research

A small team shared their research on “The MERL Tech State of the Field,” organized into the above four themes. The research will be completed and shared on the MERL Tech site before the end of 2019. (We'll be presenting it at the South African Evaluation Association Conference in October and at the American Evaluation Association conference in November.)

As always, MERL Tech conference sessions related to: technology for MERL, MERL on ICT4D and digital development programs, MERL of MERL Tech, data for decision-making, ethical and responsible data approaches, and cross-disciplinary community building. (See the full agenda here.)

We checked in with participants on the last day to see how the field had shifted since 2015, when our keynote speaker (Ben Ramalingam) gave some suggestions on how tech could improve MERL.

Images: Ben's future vision; where MERL Tech 2019 sessions fell on the expired-tired-wired schematic; and what participants would add to the schematic to update it for 2019 and beyond.

Diversity and Inclusion

We have been making an effort to improve diversity and inclusion at the conference and in the MERL Tech space. An unofficial estimate of speaker racial and gender diversity is below. As compared to 2018, when we first began tracking, the share of women of color speakers increased by 5% and the share of men of color speakers by 2%. The share of white female speakers decreased by 6%, and the share of white male speakers went down by 1%. Our gender balance remained fairly consistent.

Where we are failing on diversity and inclusion is in having speakers and participants from outside of North America and Europe; that likely has to do with costs and visas, which affect who can attend. It also has to do with whom organizations select to represent them at MERL Tech. We're continuing to try to find ways to collaborate with groups working on MERL Tech in different regions. We believe that new and/or historically marginalized voices should be more involved in shaping the future of the sector and the future of MERL Tech. (If you would like to support us on this or get involved, please contact Linda!)

Post Conference Feedback

Some 25% of participants filled in the post-conference survey and 85% rated their experience “good” or “awesome” (up from 70% in 2018). Answers did not significantly differ based on whether a participant had attended previously or not. Another 8.5% rated sessions via the “Sched” conference agenda app, with an average session satisfaction rating of 9.1 out of 10.

The top rated session was on “Decolonizing Data and Technology in MERL.” As one participant said, “It shook me out of my complacency. It is very easy to think of the tech side of the work we do as ‘value free’, but this is not the case. Being a development practitioner it is important for me to think about inequality in tech and data further than just through the implementation of the projects we run.” Another noted that “As a white, gay male who has a background in international and intercultural education, it was great to see other fields bringing to light the decolonizing mindset in an interactive way. The session was enlightening and brought up conversation that is typically talked about in small groups, but now it was highlighted in front of the entire audience.”

Sign up for MERL Tech News if you’d like to read more about this and other sessions. We’re posting a series of posts and session summaries.

Key suggestions for improving next time were similar to those we hear every year: less showcasing and pitching, ensuring that session titles match what is actually delivered, ensuring that presenters are well prepared, and making sessions relevant, practical, and applicable.

Additionally, several people commented that the venue had some issues with noise from conversations in the common area spilling into breakout rooms and making it hard to focus. Participants also complained that there was a large amount of trash and waste produced, and suggested more eco-friendly catering for next time.

Access the full feedback report here.

Where/when should the conference happen?

As noted, we are interested in finding a model for MERL Tech that allows for more diversity of voices and experiences, so we asked participants how often and where they thought we should hold MERL Tech in the future. The largest group (44.3%) felt we should run MERL Tech in DC every two years, and somewhere else in the intervening years. Some 23% said to keep it in DC every year, and around 15% suggested multiple MERL Tech conferences each year, in DC and elsewhere. (We were pleased that no one selected the option of “stop doing MERL Tech altogether, it's unnecessary.”)

Given this response, we will continue exploring options with partners who would like to provide financial and logistical support to enable MERL Tech to happen outside of DC. Please contact Linda if you'd like to be involved or have ideas on how to make this happen.

New ways to get involved!

Last year, the idea of having a GitHub repository was raised, and this year we were excited to have GitHub join us. They had come up with the idea of creating a MERL Tech Center on GitHub as well, so it was a perfect match! More info here.

We also had a request to create a MERL Tech Slack channel (which we have done). Please get in touch with Linda by email or via Slack if you’d like to join us there for ongoing conversations on data collection, open source, technology (or other channels you request!)

As always you can also follow us on Twitter and MERL Tech News.