All posts by Guest Post

Visualizing Your Network for Adaptive Program Decision Making

By Alexis Banks, Jennifer Himmelstein, and Rachel Dickinson

Social network analysis (SNA) is a powerful tool for understanding the systems of organizations and institutions in which your development work is embedded. It can be used to create interventions that are responsive to local needs and to measure systems change over time. But, what does SNA really look like in practice? In what ways could it be used to improve your work? Those are the questions we tackled in our recent MERL Tech session, Visualizing Your Network for Adaptive Program Decision Making. ACDI/VOCA and Root Change teamed up to introduce SNA, highlight examples from our work, and share some basic questions to help you get started with this approach.

MERL Tech 2019 participants working together to apply SNA to a program.

SNA is the process of mapping and measuring relationships and information flows between people, groups, organizations, and more. Using key SNA metrics enables us to answer important questions about the systems where we work. Common SNA metrics include (learn more here):

  • Reachability, which helps us determine if one actor, perhaps a local NGO, can access another actor, such as a local government;
  • Distance, which is used to determine how many steps, or relationships, there are separating two actors;
  • Degree centrality, which is used to understand the role that a single actor, such as an international NGO, plays in a system by looking at the number of connections with that organization;
  • Betweenness, which enables us to identify brokers or “bridges” within networks by identifying actors that lie on the shortest path between others; and
  • Change Over Time, which allows us to see how organizations and relationships within a system have evolved.
Using betweenness to address bottlenecks.
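
As an illustration (a toy network and the open-source NetworkX library, not the tools Root Change or ACDI/VOCA actually use), the metrics above can be computed in a few lines:

```python
import networkx as nx

# Hypothetical directed network: an edge means "shares information with".
G = nx.DiGraph()
G.add_edges_from([
    ("Local NGO", "INGO"),
    ("INGO", "Local Government"),
    ("INGO", "Donor"),
    ("Farmer Co-op", "INGO"),
])

# Reachability: can the local NGO reach the local government at all?
print(nx.has_path(G, "Local NGO", "Local Government"))               # True

# Distance: how many steps, or relationships, separate them?
print(nx.shortest_path_length(G, "Local NGO", "Local Government"))   # 2

# Degree centrality: how connected is the INGO relative to the network?
print(nx.degree_centrality(G)["INGO"])

# Betweenness: which actors sit on the shortest paths between others (brokers)?
print(nx.betweenness_centrality(G)["INGO"])
```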

SNA in the Program Cycle

SNA can be used throughout the design, implementation, and evaluation phases of the program cycle.

Design: Teams at Root Change and ACDI/VOCA use SNA in the design phase of a program to identify initial partners and develop an early understanding of a system–how organizations do or do not work together, what barriers are preventing collaboration, and what strategies can be used to strengthen the system.

As part of the USAID Local Works program, Root Change worked with the USAID mission in Bosnia and Herzegovina (BiH) to launch a participatory network map that identified over 1,000 organizations working in community development in BiH, many of which had been previously unknown to the mission. It also provided the foundation for a dialogue with system actors about the challenges facing BiH civil society.

To inform project design, ACDI/VOCA’s Feed the Future Tanzania NAFAKA II Activity, funded by USAID, conducted a network analysis to understand the networks associated with village-based agricultural advisors (VBAAs): what services they were already offering to farmers, which had the most linkages to rural actors, which actors were serving as bottlenecks, and more. This helped the project identify which VBAAs to work with through small grants and technical assistance (e.g., key actors), and what additional linkages needed to be built between VBAAs and other types of actors.

NAFAKA II Tanzania

Implementation: We also use SNA throughout program implementation to monitor system growth, increase collaboration, and inform learning and program design adaptation. ACDI/VOCA’s USAID/Honduras Transforming Market Systems Activity uses network analysis as a tool to track business relationships created through primary partners. For example, one such primary partner is the Honduran Chamber of Tourism, which facilitates business relationships through group training workshops and other types of technical assistance. The project can then follow up on these new relationships to gather data on indirect outcomes (e.g., jobs created, sales, and more).

Root Change used SNA throughout implementation of the USAID funded Strengthening Advocacy and Civic Engagement (SACE) program in Nigeria. Over five years, more than 1,300 organizations and 2,000 relationships across 17 advocacy issue areas were identified and tracked. Nigerian organizations came together every six months to update the map and use it to form meaningful partnerships, coordinate advocacy strategies, and hold the government accountable.

SACE participants explore a hand drawn network map.

Evaluating Impact: Finally, our organizations use SNA to measure results at the mid-term or end of project implementation. In Kenya, Root Change developed the capacity of Aga Khan Foundation (AKF) staff to carry out a baseline, and later an end-of-project network analysis of the relationships between youth and organizations providing employment, education, and entrepreneurship support. The latter analysis enabled AKF to evaluate growth in the network and the extent to which gaps identified in the baseline had been addressed.

AKF’s Youth Opportunities Map in Kenya

The Feed the Future Ghana Agricultural Development and Value Chain Enhancement II (ADVANCE II) Project, implemented by ACDI/VOCA and funded by USAID, leveraged existing database data to map the outgrower business networks that were established as a result of the project. This was an important way of demonstrating one of ADVANCE II’s major outcomes: creating a network of private service providers that serve as resources for inputs, financing, and training, as well as hubs for aggregating crops for sale.

Approaches to SNA
There is a plethora of tools to help you incorporate SNA into your work, ranging from bespoke software custom built for each organization to free, open-source applications.

Root Change uses Pando, a web-based, participatory tool that uses relationship surveys to generate real-time network maps that use basic SNA metrics. ACDI/VOCA, on the other hand, uses unique identifiers for individuals and organizations in its routine monitoring and evaluation processes to track relational information for these actors (e.g. cascaded trainings, financing given, farmers’ sales to a buyer, etc.) and an in-house SNA tool.
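
As a rough sketch of the unique-identifier idea (hypothetical record fields, not ACDI/VOCA’s actual schema or tool), relational records from routine monitoring can be rolled up into year-by-year networks for analysis:

```python
import networkx as nx

# Hypothetical routine M&E records keyed by unique actor IDs.
records = [
    {"source": "VBAA-014", "target": "FARMER-2291", "type": "training", "year": 2018},
    {"source": "VBAA-014", "target": "BUYER-07",    "type": "sale",     "year": 2018},
    {"source": "VBAA-031", "target": "FARMER-1033", "type": "training", "year": 2019},
]

def network_for_year(records, year):
    """Build the relationship network observed in a given year."""
    G = nx.Graph()
    G.add_edges_from((r["source"], r["target"]) for r in records if r["year"] == year)
    return G

# Comparing year-on-year networks is one way to look at change over time.
print(network_for_year(records, 2018).number_of_edges())  # 2
```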

Applying SNA to Your Work
What do you think? We hope we’ve piqued your interest! Using the examples above, take some time to consider ways that SNA could be embedded into your work at the design, implementation, or evaluation stage of your work using this worksheet. If you get stuck, feel free to reach out (Alexis Banks, abanks@rootchange.org; Rachel Dickinson, rdickinson@rootchange.org; Jennifer Himmelstein, JHimmelstein@acdivoca.org)!

Practicing Safe Monitoring and Evaluation in the 21st Century

By Stephen Porter. Adapted from the original post published here.

Monitoring and evaluation practice can do harm. It can harm:

  • the environment by prioritizing economic gain over species that have no voice
  • people who are invisible to us when we are in a position of power
  • by asking for information that can then be misused.

In the quest for understanding What Works, the focus is often too narrowly on program goals rather than the safety of people. A classic example in the environmental domain is the use of DDT: “promoted as a wonder-chemical, the simple solution to pest problems large and small. Today, nearly 40 years after DDT was banned in the U.S., we continue to live with its long-lasting effects.” The original evaluation of its effects had failed to identify harm and emphasized its benefits. Only when harm to the ecosystem became more apparent was evidence presented in Rachel Carson’s book Silent Spring. We should not have to wait for failure to be so apparent before evaluating for harm.

Join me, Veronica Olazabal, Rodney Hopson, Dugan Fraser and Linda Raftree, for a session on “Institutionalizing Doing no Harm in Monitoring and Evaluation” on Thursday, Nov 14, 2019, 8-9am, Room CC M100 H, at the American Evaluation Association Conference in Minneapolis.

Ethical standards have been developed for evaluators, which are discussed at conferences and included in professional training. Yet institutional monitoring and evaluation practices still struggle to get to grips with the reality of harm amid the pressure to report results. If we want monitoring and evaluation to be safer for the 21st Century, we need to shift from training and evaluator-to-evaluator discussions to changing institutional practices.

At a workshop convened by Oxfam and the Rockefeller Foundation in 2019, we sought to identify core issues that could cause harm and to pin down areas where institutions need to change practices. The workshop brought together partners from UN agencies, philanthropies, research organizations, and NGOs, and sought to give substance to these issues. One participant noted that although the UNEG Norms and Standards and UNDP’s evaluation policy are designed to make evaluation safe, in practice little consideration is given to capturing or understanding the unintended or perverse consequences of programs or policies. The workshop explored this and other issues and identified three areas of practice that could help reframe institutional monitoring and evaluation in a practical manner.

1. Data rights, privacy and protection: 

In working on rights in the 21st Century, data and information are some of the most important ‘levers’ pulled to harm and disadvantage people. Oxfam has had a Responsible Data in Program policy in place since 2015, which goes some way towards recognizing this. But we know we need to more fully implement data privacy and protection measures in our work.

At Oxfam, work is continuing to build a rights-based approach which already includes aligned confederation-wide Data Protection Policies, implementation of responsible data management policy and practices and other tools aligned with the Responsible Data Policy and European Privacy law, including a responsible data training pack.

Planned and future work includes stronger governance, standardized baseline measures of privacy and information security, and communications, guidance, and change management. This includes changes in evaluation protocols related to how we assess risk to the people we work with, who gets access to the data, and how we ensure consent for how the data will be used.

This is a start, but consistent implementation is hard, and if we know we aren’t competent at operating the controls within our own reach, it becomes much harder to call others out when they cause harm by misusing theirs.

2. Harm prevention lens for evaluation

The discussion highlighted that evaluation has not often sought to understand the harm of practices or interventions. When it does, however, the results can powerfully shed new light on an issue. A case that starkly illustrates potential under-reporting is that of the UN Military Operation in Liberia (UNMIL). UNMIL was put in place with the aim “to consolidate peace, address insecurity and catalyze the broader development of Liberia”. Traditionally we would evaluate this objective. Taking a harm lens, we might instead evaluate the sexual exploitation and abuse related to the deployment. The reporting system highlights low levels of abuse: 14 cases from 2007–2008 and 6 in 2015. A study by Beber, Gilligan, Guardado and Karim, however, estimated through a representative randomized survey that more than half of eighteen- to thirty-year-old women in greater Monrovia had engaged in transactional sex, and that most of them (more than three-quarters, or about 58,000 women) had done so with UN personnel, typically in exchange for money.

Changing evaluation practice should not just focus on harm within human systems, but should also provide insight into the broader ecosystem. Institutionally, there needs to be championship for identifying harm within and through monitoring and evaluation practice, and for changing that practice.

3. Strengthening safeguarding and evaluation skills

We need to resource teams appropriately so they have the capacity to be responsive to harm and reflective on the potential for harm. This is both about tools and procedures and conceptual frames.

Tools and procedures can include, for example:

  • Codes-of-conduct that create a safe environment for reporting issues
  • Transparent reporting lines to safeguarding/safe programming advisors
  • Training based on actual cases
  • Safe data protocols (see above)

All of these fall by the wayside, however, if the values and concepts that guide implementation are absent. At the workshop, Rodney Hopson, drawing on environmental policy and concepts of ecology, presented a frame for increasing evaluators’ usefulness in complex ecologies where safeguarding issues are prevalent. The frame emphasizes:

  • Relationships – the need to identify and relate to key interests, interactions, variables and stakeholders amid dynamic and complex issues in an honest manner that is based on building trust.
  • Responsibilities – acting with propriety, doing what is proper, fair, right, just in evaluation against standards.
  • Relevance – being accurate and meaningful technically, culturally and contextually.

Safe monitoring and evaluation in the 21st Century does not just ask ‘What works?’; it must also relentlessly ask ‘How can we work differently?’. This includes understanding how harm in human systems and harm in environmental systems are connected. The three areas noted here are the start of a conversation and a challenge to institutions to think more about what it means to be safe in monitoring and evaluation practice.

Planning to attend the American Evaluation Association Conference this week? Join us for the session “Institutionalizing Doing no Harm in Monitoring and Evaluation” on Thursday, Nov 14, 2019, from 8:00 to 9:00 AM in room CC M100 H.

Panelists will discuss ideas to better address harm with regard to: (i) harm identification and mitigation in evaluation practice; (ii) responsible data practice; (iii) understanding harm in an international development context; and (iv) evaluation in complex ecologies.

The panel will be chaired by Veronica M. Olazabal (Senior Advisor & Director, Measurement, Evaluation and Organizational Performance, The Rockefeller Foundation), with speakers Stephen Porter (Evaluation Strategy Advisor, World Bank), Linda Raftree (Independent Consultant, Organizer of MERL Tech), Dugan Fraser (Prof & Director, CLEAR-AA, University of the Witwatersrand, Johannesburg), and Rodney Hopson (Prof of Evaluation, Department of Ed Psych, University of Illinois Urbana-Champaign). View the full program here: https://lnkd.in/g-CHMEj

Ethics and unintended consequences: The answers are sometimes questions

by Jo Kaybryn

Our MERL Tech DC session, “Ethics and unintended consequences of digital programs and digital MERL” was a facilitated discussion about some of the challenges we face in the Wild West of digital and technology-enabled MERL and the data that it generates. Here are some of the things that stood out from discussions with participants and our experience.

Purposes

Sometimes we are not clear on why we are collecting data. ‘Just because we can’ is not a valid reason to collect or use data and technology. What purposes are driving our data collection and use of technology? What is the problem we are trying to solve? A lack of specificity can allow us to stray into speculative data collection — if we’re collecting data on X, then it’s a good opportunity to collect data on Y “in case we need it in the future”. Do we ever really need it in the future? And if we do go back to it, we often find that because we didn’t collect the data on Y with a specific purpose, it’s not the “right” data for our needs. So let’s always ask ourselves: why are we collecting this data, and do we really need it?

Tensions

Projects are increasingly under pressure to be more efficient and cost-effective in their data collection, yet the need or desire to conduct more robust assessments can require collecting data on multiple dimensions within a community. These two dynamics are often in conflict with each other. Here are three questions that can help guide our decision making:

  • Are there existing data sets that are “good enough” to meet the M&E needs of a project? Often there are, and they are collected regularly enough to be useful. Lean on partners who understand the data space to help map out what exists and what really needs to be collected. Leverage partners who are innovating in the data space – can machine learning and AI-produced data meet 80% of your needs? If so, consider it.
  • What data are we critically in need of to assess a project? Build an efficient data collection methodology that considers respondent burden and potentially includes multiple channels for receiving responses to increase inclusivity.
  • What will the data be used for? Sensitive contexts and life or death decisions require a different level of specificity and periodicity than less sensitive projects. Think about data from this lens when deciding which information to collect, how often to collect it, and who to collect it from.

Access

It is worth exploring questions of access in our data collection practices. Who has access to the data and the technology? Do the people whom the data is about have access to it? Have we considered the harms that could come from the collection, storage, and use of data? For instance, while it can be useful to know where all the clients accessing a pregnancy clinic are located in order to design better services, an unintended consequence may be that others gain the ability to identify who is pregnant, something those clients might not want known. What can we do to protect the privacy of vulnerable populations? Also, going digital can be helpful, but if a person or community implicated in a data collection endeavour does not have access to technology or to a charging point, are we not just increasing or reinforcing inequality?

Transparency

While we often advocate for transparency in many parts of our industry, we are not always transparent about our data practices. Are we willing to tell others, to tell community members, why we are collecting data, using technology, and how we are using information? If we are clear on our purpose but not willing to be transparent about it, that may be a good reason to reconsider. Yet transparency does not equate to accountability, so what are the mechanisms for ensuring greater accountability towards the people and communities we seek to serve?

Power and patience

One of the issues we’re facing is power imbalances. The demands that are made of us from donors about data, and the technology solutions that are presented to us, all make us feel like we’re not in control. But the rules haven’t been written yet — we get to write them.

One of the lessons from the responsible data workshop leading up to the conference was that organisations can get out in front of demands for data by developing their own data management and privacy policies. From this position it is easier to enter into dialogues and negotiations, with the organisational policy as your backstop. Therefore, it is worth asking, Who has power? For what? Where does it reside and how can we rebalance it?

Literacy underpins much of this – linguistic, digital, identity, ethical literacy.  Often when it comes to ‘digital’ we immediately fall under the spell of the tyranny of the urgent.  Therefore,  in what ways can we adopt a more ‘patient’ or ‘reflective’ practice with respect to digital?


Three Tips for Bridging Tech Development and International Development 

by Stephanie Jamilla

The panel “Technology Adoption and Innovation in the Industry: How to Bridge the International Development Industry with Technology Solutions” proved to be an engaging conversation between four technology and international development practitioners. Admittedly, as someone who comes from more of a development background, some parts of this conversation were hard to follow. However, three takeaways stuck out to me after hearing the insights and experiences of Aasit Nanavati, a Director at DevResults; Joel Selanikio, CEO and Co-Founder of Magpi; Nancy Hawa, a Software Engineer at DevResults; and Mike Klein, a Director at IMC Worldwide and the panel moderator.

“Innovation isn’t always creation.”

The fact that organizations often think of innovation and creation as synonymous actually creates barriers to entry for tech in the development market. When asked to speak about these barriers, all three panelists mentioned that clients oftentimes want highly customized tools when they could achieve their goals with what already exists in the market. Nanavati (whose quote titles this section) followed his point about innovation not always requiring creation by asserting that innovation is sometimes just a matter of implementing existing tools really well. Hawa added to this idea by arguing that sometimes development practitioners and organizations should settle for something that’s close enough to what they want in order to save money and resources. When facing clients’ unrealistic expectations about original creation, consultancies should explain that the super-customized system the client asks for may actually be unusable because of the level of complexity this customization would introduce. While this may be hard to admit, communicating with candor is better than the alternative — selling a bad product for the sake of expanding business.

An audience member asked how one could convince development clients to accept the non-customized software. In response, Hawa suggested that consultancies talk about software in a way that non-tech clients understand. Say something along the lines of, “Why recreate Microsoft Excel or Gmail?” Later in the discussion, Selanikio offered another viewpoint. He never tries to persuade clients to use Magpi. Rather, he does business with those who see the value of Magpi for their needs. This method may be effective in avoiding a tense situation between the service provider and client when the former is unable to meet the unrealistic demands of the latter.

We need to close the gap in understanding between the tech and development fields.

Although not explicitly stated, one main conclusion that can be drawn from the panel is that a critical barrier keeping technology from effectively penetrating development is miscommunication and misunderstanding between actors from the two fields. By learning how to communicate better about the technology’s true capacity, clients’ unrealistic expectations, and the failed initiatives that often result from the mismatch between the two, future failures-in-the-making can be mitigated. Interestingly, all three panelists are, in themselves, bridges between these two fields, as they were once development implementors before turning to the tech field. Nanavati and Selanikio used to work in the public health sphere in epidemiology, and Hawa was a special education teacher. Since the panelists were once in their clients’ positions, they better understand the problems their clients face and reflect this understanding in the useful tech they develop. Not all of us have expertise in both fields. However, we must strive to understand and accept the viewpoints of each other to effectively incorporate technology in development. 

Grant funding has its limitations.

This is not to say that you cannot produce good tech outputs with grant funding. However, using donations and grants to fund the research and development of your product may result in something that caters to the funders’ desires rather than the needs of the clients you aim to work with. Selanikio, while very grateful to the initial funders of Magpi, found that once the company began to grow, grants as a means of funding no longer worked for the direction that he wanted to go. As actors in the international development sphere, the majority of us are mission-driven, so when the funding streams hinder you from following that mission, then it may be worth considering other options. For Magpi, this involved having both a free and paid version of its platform. Oftentimes, clients transition from the free to paid version and are willing to pay the fee when Magpi proves to be the software that they need. Creative tech solutions require creative ways to fund them in order to keep their integrity.

Technology can greatly aid development practitioners to make a positive impact in the field. However, using it effectively requires that all those involved speak candidly about the capacity of the tech the practitioner wants to employ and set realistic expectations. Each panelist offered level-headed advice on how to navigate these relationships but remained optimistic about the role of tech in development. 

Chain Reaction: How Does Blockchain Fit, if at All, into Assessments of Value for Money of Education Projects?

by Cathy Richards

In this panel, “Chain Reaction: How does blockchain fit, if at all, into assessments of value for money of education projects,” hosted by Christine Harris-Van Keuren of Salt Analytics, panelists gave examples of how they’ve used blockchain to store activity and outcomes data and to track the flow of finances. Valentine Gandhi from The Development Café served as the discussant.

Value for money analysis (or benefit-cost analysis, cost-economy, cost-effectiveness, cost-efficiency, or cost-feasibility) is defined as an evaluation of the best use of scarce resources to achieve a desired outcome. In this panel, participants examined the value for money of blockchain by taking on an aspect of an adapted value-for-money framework. The framework takes into account resources, activities, outputs, and outcomes. Panel members were specifically asked to explain what they gained and lost by using blockchain as well as whether they had to use blockchain at all.

Ben Joakim is the founder and CEO of Disberse, a new financial institution built on distributed ledger technology. Disberse aims to ensure greater privacy and security for the aid sector — which serves some of the most vulnerable communities in the world. Joakim notes that in the aid sector, traditional banks are often slow and expensive, which can be detrimental during a humanitarian crisis. In addition, traditional banks can lack transparency, which increases the potential for the mismanagement and misappropriation of funds. Disberse works to tackle those problems by creating a financial institution that is not only efficient but also transparent and decentralised, thus allowing for greater impact with available resources. Additionally, Disberse allows for multi-currency accounts, foreign currency exchanges, instant fund transfers, end-to-end traceability, donation capabilities, regulatory compliance, and cash transfer systems. Since inception, Disberse has delivered pilots in several countries including Swaziland, Rwanda, Ukraine, and Australia.

David Mikhail of UNCDF discussed the organization’s use of blockchain technologies in the Nepal remittance corridor. In 2017 alone, Nepal received $6.9 billion in remittances; these funds account for 28.4% of the country’s GDP. One of the main challenges for Nepali migrant families is a lack of financial inclusion, characterized by credit interest rates as high as 30%, a lack of documented credit history, and a lack of sufficient collateral. Secondly, families have a difficult time building capital once they migrate: the high costs of migration, high-interest loans, non-stimulative spending that limits their ability to save and invest, and the lack of credit history all make it difficult for migrants to break free of the poverty cycle. Because of this, the organization asked itself whether it could create a new credit product tied to remittances to provide capital and fuel domestic economic development. In theory, this solution would drive financial inclusion by channeling remittances through the formal sector. The product would not only leverage blockchain in order to create a documented credit history, but would also direct the flow of remittances into short- and long-term savings or credit products that would help migrants generate income and assets.

Tara Vassefi presented on her experience at Truepic, a photo and video verification platform that aims to foster a healthy civil society by pushing back against disinformation. They do this by bolstering the value of authentic photos through the use of verified pixel data from the time of capture and through the independent verification of time and location metadata. Hashed references to time, date, location and exact pixelation are stored on the blockchain. The benefits of using this technology are that the data is immutable and it adds a layer of privacy and security to media. The downsides include marginal costs and the general availability of other technologies. Truepic has been used for monitoring and evaluation purposes in Syria, Jordan, Uganda, China, and Latin America to remotely monitor government activities and provide increased oversight at a lower cost. They’ve found that this human-centric approach, which embeds technology into existing systems, can close the trust gap currently found in society.
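
As a simplified illustration of the general technique (not Truepic’s actual implementation), a hashed reference to capture metadata can be computed like this; the hash, rather than the raw metadata, is what would be written to a ledger:

```python
import hashlib
import json

# Hypothetical capture metadata recorded at the moment a photo is taken.
metadata = {
    "timestamp": "2019-09-05T14:32:10Z",
    "latitude": 33.5138,
    "longitude": 36.2765,
    "pixel_digest": "3f786850e387550fdab836ed7e6dc881de23001b",
}

# A stable, reproducible hash of the metadata; any later edit changes the hash,
# so the stored reference lets anyone verify the media has not been altered.
reference = hashlib.sha256(json.dumps(metadata, sort_keys=True).encode()).hexdigest()
print(reference)
```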

Smartcards for MERL: Worth the Money?

by Jonathan Tan

In 2014, ACDI/VOCA ran into a common problem: their beneficiaries – smallholder farmers in Ghana – had been attending trainings for several agricultural programs, but monitoring workshop attendance and verifying the identity of each attendee was slow, inaccurate, and labor-intensive. There were opportunities for transcription and data-entry errors at several points in the reporting process, each causing downstream delays for analysis and decision-making. So they turned to a technological solution: contactless smartcards.

At MERL Tech DC, Nirinjaka Ramasinjatovo and Nicole Chao ran a session called “Smartcards for MERL: Worth the Money” to share ACDI/VOCA’s experiences.

The system was fairly straightforward: after a one-time intake session at each village, beneficiaries are registered in a central database and a smartcard with their name and photo is printed and distributed for each. They hired developers to build a simple graphical interface to the database for trainers to use. At each training, trainers bring laptops equipped with card readers to take attendance, and the attendance data is synchronized with the database upon return to an internet-connected office. 
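
A minimal sketch of that capture-and-sync flow might look like the following (a hypothetical structure for illustration; the session did not describe ACDI/VOCA’s actual code):

```python
import sqlite3
from datetime import datetime, timezone

# Local store on the trainer's laptop; rows are pushed to the central
# database whenever the laptop is back on an internet connection.
conn = sqlite3.connect("attendance_offline.db")
conn.execute("""CREATE TABLE IF NOT EXISTS attendance (
    card_id TEXT, training_id TEXT, scanned_at TEXT, synced INTEGER DEFAULT 0)""")

def record_scan(card_id, training_id):
    """Called each time a beneficiary's smartcard is read at a training."""
    conn.execute(
        "INSERT INTO attendance (card_id, training_id, scanned_at) VALUES (?, ?, ?)",
        (card_id, training_id, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

def pending_uploads():
    """Rows still waiting to be pushed to the central database."""
    return conn.execute("SELECT * FROM attendance WHERE synced = 0").fetchall()

record_scan("GH-000123", "FTF-TRAINING-07")  # hypothetical IDs
print(pending_uploads())
```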

The speakers discussed several expected and unexpected benefits of introducing the smartcards. Registration was streamlined at trainings, and data collection became faster and more accurate. Attendance and engagement at training sessions also increased. ACDI/VOCA hypothesized that possessing a physical token associated with the otherwise knowledge-based program reminded beneficiaries of its impact; one of the speakers recounted observing some farmers proudly wearing the smartcards on lanyards at non-training social events in the community. Finally, improved data tracking enabled analysts at ACDI/VOCA to compare farmers’ attendance rates at training sessions with their reported agricultural yield increases and thus measure impact more effectively.

Process durations for developing the 2014 smart card system in Ghana (left), vs. the 2018 smart tags in Colombia (right).

Then came the perennial question: what did it cost? And was it worth it? For the 2014 Feed the Future program in Ghana, the smartcard system took 6 months of preparation to deploy (including requirements gathering, software development, hardware procurement, and training). While the cards were fairly inexpensive at 50 to 60 cents (US) apiece, the system had significant fixed costs: card printers were $1,500 each, and total software development cost between $15,000 and $25,000.

ACDI/VOCA sought to improve on this system in a subsequent 2018 emergency response program in Colombia. Instead of smartcards, beneficiaries were issued with small contactless tags, while enumerators used tablets instead of laptops to administer surveys and track attendance. Crucially, rather than hiring developers to write new software from scratch, they made use of Microsoft PowerApps that were more straightforward to deploy; the PowerApp-based system took far less time to test and train enumerators with. It also had the benefit of being easily modifiable post-deployment (which had not been the case with the smart cards). The contactless tags were also less costly at $0.10 to $0.15 apiece, with readers in the $15-20 range. All in all, the contactless tag system deployed in Colombia proved to be far more cost-effective for ACDI/VOCA than the smart cards had been in the previous Ghana project. 
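
A rough back-of-the-envelope comparison using the unit costs above (the beneficiary count and the number of printers and readers are assumed placeholders, and the PowerApps development cost was not reported in the session, so it is treated as negligible here):

```python
def total_cost(fixed, unit_cost, beneficiaries):
    """Simple fixed-plus-variable cost model."""
    return fixed + unit_cost * beneficiaries

beneficiaries = 10_000  # assumed for illustration; not a figure from the session

# Ghana 2014: custom software (~$20,000 midpoint), one $1,500 card printer, $0.55 cards.
ghana = total_cost(fixed=20_000 + 1_500, unit_cost=0.55, beneficiaries=beneficiaries)

# Colombia 2018: PowerApps-based system (development cost assumed negligible),
# two readers at ~$17.50 each, $0.125 contactless tags.
colombia = total_cost(fixed=2 * 17.50, unit_cost=0.125, beneficiaries=beneficiaries)

print(f"Ghana system:    ${ghana:,.0f}")     # $27,000
print(f"Colombia system: ${colombia:,.0f}")  # $1,285
```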

Based on the two projects discussed, the speakers proposed the following set of questions to consider for future projects:

  1. Is there a high number of beneficiaries in your program?
  2. Does each beneficiary have the potential to receive multiple benefits/programs?
  3. Is there a strong need for identity authentication?
  4. Do you have access to software developers?

If you answered “yes” to all four questions, then it is likely that a smart identification system based on cards, tags, etc. will be worth the upfront investment and maintenance costs. If, however, your answer to some or all of them was “no”, there are intermediate solutions that may still be implementable, such as tokens with QR codes or bar codes, which do not provide as strict a proof of identity.

No Data, No Problem: Extracting Insights from Data-Poor Environments

by Jonathan Tan

In data-poor environments, what can you do to get what you need? For Arpitha Peteru and Bob Lamb of the Foundation for Inclusion, the answer lies at the intersection of science, story, and simulation. 

The session, “No Data, No Problem: Extracting Insights from Data Poor Environments,” began with a philosophical assertion: all data is qualitative, but some can be quantified. The speakers argued that the processes we use to extract insights from data are fundamentally influenced by our personal assumptions, interpretations, and biases, and that using data without considering those fundamentals can produce unhelpful insights. As an example, they cited an unnamed cross-national study of fragile states that committed several egregious data sins:

  1. It assumed that household data aggregated at the national level was reliable.
  2. It used an incoherent unit of analysis. Using a country-level metric in Somalia, for example, makes no sense because it ignored the qualitative differences between Somaliland and the rest of the country. 
  3. It ignored the complex web of interactions among several independent variables to produce pairwise correlation metrics that themselves made no sense. 

For Peteru and Lamb, the indiscriminate application of data analysis methods without understanding the forces behind the data is a failure of imagination. They described the Foundation for Inclusion’s approach to social issues, which is grounded in an appreciation for complex systems, and illustrated the point with a demonstration: when you pour water from a pitcher onto a table, the rate of water leaving the pitcher exactly matches the rate of water hitting the table. If you measured both and looked only at the data, the correlation would be 1, and you could conclude that the table was getting wet because water was leaving the pitcher. But what happens when there are unobserved intermediate steps? What if, for instance, the water was flowing into a cup on the table, which had to overflow before any water hit the table? Or what if the water was being poured into a balloon, which had to cross a certain threshold before bursting and wetting the table? The data in isolation would tell you very little about how the system actually worked.
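
A tiny simulation makes the point concrete (purely illustrative): with a direct flow the two series correlate perfectly, but adding an intermediate container that must fill before overflowing weakens the apparent relationship.

```python
import numpy as np

rng = np.random.default_rng(0)
pour_rate = rng.uniform(0.5, 1.5, size=200)  # water leaving the pitcher each step

# Case 1: water goes straight from the pitcher to the table.
table_direct = pour_rate.copy()

# Case 2: water first fills a cup that only overflows once it holds 50 units.
cup, table_buffered = 0.0, []
for amount in pour_rate:
    cup += amount
    overflow = max(0.0, cup - 50.0)
    table_buffered.append(overflow)
    cup -= overflow

print(np.corrcoef(pour_rate, table_direct)[0, 1])              # 1.0
print(np.corrcoef(pour_rate, np.array(table_buffered))[0, 1])  # well below 1
```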

What can you do in the absence of good data? Here, the Foundation for Inclusion turns to stories as a source of information. They argue that talking to domain experts, reviewing local media and gathering individual viewpoints can help by revealing patterns and allowing researchers to formulate potential causal structures. Of course, the further one gets from the empirics, the more uncertainty there must be. And that can be quantified and mitigated with sensitivity tests and the like. Peteru and Lamb’s point here was that even anecdotal information can give you enough to assemble a hypothesized system or set of systems that can then be explored and validated – by way of simulation.

Simulations were the final piece of the puzzle. With researchers gaining access to the hardware and computing knowledge necessary to create simulations of complex systems – systems based on information from the aforementioned stories – the speakers argued that simulations were an increasingly viable method of exploring stories and validating hypothesized causal systems. Of course, there is no one-size-fits-all: they discussed several types of simulations – from agent-based models to Monte Carlo models – as well as when each might be appropriate. For instance, health agencies today already use sophisticated simulations to forecast the spread of epidemics, where collecting sufficient data would simply be too slow to act upon. By simulating thousands of potential outcomes from varying key parameters, and systematically eliminating the models that produce undesirable outcomes or that rely on data with high levels of uncertainty, one could, in theory, be left with a handful of simulations whose parameters would be instructive.
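
A stripped-down sketch of that filtering logic (illustrative only, not the Foundation for Inclusion’s models): draw many parameter combinations, run a toy simulation for each, and keep only the runs whose outcomes fall in a plausible range.

```python
import random

def simulate(adoption_rate, dropout_rate, periods=12, population=1000):
    """Toy model: how many people are still engaged after a year."""
    engaged = 0.0
    for _ in range(periods):
        engaged += (population - engaged) * adoption_rate
        engaged -= engaged * dropout_rate
    return engaged

random.seed(1)
runs = []
for _ in range(10_000):
    params = {"adoption_rate": random.uniform(0.01, 0.3),
              "dropout_rate": random.uniform(0.0, 0.2)}
    runs.append({**params, "engaged": simulate(**params)})

# Keep only parameter combinations that produce a plausible outcome
# (here, at least 400 people still engaged), then inspect what they share.
plausible = [r for r in runs if r["engaged"] >= 400]
print(len(plausible), "of", len(runs), "parameter draws survive the filter")
```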

The purpose of data collection is to produce useful, actionable insights. Thus, in its absence, the Foundation for Inclusion argues that the careful application of science, story, and simulation can pave the way forward.

Collecting Data in Hard to Reach Places

Written by Stephanie Jamilla

By virtue of operating in the international development sphere, we oftentimes work in areas that are remote, isolated, and have little or no internet connection. However, as the presenters from Medic Mobile and Vitamin Angels (VA) argued in their talk, “Data Approaches in Hard-to-Reach Places,” it is possible to overcome these barriers and use technology to collect much-needed program data. The session was split neatly into three parts: a presentation by Mourice Barasa, the Impact Lead of Medic Mobile in Kenya, a presentation by Jamie Frederick, M&E Manager, and Samantha Serrano, M&E Specialist, from Vitamin Angels, and an activity for attendees. 

While both presentations discussed data collection in a global health context and used phone applications as the means of data collection, they illustrated two different situations. Barasa focused on the community health app that Medic Mobile is implementing. It is used by community health teams to better manage their health workers and to ease the process of providing care. The app serves many purposes. For example, it is a communication tool that connects managers and health workers, as well as a performance management tool that tracks the progress of health workers and the types of cases they have worked on. The overall idea is to provide near real-time (NRT) data so that health teams have up-to-date information about who has been seen, what patients need, whether patients need to be seen in a health facility, and so on. Medic Mobile implemented the app with the Ministry of Health in Siaya, Kenya, and currently has 1,700 health workers using the tools. While the uptake of the app is impressive, Barasa explained various barriers that keep the app from producing true NRT data. Health teams rely on the timestamp sent with every entry to know when a household was visited by a health worker, but a health worker may wait to upload an entry and use the default time on their phone rather than the actual time of the visit. Poor connectivity, short battery life, and internet subscription costs are also concerns. Medic Mobile is working on improvements such as exploring offline servers, finding alternatives to phone charging, and centrally billing mobile users, which has already reduced billing from $2,000/month to around $100.
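
One simple way to surface the timestamp problem described above (an illustration, not Medic Mobile’s actual approach) is to compare the visit time reported on the device with the time the record reached the server and flag large gaps:

```python
from datetime import datetime, timedelta

# Hypothetical records: the timestamp entered on the phone vs. when it synced.
records = [
    {"household": "HH-101", "visited_at": "2019-09-01T09:30", "synced_at": "2019-09-01T18:05"},
    {"household": "HH-102", "visited_at": "2019-09-01T10:00", "synced_at": "2019-09-06T07:40"},
]

def flag_delayed(records, max_lag=timedelta(hours=48)):
    """Return households whose data arrived long after the reported visit."""
    flagged = []
    for r in records:
        lag = datetime.fromisoformat(r["synced_at"]) - datetime.fromisoformat(r["visited_at"])
        if lag > max_lag:
            flagged.append((r["household"], lag))
    return flagged

print(flag_delayed(records))  # flags HH-102, which synced almost 5 days after the visit
```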

Frederick and Serrano expressed similar difficulties in their presentation — particularly about the timeliness of data upload. However, their situation was different: VA used their app specifically for M&E purposes. The organization wanted to validate the extent to which it was reaching its target population, delivering services at the best-practice standard, and truly filling the 30% coverage gap that national health services miss. Their monitoring design consisted of taking a random sample of 20% of their field partners and using ODK Collect with an ONA-programmed survey (which is cloud-based) on Android devices. VA trained 30 monitors to cover the countries in Latin America and the Caribbean, Africa, and Asia in which it had partners. While the VA home office was able to use the data collected on the app well throughout the cycle from data collection to action, field partners were having trouble with the data in the analysis, reporting, and action stages. Hence, a potential solution was piloted with three partners in Latin America: VA adjusted the surveys in ONA so that they would display a simple report with internal calculations based on the survey data. This report was generated in NRT, allowing partners to access the data quickly, and VA formatted it so that the data was easily consumable. VA also made sure to gather feedback from partners about the usefulness of the monitoring results to ensure that partners also valued collecting this data.
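
The “internal calculations” idea can be pictured in a few lines (hypothetical survey fields, not VA’s actual ONA configuration): responses are rolled up into a simple coverage figure that a partner can read directly:

```python
# Hypothetical monitoring responses collected on the Android devices.
responses = [
    {"partner": "P-01", "child_received_vitamin_a": True},
    {"partner": "P-01", "child_received_vitamin_a": False},
    {"partner": "P-01", "child_received_vitamin_a": True},
]

# Roll the raw responses up into the kind of figure a partner report would show.
covered = sum(r["child_received_vitamin_a"] for r in responses)
print(f"P-01 coverage: {covered / len(responses):.0%}")  # 67%
```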

These two presentations reinforced that while there is the ability to collect data in difficult places, there will always be barriers as well, whether they are technical or human-related. The group discussion activity revealed other challenges. The presenters prompted the audience with four questions:

  1. What are data collection approaches you have used in hard-to-reach places?
  2. What have been some challenges with these approaches?
  3. How have the data been used?
  4. What have been some challenges with use of these data?

In my group of five, we talked mainly about hindrances to data collection in our own work, such as the cost of some technologies. Another challenge that came up was the gap between having data and visualizing it well, and ensuring that the data we collect actually translates into action.

Overall, the session helped me think through how important it is to consider potential challenges in the initial design of the data collection and analysis process. The experiences of Medic Mobile and Vitamin Angels demonstrated what difficulties we all will face when collecting data in hard-to-reach places but also that those difficulties can ultimately be overcome.

Using Cutting-Edge Technology to Measure Global Perceptions of Gender-Based Violence Online

By Jonathan Tan, Data Science Intern at the World Bank

Online communities have emerged as a powerful tool for giving a voice to the marginalized. However, they also open up opportunities for harmful behaviors that exist offline to be amplified in scale and reach. Online violence, in particular, is disproportionately targeted at women and minority groups. How do we measure these behaviors and their impact on online communities? And how can donors and implementers use that information to develop programs addressing this violence? In response to these questions, Paulina Rudnicka of the American Bar Association Rule of Law Initiative (ABA ROLI), Chai Senoy of the United States Agency for International Development (USAID), and Mercedes Fogarassy of RIWI Corp. entered into a public-private partnership to administer a large-scale online survey. Using RIWI’s global trend tracking technology, the survey received over 40,000 complete responses from respondents in 15 countries and featured 17 questions on the “nature, prevalence, impacts, and responses to GBV online”.

What is GBV? The speakers specifically define Gender-Based Violence (GBV) as “the use of the Internet to engage in activities that result in harm or suffering to a person or group of people online or offline because of their gender.” They noted that the definition was based heavily on text from the 1993 UN Declaration on the Elimination of Violence Against Women. They also noted that the declaration, and the human rights standards around it, predated the emergence of GBV online. Many online spaces have been designed with male users in mind by default, and consequently, the needs of female users have been systematically ignored.  

Why is it important? GBV online is often an extension of offline GBV that has been prevalent throughout history: it has roots in sexist behavior, reinforces existing gender inequalities, and is often trivialized by law enforcement officials. However, the online medium allows GBV to be easily scalable – online GBV exists wherever the internet reaches – and replicable, leading to disproportionately large impacts on targeted individuals. Beyond its direct impacts (e.g., cyberbullying, blackmail, extortion, doxing), it often has persistent emotional and psychological impacts on its victims. Further, GBV often has a chilling effect on freedom of expression in terms of silencing and self-censorship, making its prevalence and impact particularly difficult to measure.

What can we do? In order to formulate an effective response to GBV online, we need good data on people’s experiences online. It needs to be comprehensive, gender-disaggregated, and collected at national, regional, and global levels. On a broader level, states and firms can proactively prevent GBV online through human rights due diligence. 

Why was the survey special? The survey, with over 40,000 completed responses and 170,000 unique engagements in 15 countries, was the largest study on GBV online to date. The online-only survey was administered to any respondent with internet access; whereas most prior surveys focused primarily on respondents from developed countries, this survey focused on respondents from developing countries. Speed was a notable factor – the entire survey was completed within a week. Further, given the sensitive nature of the subject matter, respondents’ data privacy was prioritized: personal identifying information (PII) was not captured, and no history of having answered the anonymous survey was accessible to respondents after submission. 

How was this accomplished? RIWI was used to conduct the survey. RIWI takes advantage of inactive or abandoned registered, non-trademarked domains. When a user inadvertently lands on one of these domains, they have a random chance of stumbling into a RIWI survey. The user can choose to participate, while remaining anonymous. The respondent’s country, region, or sub-city level is auto-detected with precision through RIWI to deliver the survey in the appropriate language. RIWI provided the research team with correlations of significance and all unweighted and weighted data for validation.

What did the survey find? Among the most salient findings: 

  • 40% of respondents had not felt personally safe from harassment and violence while online; of these, 44% had experienced online violence due to their gender. 
  • Of the surveyed countries, India and Uganda reported the highest rates of GBV online (13% of all respondents), while Kazakhstan reported the lowest rates (6%). 
  • 42% of respondents reported not taking safety precautions online, such as customizing privacy settings on apps, turning off features like “share my location”, and being conscious about not sharing personally identifiable information online. 
  • 85% of respondents that had experienced GBV online reported subsequently experiencing fear for their own safety, fear for someone close to them, feeling anxiety or depression, or reducing time online. 

What’s next? Subsequent rounds of the survey will include more than the original 15 countries. Further, since the original survey did not collect personal identifying information from respondents, subsequent rounds will supplement the original questions by collecting additional qualitative data.

Living Our Vision: Applying the Principles of Digital Development as an Evaluative Methodology

by: Sylvia Otieno, MPA candidate at George Washington University and Consultant at the World Bank’s IEG; and Allana Nelson, Senior Manager for the Digital Principles at DIAL

For nearly a decade, the Principles of Digital Development (Digital Principles)  have served to guide practitioners in developing and implementing digital tools in their programming. The plenary session at MERL Tech DC 2019 titled “Living Our Vision: Applying the Principles of Digital Development as an Evaluative Methodology” introduced attendees to four evaluation tools that have been developed to help organizations incorporate the Digital Principles into their design, planning, and assessments. 

Laura Walker MacDonald explaining the Monitoring and Evaluation Framework. (Photo by Christopher Neu)

This panel – organized and moderated by Allana Nelson, Senior Manager for the Digital Principles stewardship at the Digital Impact Alliance (DIAL) – highlighted digital development frameworks and tools developed by SIMLab, USAID in collaboration with John Snow Inc., Digital Impact Alliance (DIAL) in collaboration with TechChange, and the Response Innovation Lab. These frameworks and toolkits were built on the good practice guidance provided by the Principles for Digital Development. They are intended to assist development practitioners to be more thoughtful about how they use technology and digital innovations in their programs and organizations. Furthermore, the toolkits assist organizations with building evidence to inform program development. 

Laura Walker McDonald, Senior Director for Insights and Impact at DIAL, presented the Monitoring and Evaluation Framework (developed during her time at SIMLab), which assists practitioners in measuring the impact of their work and the contribution of inclusive technologies to their impact and outcomes. This Monitoring and Evaluation Framework was developed out of the need for more evidence of the successes and failures of technology for social change. “We have almost no evidence of how innovation is brought to scale. This work is trying to reflect publicly the practice of sharing learnings and evaluations. Technology and development isn’t as good as it could be because of this lack of evidence,” McDonald said. The Principles for Digital Development provide the Framework’s benchmarks. McDonald continues to refine this Framework based on feedback from community experts, and she welcomes input that can be shared through this document.

Christopher Neu, COO of TechChange, introduced the new, cross-sector Digital Principles Maturity Matrix Tool for Proposal Evaluation that his team developed on behalf of DIAL. The Maturity Matrix tool helps donors and implementers assess how the Digital Principles are planned to be used during the program proposal creation process. Donors may use the tool to evaluate proposal responses to their funding opportunities, and implementers may use the tool as they write their proposals. “This is a tool to give donors and implementers a way to talk about the Digital Principles in their work. This is the beginning of the process, not the end,” Neu said during the session. Users of the Maturity Matrix Tool score themselves on a rating between one and three against metrics that span each of the nine Digital Principles and the four stages of the Digital Principles project lifecycle. A program scores one when it loosely incorporates the identified activity or action into proposals and implementation. A score of two indicates that the program is clearly in line with best practices or that the proposal’s writers have at least thought considerably about them. Those who incorporate the Digital Principles on a deeper level and provide an action plan to increase engagement earn a score of three. It is important to note that not every project will require the same level of Digital Principles maturity, and not every Digital Principle may be required in a program. The scores are intended to provide donors and organizations evidence that they are making the best and most responsible investment in technology. 
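
As a rough sketch of how such a rubric can be tallied (a hypothetical structure and summary, not the actual Maturity Matrix Tool), scores can be recorded per Principle and lifecycle stage and then summarized:

```python
# Scores of 1-3 for a hypothetical proposal, keyed by Digital Principle and by
# a simplified lifecycle stage label (only two Principles shown for brevity).
scores = {
    "Design with the User":     {"analyze": 3, "design": 2, "implement": 2, "evaluate": 1},
    "Build for Sustainability": {"analyze": 2, "design": 2, "implement": 1, "evaluate": 1},
}

# Averaging per Principle is one simple way to summarize the matrix.
for principle, by_stage in scores.items():
    avg = sum(by_stage.values()) / len(by_stage)
    print(f"{principle}: average maturity {avg:.1f} / 3")
```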

Steve Ollis, Senior Digital Health Advisor at John Snow Inc., presented the Digital Health Investment Review Tool (DHIRT), which helps donors investing in digital health programs make informed decisions about their funding. The tool asks donors to adhere to the Digital Principles and to the Principles of Donor Alignment for Digital Health (Digital Investment Principles), which are also based on the Digital Principles. When implementing this tool, practitioners can assess implementer proposals across 12 criteria. After receiving a score from one to five (one being nascent and five being optimized), organizations can better assess how effectively they incorporate the Digital Principles and other best practices (including change management) into their project proposals. 

Max Vielle, Global Director of Response Innovation Lab, introduced the Innovation Evidence Toolkit, which helps technology innovators in the humanitarian sector build evidence to thoughtfully develop and assess their prototypes and pilots. “We wanted to build a range of tools for implementors to assess their ability to scale the project,” Vielle said of the toolkit. Additionally, the tool assists innovators in determining the scalability of their technologies. The Innovation Evidence Toolkit helps humanitarian innovators and social entrepreneurs think through how they use technology when developing, piloting, and scaling their projects. “We want to remove the barriers for non-humanitarian actors to act in humanitarian responses to get services to people who need them,” Vielle said. This accessible toolkit can be used by organizations with varying levels of capacity and is available offline for those working in low-connectivity environments. 

Participants discuss the use of different tools for evaluating the Principles. (Photo by Christopher Neu)

Evidence-based decision making is key to improving the use of technologies in the development industry. The coupling of the Principles of Digital Development and evaluation methodologies will assist development practitioners, donors, and innovators not only in building evidence, but also in effectively implementing programs that align with the Digital Principles.