MERL Tech News

Measuring Local Ownership in International Development Projects

by Rachel Dickinson, Technical Officer for Research and Learning, Root Change

“Localization”, measuring local ownership, USAID’s Journey to Self-Reliance… We’re all talking about these ideas and policies, and trying to figure out how to incorporate them in our global development projects, but how do we know if we are making progress on these goals? What do we need to measure?

Root Change and Keystone Accountability, under a recent USAID Local Works research grant, created the Pando Localization Learning System (LLS) as both a tool and a methodology for measuring and tracking local ownership within projects in real time. Pando LLS is an online platform that uses network maps and feedback surveys to assess system health, power dynamics, and collaboration within a local development system. It gives development practitioners simple, easy-to-use visuals and indicators, which can be shared with stakeholders and used to identify opportunities for strengthening local development systems.

We launched the Pando platform at MERL Tech DC in 2018, and this year we wanted to share (and get reactions to) a new set of localization measures and a reflective approach we have embedded in the tool. 

Analysis of local ownership on Pando LLS is organized around four key measures. Under each, we have defined a series of indicators drawing on both social network analysis (SNA) and feedback survey questions. For those interested in geeking out on the indicators themselves, visit our White Paper on the Pando Localization Learning System (LLS), but the four measures are (an illustrative code sketch follows the list):

1) Leadership measures whether local actors can voice concerns, set priorities and define success in our projects. It measures whether we, as outsiders, are soliciting input from local actors. In other words, it looks at whether project design and implementation are bottom-up.

2) Mutuality measures whether strong reciprocal, or two-way, relationships exist. It measures whether we, as external actors, respond to and act on feedback from local actors. It’s the respect and trust required for success in any interaction. 

3) Connectivity measures whether the local system motivates and incentivizes local actors to work together to solve problems. It measures whether we, as program implementers, promote collaboration and connection between local actors. It asks whether the local system is actually improving, and if we are playing the right roles. 

4) Financing measures whether dependency on external financial resources is decreasing, and local financial opportunities are becoming stronger. It measures whether we, as outsiders, are preparing local organizations to be more resilient and adaptive. It explores the timeless question of money and resources. 
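To make the measures more concrete, here is a minimal sketch of how two of them might be operationalized from a mapped network, using the open source networkx library. The data and calculations are our own hypothetical illustration, not the actual Pando LLS indicator definitions (those are in the White Paper).

# Illustrative only: hypothetical relationships, not the Pando LLS indicator definitions.
import networkx as nx

# Directed edges mean "actor A reported working with actor B".
G = nx.DiGraph([
    ("Local NGO A", "Local NGO B"),
    ("Local NGO B", "Local NGO A"),   # reciprocated tie, counts toward mutuality
    ("INGO", "Local NGO A"),
    ("Local NGO C", "Local Government"),
])

# Mutuality: share of relationships that are reciprocated (two-way).
mutuality = nx.reciprocity(G)

# Connectivity: how densely actors in the system are connected to one another.
connectivity = nx.density(G)

print(f"Mutuality (share of two-way ties): {mutuality:.2f}")
print(f"Connectivity (network density): {connectivity:.3f}")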

Did you notice how each of these measures assesses not only local actors and their system, but also our role as outsiders? This takes us to the reflective approach.

The Pando LLS approach emphasizes dialogue with system actors and self-reflection by development practitioners. It pushes us to question our assumptions about the systems where we work and tasks us with developing project activities and M&E plans that involve local actors. The theories behind the approach can also be found in our White Paper, but here are the basic steps: 

  • Listen to local actors by inviting them to map their relationships, share feedback, and engage in dialogue about the results;
  • Co-create solutions and learn through short-term experiments that aim to improve relationships and strengthen the local system;
  • Incorporate what’s working back into development projects and celebrate failures as progress; and 
  • Repeat the listen, reflect, and adapt cycles 3-4 times a year to ensure each one is small and manageable.

What do you think of this method for measuring and promoting local ownership? Do we have the measures right? How are you measuring local ownership in your work? Would you be interested in testing the Pando LLS approach together? We’d love to hear from you! Email me at rdickinson@rootchange.org to share your feedback, questions, or ideas! 

Tech Is Easy, People Are Hard: Behavioral Design Considerations to Improve Mobile Engagement

By Cathy Richards

Mobile platforms are often a go-to when it comes to monitoring and evaluation in developing communities and markets. One provider of these platforms, Echo Mobile, is often asked, "What sort of response rate can I expect for my SMS survey?" or, "What percentage of my audience will engage with my IVR initiative?" In this session at MERL Tech DC in September, Boris Maguire, CEO of Echo Mobile, walked participants through case studies showing that the answer largely depends on the project's individual context and that there is ultimately no one-size-fits-all solution.

Echo Mobile is a platform that allows users to have powerful conversations over SMS, voice, and USSD for purposes such as monitoring and evaluation, field reporting, feedback, information access, market research and customer service. The platform’s user segments include consumer goods (20%), education and health (16%), M&E/Research (15%), agriculture and conservation (14%), finance and consulting (13%) and media and advocacy (7%). Its user types are primarily business (35%), non-profit (31%) and social enterprises (29%). 

The team at Echo Mobile has learned that regardless of the chosen mobile engagement technology, achieving MERL goals often rests on the design and psychology behind the mobile engagement strategy – the content, tone, language, and timing of communications and the underlying incentives of the audience. More often than not, the most difficult parts in mobile engagement are the human aspects (psychological, emotional, strategic) rather than the technological implementation. 

Because of this, Echo Mobile chose to dive deeper into the factors they believed influenced mobile engagement the most. These included:

  • Responder characteristics: Who are you trying to engage with? It’s important to figure out who you are engaging with and tailor your strategy to them.
  • Social capital and trust: Do these responders have a reason to trust you? What is the nature of your relationship with them?
  • Style, tone & content: What specific words are you using to engage with them? Are you showing that you want to know more and that you care about them?
  • Convenience: What is the level of effort, time and resources that responders have to invest in order to engage with your organization?
  • Incentives/relevance: Do they have a reason to engage with your organization? Do they think you’ll understand them better? Will they get more of what they need?

Through informal analysis, Echo Mobile found that the factors most highly correlated with high rates of engagement are the time of day at which recipients receive the messaging, followed by reminders to engage. Financial incentives were found to be the least effective. However, case studies show that context is ultimately the most important component of a mobile engagement strategy.

In the first case study, a BBOXX team in Rwanda sought to understand the welfare impact of solar consumption amongst their customers via SMS surveys. They first ran a set of small experiments, modifying survey financial incentives, timing, length, and language to see which moved the needle on response rates, and compared the results to what customers told them in focus groups. In this case, Echo Mobile found that reminders in the morning and surveys in the evening nearly doubled their response rates. Whether respondents were asked to opt in or dropped straight into the survey also affected response rates.
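For readers who want to run similar comparisons, here is a minimal sketch of how two survey-timing arms could be compared with a two-proportion z-test. The counts are invented for illustration; this is not BBOXX's or Echo Mobile's data or analysis code.

# Hypothetical A/B comparison of SMS survey timing; all numbers are invented.
from statsmodels.stats.proportion import proportions_ztest

responses = [180, 95]   # completed surveys: evening arm vs. midday arm
sent = [500, 500]       # surveys sent in each arm

stat, p_value = proportions_ztest(count=responses, nobs=sent)

print(f"Evening response rate: {responses[0] / sent[0]:.1%}")
print(f"Midday response rate:  {responses[1] / sent[1]:.1%}")
print(f"p-value for the difference: {p_value:.4f}")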

In the second case study, a UN agency nearly doubled SMS engagement rates from 40,000 Kenyan teachers by dropping financial incentives and tweaking the structure, tone, and content of their messaging. In this case, incentive amounts once again did little to increase engagement; instead, the choice between opting in and diving straight in, reminders, and content and tone made the biggest difference.

In short, Echo Mobile’s biggest takeaways are that:

  • Convenience is king
  • One can harass but not bore
  • Financial incentives are often overrated

Several participants also shared their experiences with mobile engagement and cited factors such as survey length and consent as important. 

Visualizing Your Network for Adaptive Program Decision Making

By Alexis Banks, Jennifer Himmelstein, and Rachel Dickinson

Social network analysis (SNA) is a powerful tool for understanding the systems of organizations and institutions in which your development work is embedded. It can be used to create interventions that are responsive to local needs and to measure systems change over time. But, what does SNA really look like in practice? In what ways could it be used to improve your work? Those are the questions we tackled in our recent MERL Tech session, Visualizing Your Network for Adaptive Program Decision Making. ACDI/VOCA and Root Change teamed up to introduce SNA, highlight examples from our work, and share some basic questions to help you get started with this approach.

MERL Tech 2019 participants working together to apply SNA to a program.

SNA is the process of mapping and measuring relationships and information flows between people, groups, organizations, and more. Using key SNA metrics enables us to answer important questions about the systems where we work. Common SNA metrics include the following (learn more here; a code sketch follows the list):

  • Reachability, which helps us determine if one actor, perhaps a local NGO, can access another actor, such as a local government;
  • Distance, which is used to determine how many steps, or relationships, there are separating two actors;
  • Degree centrality, which is used to understand the role that a single actor, such as an international NGO, plays in a system by looking at the number of connections with that organization;
  • Betweenness, which enables us to identify brokers or “bridges” within networks by identifying actors that lie on the shortest path between others; and
  • Change Over Time, which allows us to see how organizations and relationships within a system have evolved.

Using betweenness to address bottlenecks.
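For those who want to experiment, here is a minimal sketch of these metrics computed with the open source networkx library. The actors and relationships are hypothetical, not drawn from either organization's data.

# Hypothetical system map; directed edges mean "reported a relationship with".
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("Local NGO A", "INGO"),
    ("Local NGO B", "INGO"),
    ("INGO", "Local Government"),
    ("Local NGO A", "Local NGO B"),
])

# Reachability: can Local NGO A reach the Local Government through the network?
print(nx.has_path(G, "Local NGO A", "Local Government"))              # True

# Distance: how many steps separate the two actors?
print(nx.shortest_path_length(G, "Local NGO A", "Local Government"))  # 2

# Degree centrality: how connected each actor is relative to the rest of the system.
print(nx.degree_centrality(G))

# Betweenness: which actors sit on the shortest paths between others (potential brokers or bottlenecks).
print(nx.betweenness_centrality(G))

# Change over time: repeat the same calculations on maps from two points in time and compare.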

SNA in the Program Cycle

SNA can be used throughout the design, implementation, and evaluation phases of the program cycle.

Design: Teams at Root Change and ACDI/VOCA use SNA in the design phase of a program to identify initial partners and develop an early understanding of a system–how organizations do or do not work together, what barriers are preventing collaboration, and what strategies can be used to strengthen the system.

As part of the USAID Local Works program, Root Change worked with the USAID mission in Bosnia and Herzegovina (BiH) to launch a participatory network map that identified over 1,000 organizations working in community development in BiH, many of which had been previously unknown to the mission. It also provided the foundation for a dialogue with system actors about the challenges facing BiH civil society.

To inform project design, ACDI/VOCA's Feed the Future Tanzania NAFAKA II Activity, funded by USAID, conducted a network analysis to understand the networks associated with village-based agricultural advisors (VBAAs)–what services they were already offering to farmers, which had the most linkages to rural actors, which actors were serving as bottlenecks, and more. This helped the project identify which VBAAs to work with through small grants and technical assistance (e.g. key actors), and what additional linkages needed to be built between VBAAs and other types of actors.

NAFAKA II Tanzania

Implementation: We also use SNA throughout program implementation to monitor system growth, increase collaboration, and inform learning and program design adaptation. ACDI/VOCA's USAID/Honduras Transforming Market Systems Activity uses network analysis as a tool to track business relationships created through primary partners. For example, one such primary partner is the Honduran chamber of tourism, which facilitates business relationships through group training workshops and other types of technical assistance. The project can then follow up on these new relationships to gather data on indirect outcomes (e.g. jobs created, sales, and more).

Root Change used SNA throughout implementation of the USAID funded Strengthening Advocacy and Civic Engagement (SACE) program in Nigeria. Over five years, more than 1,300 organizations and 2,000 relationships across 17 advocacy issue areas were identified and tracked. Nigerian organizations came together every six months to update the map and use it to form meaningful partnerships, coordinate advocacy strategies, and hold the government accountable.

SACE participants explore a hand drawn network map.

Evaluating Impact: Finally, our organizations use SNA to measure results at the mid-term or end of project implementation. In Kenya, Root Change developed the capacity of Aga Khan Foundation (AKF) staff to carry out a baseline, and later an end-of-project network analysis of the relationships between youth and organizations providing employment, education, and entrepreneurship support. The latter analysis enabled AKF to evaluate growth in the network and the extent to which gaps identified in the baseline had been addressed.

AKF’s Youth Opportunities Map in Kenya

The Feed the Future Ghana Agricultural Development and Value Chain Enhancement II (ADVANCE II) Project, implemented by ACDI/VOCA and funded by USAID, leveraged existing database data to demonstrate the outgrower business networks that were established as a result of the project. This was an important way of demonstrating one of ADVANCE II's major outcomes–creating a network of private service providers that serve as resources for inputs, financing, and training, as well as hubs for aggregating crops for sales.

Approaches to SNA

There is a plethora of tools to help you incorporate SNA into your work. These range from bespoke software custom-built for each organization to free, open source applications.

Root Change uses Pando, a web-based, participatory tool that uses relationship surveys to generate real-time network maps and basic SNA metrics. ACDI/VOCA, on the other hand, uses unique identifiers for individuals and organizations in its routine monitoring and evaluation processes to track relational information for these actors (e.g. cascaded trainings, financing given, farmers' sales to a buyer, etc.), analyzed with an in-house SNA tool.
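As a rough sketch of the second approach, the snippet below shows how relationship records keyed to unique actor identifiers, of the kind collected in routine monitoring, could be turned into one network per reporting period. The field names and data are hypothetical, and pandas plus networkx stand in for either organization's in-house tooling.

# Hypothetical routine-monitoring records keyed by unique actor IDs.
import pandas as pd
import networkx as nx

records = pd.DataFrame({
    "source_id": ["ORG-001", "ORG-001", "ORG-002", "ORG-003"],
    "target_id": ["ORG-002", "ORG-003", "ORG-003", "ORG-004"],
    "relationship": ["training", "financing", "sales", "training"],
    "quarter": ["2019Q1", "2019Q1", "2019Q2", "2019Q2"],
})

# Build one network per reporting period to see how the system changes over time.
for quarter, rows in records.groupby("quarter"):
    G = nx.from_pandas_edgelist(rows, source="source_id", target="target_id",
                                edge_attr="relationship", create_using=nx.DiGraph)
    print(quarter, "actors:", G.number_of_nodes(), "relationships:", G.number_of_edges())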

Applying SNA to Your Work

What do you think? We hope we've piqued your interest! Using the examples above, take some time to consider ways that SNA could be embedded into the design, implementation, or evaluation stages of your work, using this worksheet. If you get stuck, feel free to reach out (Alexis Banks, abanks@rootchange.org; Rachel Dickinson, rdickinson@rootchange.org; Jennifer Himmelstein, JHimmelstein@acdivoca.org)!

What’s Happening with Tech and MERL?

by Linda Raftree, Independent Consultant and MERL Tech organizer

Back in 2014, the humanitarian and development sectors were in the heyday of excitement over innovation and Information and Communication Technologies for Development (ICT4D). The role of ICTs specifically for monitoring, evaluation, research and learning (aka “MERL Tech“) had not been systematized (as far as I know), and it was unclear whether there actually was “a field.” I had the privilege of writing a discussion paper with Michael Bamberger to explore how and why new technologies were being tested and used in the different steps of a traditional planning, monitoring and evaluation cycle. (See graphic 1 below, from our paper).

The approaches highlighted in 2014 focused on mobile phones, for example: text messages (SMS), mobile data gathering, use of mobiles for photos and recording, and mapping with dedicated handheld global positioning system (GPS) devices or GPS built into mobile phones. Promising technologies included tablets, which were only beginning to be used for M&E; "the cloud," which enabled easier updating of software and applications; remote sensing and satellite imagery; dashboards; and online software that helped evaluators do their work more easily. Social media was also really taking off in 2014. It was seen as a potential way to monitor discussions among program participants and gather their feedback, and as an underutilized tool for wider dissemination of evaluation results and learning. Real-time data, big data, and feedback loops were emerging as ways that program monitoring could be improved and quicker adaptation could happen.

In our paper, we outlined five main challenges for the use of ICTs for M&E: selectivity bias; technology- or tool-driven M&E processes; over-reliance on digital data and remotely collected data; low institutional capacity and resistance to change; and privacy and protection. We also suggested key areas to consider when integrating ICTs into M&E: quality M&E planning; design validity; value-add (or not) of ICTs; using the right combination of tools; adapting and testing new processes before roll-out; technology access and inclusion; motivation to use ICTs; privacy and protection; unintended consequences; local capacity; measuring what matters (not just what the tech allows you to measure); and effectively using and sharing M&E information and learning.

We concluded that:

  • The field of ICTs in M&E is emerging and activity is happening at multiple levels and with a wide range of tools and approaches and actors. 
  • The field needs more documentation on the utility and impact of ICTs for M&E. 
  • Pressure to show impact may open up space for testing new M&E approaches. 
  • A number of pitfalls need to be avoided when designing an evaluation plan that involves ICTs. 
  • Investment in the development, application and evaluation of new M&E methods could help evaluators and organizations adapt their approaches throughout the entire program cycle, making them more flexible and adjusted to the complex environments in which development initiatives and M&E take place.

Where are we now: MERL Tech in 2019

Much has happened globally over the past five years in the wider field of technology, communications, infrastructure, and society, and these changes have influenced the MERL Tech space. Our 2014 focus on basic mobile phones, SMS, mobile surveys, mapping, and crowdsourcing might now appear quaint, considering that worldwide access to smartphones and the Internet has expanded beyond the expectations of many. We know that access is not evenly distributed, but the fact that more and more people are getting online cannot be disputed. Some MERL practitioners are using advanced artificial intelligence, machine learning, biometrics, and sentiment analysis in their work. And as smartphone and Internet use continue to grow, more data will be produced by people around the world. The way that MERL practitioners access and use data will likely continue to shift, and the composition of MERL teams and their required skillsets will also change.

The excitement over innovation and new technologies seen in 2014 could also be seen as naive, however, considering some of the negative consequences that have emerged, for example social media inspired violence (such as that in Myanmar), election and political interference through the Internet, misinformation and disinformation, and the race to the bottom through the online “gig economy.”

In this changing context, a team of MERL Tech practitioners (both enthusiasts and skeptics) embarked on a second round of research in order to try to provide an updated “State of the Field” for MERL Tech that looks at changes in the space between 2014 and 2019.

Based on MERL Tech conferences and wider conversations in the MERL Tech space, we identified three general waves of technology emergence in MERL:

  • First wave: Tech for Traditional MERL: Use of technology (including mobile phones, satellites, and increasingly sophisticated databases) to do 'what we've always done,' with a focus on digital data collection and management. For these uses of "MERL Tech" there is a growing evidence base. 
  • Second wave: Big Data: Exploration of big data and data science for MERL purposes. While plenty has been written about big data in other sectors, the literature on the use of big data and data science for MERL is somewhat limited, and it focuses more on potential than actual use. 
  • Third wave: Emerging approaches: Technologies and approaches that generate new sources and forms of data; offer different modalities of data collection; provide new ways to store and organize data; and provide new techniques for data processing and analysis. The potential of these has been explored, but there is little evidence base to be found on their actual use for MERL. 

We’ll be doing a few sessions at the American Evaluation Association conference this week to share what we’ve been finding in our research. Please join us if you’ll be attending the conference!

Session Details:

Thursday, Nov 14, 2.45-3.30pm: Room CC101D

Friday, Nov 15, 3.30-4.15pm: Room CC101D

Saturday, Nov 16, 10.15-11am: Room CC200DE

Practicing Safe Monitoring and Evaluation in the 21st Century

By Stephen Porter. Adapted from the original post published here.

Monitoring and evaluation practice can do harm. It can harm:

  • the environment by prioritizing economic gain over species that have no voice
  • people who are invisible to us when we are in a position of power
  • by asking for information that can then be misused.

In the quest for understanding What Works, the focus is often too narrowly on program goals rather than the safety of people. A classic example in the environmental domain is the use of DDT: “promoted as a wonder-chemical, the simple solution to pest problems large and small. Today, nearly 40 years after DDT was banned in the U.S., we continue to live with its long-lasting effects.” The original evaluation of its effects had failed to identify harm and emphasized its benefits. Only when harm to the ecosystem became more apparent was evidence presented in Rachel Carson’s book Silent Spring. We should not have to wait for failure to be so apparent before evaluating for harm.

Join me, Veronica Olazabal, Rodney Hopson, Dugan Fraser and Linda Raftree, for a session on “Institutionalizing Doing no Harm in Monitoring and Evaluation” on Thursday, Nov 14, 2019, 8-9am, Room CC M100 H, at the American Evaluation Association Conference in Minneapolis.

Ethical standards have been developed for evaluators, which are discussed at conferences and included in professional training. Yet institutional monitoring and evaluation practices still struggle to fully get to grips with the reality of harm amid the pressure to report results. If we want monitoring and evaluation to be safer for the 21st Century, we need to shift from training and evaluator-to-evaluator discussions to changing institutional practices.

At a workshop convened by Oxfam and the Rockefeller Foundation in 2019, we sought to identify core issues that could cause harm and to get to grips with areas where institutions need to change practices. The workshop brought together partners from UN agencies, philanthropies, research organizations and NGOs, and sought to give substance to these issues. One participant noted that though the UNEG Norms and Standards and UNDP's evaluation policy are designed to make evaluation safe, in practice there is little consideration given to capturing or understanding the unintended or perverse consequences of programs or policies. The workshop explored this and other issues and identified three areas of practice that could help to reframe institutional monitoring and evaluation in a practical manner.

1. Data rights, privacy and protection: 

In working on rights in the 21st Century, data and information are some of the most important 'levers' pulled to harm and disadvantage people. Oxfam has had a Responsible Data in Program policy in place since 2015, which goes some way towards recognizing this. But we know we need to implement data privacy and protection measures more fully in our work.

At Oxfam, work is continuing to build a rights-based approach, which already includes aligned confederation-wide Data Protection Policies; implementation of responsible data management policies and practices; and other tools aligned with the Responsible Data Policy and European privacy law, including a responsible data training pack.

Planned and future work includes stronger governance, standardized baseline measures of privacy and information security, and communications, guidance, and change management. This includes changes in evaluation protocols related to how we assess risk to the people we work with, who gets access to the data, and how we ensure consent for how the data will be used.

This is a start, but consistent implementation is hard, and if we aren't competent at operating the controls within our own reach, it becomes harder to call others out when they cause harm by misusing theirs.
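One small, concrete example of the kind of control implied here is pseudonymizing personal identifiers before a dataset leaves the survey team. The sketch below is a generic illustration, not Oxfam's actual protocol; it uses a keyed hash so records can still be linked across survey rounds without storing names.

# Generic illustration of pseudonymizing identifiers before sharing M&E data.
# Not any organization's actual protocol; the secret key must be stored separately from the data.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-long-random-secret-kept-out-of-the-dataset"

def pseudonymize(identifier: str) -> str:
    """Return a stable pseudonym so records can be linked across rounds without names."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"name": "Asha Example", "village": "Example Village", "response": "..."}
shared_record = {"respondent_id": pseudonymize(record["name"]), "response": record["response"]}
print(shared_record)  # no direct identifiers leave the survey team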

2. Harm prevention lens for evaluation

The discussion highlighted that evaluation has not often sought to understand the harm of practices or interventions. When it does, however, the results can powerfully shed new light on an issue. A case that starkly illustrates potential under-reporting is that of the UN Military Operation in Liberia (UNMIL). UNMIL was put in place with the aim "to consolidate peace, address insecurity and catalyze the broader development of Liberia". Traditionally we would evaluate against this objective. Taking a harm lens, we might instead evaluate the sexual exploitation and abuse related to the deployment. The official reporting system records low levels of abuse: 14 cases from 2007 to 2008 and 6 in 2015. A study by Beber, Gilligan, Guardado and Karim, however, estimated through a representative randomized survey that more than half of eighteen- to thirty-year-old women in greater Monrovia have engaged in transactional sex, and that most of them (more than three-quarters, or about 58,000 women) have done so with UN personnel, typically in exchange for money.

Changing evaluation practice should not just focus on harm within human systems, but should also provide insight into the broader ecosystem. Institutionally, there need to be champions for identifying harm within and through monitoring and evaluation practice, and for changing practice accordingly.

3. Strengthening safeguarding and evaluation skills

We need to resource teams appropriately so they have the capacity to be responsive to harm and reflective about the potential for harm. This is about tools and procedures as well as conceptual frames.

Tools and procedures can include, for example:

  • Codes-of-conduct that create a safe environment for reporting issues
  • Transparent reporting lines to safeguarding/safe programming advisors
  • Training based on actual cases
  • Safe data protocols (see above)

All of these fall by the wayside, however, if the values and concepts that guide implementation are absent. At the workshop, Rodney Hopson, drawing on environmental policy and concepts of ecology, presented a frame for increasing evaluators' usefulness in complex ecologies where safeguarding issues are prevalent. It emphasizes:

  • Relationships – the need to identify and relate to key interests, interactions, variables and stakeholders amid dynamic and complex issues in an honest manner that is based on building trust.
  • Responsibilities – acting with propriety, doing what is proper, fair, right, just in evaluation against standards.
  • Relevance – being accurate and meaningful technically, culturally and contextually.

Safe monitoring and evaluation in the 21st Century does not just ask 'What works?'; it needs to be relentless in asking 'How can we work differently?'. This includes understanding how harm is connected across human and environmental systems. The three areas noted here are the start of a conversation and a challenge to institutions to think more about what it means to be safe in monitoring and evaluation practice.

Planning to attend the American Evaluation Association Conference this week? Join us for the session "Institutionalizing Doing no Harm in Monitoring and Evaluation" on Thursday, Nov 14, 2019, from 8:00 to 9:00 AM in room CC M100 H.

Panelists will discuss ideas to better address harm with regard to: (i) harm identification and mitigation in evaluation practice; (ii) responsible data practice; (iii) understanding harm in an international development context; and (iv) evaluation in complex ecologies.

The panel will be chaired by Veronica M Olazabal (Senior Advisor & Director, Measurement, Evaluation and Organizational Performance, The Rockefeller Foundation), with speakers Stephen Porter (Evaluation Strategy Advisor, World Bank), Linda Raftree (Independent Consultant, Organizer of MERL Tech), Dugan Fraser (Prof & Director, CLEAR-AA, University of the Witwatersrand, Johannesburg) and Rodney Hopson (Prof of Evaluation, Department of Ed Psych, University of Illinois Urbana-Champaign). View the full program here: https://lnkd.in/g-CHMEj

Ethics and unintended consequences: The answers are sometimes questions

by Jo Kaybryn

Our MERL Tech DC session, “Ethics and unintended consequences of digital programs and digital MERL” was a facilitated discussion about some of the challenges we face in the Wild West of digital and technology-enabled MERL and the data that it generates. Here are some of the things that stood out from discussions with participants and our experience.

Purposes

Sometimes we are not clear on why we are collecting data. 'Just because we can' is not a valid reason to collect or use data and technology. What purposes are driving our data collection and use of technology? What is the problem we are trying to solve? A lack of specificity can allow us to stray into speculative data collection — if we're collecting data on X, then it's a good opportunity to collect data on Y "in case we need it in the future". Do we ever really need it in the future? And if we do go back to it, we often find that because we didn't collect the data on Y with a specific purpose, it's not the "right" data for our needs. So let's always ask ourselves: why are we collecting this data, and do we really need it?

Tensions

Projects are increasingly under pressure to be more efficient and cost-effective in their data collection, yet the need or desire to conduct more robust assessments can require collecting data on multiple dimensions within a community. These two dynamics are often in conflict with each other. Here are three questions that can help guide our decision making:

  • Are there existing data sets that are “good enough” to meet the M&E needs of a project? Often there are, and they are collected regularly enough to be useful. Lean on partners who understand the data space to help map out what exists and what really needs to be collected. Leverage partners who are innovating in the data space – can machine learning and AI-produced data meet 80% of your needs? If so, consider it.
  • What data are we critically in need of to assess a project? Build an efficient data collection methodology that considers respondent burden and potentially includes multiple channels for receiving responses to increase inclusivity.
  • What will the data be used for? Sensitive contexts and life or death decisions require a different level of specificity and periodicity than less sensitive projects. Think about data from this lens when deciding which information to collect, how often to collect it, and who to collect it from.

Access

It is worth exploring questions of access in our data collection practices. Who has access to the data and the technology? Do the people whom the data is about have access to it? Have we considered the harms that could come from the collection, storage, and use of data? For instance, while it can be useful to know where all the clients accessing a pregnancy clinic are located in order to design better services, an unintended consequence may be that others gain the ability to identify who is pregnant, something those clients might not want known. What can we do to protect the privacy of vulnerable populations? Also, going digital can be helpful, but if a person or community implicated in a data collection endeavour does not have access to technology or to a charging point, are we not just increasing or reinforcing inequality?
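One simple mitigation, offered as a generic sketch rather than a prescription, is to coarsen location data before it is stored or shared so that individual clients cannot be pinpointed.

# Generic sketch: coarsen GPS coordinates before storage so individuals cannot be pinpointed.
# Rounding to one decimal place puts each point in a cell roughly 11 km across; adjust to the context and risk.
def coarsen(lat: float, lon: float, decimals: int = 1) -> tuple:
    return (round(lat, decimals), round(lon, decimals))

exact_location = (-1.28333, 36.81667)  # hypothetical clinic client location
print(coarsen(*exact_location))        # (-1.3, 36.8): enough for service planning, not for identifying a household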

Transparency

While we often advocate for transparency in many parts of our industry, we are not always transparent about our data practices. Are we willing to tell others, to tell community members, why we are collecting data, using technology, and how we are using information? If we are clear on our purpose but not willing to be transparent about it, that might be a good reason to reconsider. Yet transparency does not equate to accountability, so what are the mechanisms for ensuring greater accountability towards the people and communities we seek to serve?

Power and patience

One of the issues we’re facing is power imbalances. The demands that are made of us from donors about data, and the technology solutions that are presented to us, all make us feel like we’re not in control. But the rules haven’t been written yet — we get to write them.

One of the lessons from the responsible data workshop leading up to the conference was that organisations can get out in front of demands for data by developing their own data management and privacy policies. From this position it is easier to enter into dialogues and negotiations, with the organisational policy as your backstop. Therefore, it is worth asking, Who has power? For what? Where does it reside and how can we rebalance it?

Literacy underpins much of this – linguistic, digital, identity, ethical literacy. Often when it comes to 'digital' we immediately fall under the spell of the tyranny of the urgent. Therefore, in what ways can we adopt a more 'patient' or 'reflective' practice with respect to digital?


Creating and Measuring Impact in Digital Social and Behavior Change Communication 

By Jana Melpolder

People are accessing the Internet, smartphones, and social media like never before, and the social and behavior change communication community is exploring the use of digital tools and social media for influencing behavior. The MERL Tech session, "Engaging for responsible change in a connected world: Good practices for measuring SBCC impact," was put together by Linda Raftree, Khwezi Magwaza, and Yvonne MacPherson, and it set out to help participants dive into Digital Social and Behavior Change Communication (SBCC).

Linda is the MERL Tech Organizer, but she also works as an independent consultant. She has worked as an Advisor for Girl Effect on research and digital safeguarding in digital behavior change programs with adolescent girls. She also recently wrote a landscaping paper for iMedia on Digital SBCC. Linda opened the session by sharing lessons from the paper, complemented by learning drawn from research and practice at Girl Effect.

Linda shares good practices from a recent landscape report on digital SBCC.

Digital SBCC is expanding due to smartphone access. In the work with Girl Effect, it was clear that even when girls in lower-income communities did not own smartphones, they often borrowed them. Project leaders should consider several relevant theories on influencing human behavior, such as social cognitive theory, behavioral economics, and social norm theory. Additionally, an ethical issue in SBCC projects is whether there is transparency about the behavior change efforts an organization is carrying out, and whether people even want their behaviors to be challenged or changed.

When it comes to creating an SBCC project, Linda shared a few tips: 

  • Users are largely unaware of data risks when sharing personal information online
  • We need to understand peoples’ habits. Being in tune with local context is important, as is design for habits, preferences, and interests.
  • Avoid being fooled by vanity metrics. For example, even if something had a lot of clicks, how do you know an action was taken afterwards? 
  • Data can be sensitive to deal with. For some, just looking at information online, such as facts on contraception, can put them at risk. Be sure to be careful of this when developing content.

The session's second presenter was Khwezi Magwaza, who has worked as a writer and as a radio, digital, and television producer. She worked as a content editor for Praekelt.org and also served as the Content Lead at Girl Effect. Khwezi is currently advising an International Rescue Committee platform in Tanzania that aims to support improved gender integration in refugee settings. Lessons from Khwezi's work in digital SBCC included:

  • Sex education can be taboo, and community healthcare workers are often people’s first touch point. 
  • There is a difference between social behavior change and, more precisely, individual behavior change. 
  • People and organizations working in SBCC need to think outside the box and learn how to measure behavior change in non-traditional ways. 
  • Just because something is free doesn’t mean people will like it. We need to aim for high quality, modern, engaging content when creating SBCC programs.
  • It’s also critical to hire the right staff. Khwezi suggested building up engineering capacity in house rather than relying entirely on external developers. Having a digital company hand something over to you that you’re stuck with is like inheriting a dinosaur. Organizations need to have a real working relationship with their tech supplier and to make sure the tech can grow and adapt as the program does.

Panelists discuss digital SBCC with participants.

The third panelist from the session was Yvonne MacPherson, the U.S. Director of BBC Media Action, the BBC's international NGO, which uses communication and media to further development. Yvonne noted that:

  • Donors often want an app, but it’s important to push back on solely digital platforms. 
  • Face-to-face contact and personal connections are vital in programs, and social media should not be the only form of communication within SBCC programs.
  • There is a need to look at social media outreach experiences from various sectors and learn from them. However, the contexts in which INGOs and national NGOs work differ from the environments where most people with digital engagement skills have worked, so we need more research, and it's critical to understand the local context and behaviors of the populations we want to engage.
  • Challenges are being seen with so-called "dark channels" (WhatsApp, Facebook Messenger), where many people are moving and where it becomes difficult to track behaviors. Ethical issues with dark channels have also emerged: there are rich content options on them, but researchers have yet to figure out how to obtain consent to use these channels for research without disrupting the dynamic within them.

I asked Yvonne if, in her experience and research, she thought Instagram or Facebook influencers (like celebrities) influenced young girls more than local community members could. She said there's really no one answer for that. Detailed ethnographic research is needed to understand the local context before making any decisions on the design of an SBCC campaign. It's critical to understand the target group — what ages they are, where they come from, and other similar questions.

Resources for the Reader

To learn more about digital SBCC, check out these resources or get in touch with each of the speakers on Twitter.

Three Tips for Bridging Tech Development and International Development 

by Stephanie Jamilla

The panel "Technology Adoption and Innovation in the Industry: How to Bridge the International Development Industry with Technology Solutions" proved to be an engaging conversation between four technology and international development practitioners. Admittedly, as someone who comes from more of a development background, I found some parts of this conversation hard to follow. However, three takeaways stuck out to me after hearing the insights and experiences of Aasit Nanavati, a Director at DevResults; Joel Selanikio, CEO and Co-Founder of Magpi; Nancy Hawa, a Software Engineer at DevResults; and Mike Klein, a Director at IMC Worldwide and the panel moderator. 

“Innovation isn’t always creation.”

The fact that organizations often think about innovation and creation as synonymous actually creates barriers to entry for tech in the development market. When asked to speak about these barriers, all three panelists mentioned that clients oftentimes want highly customized tools when they could achieve their goals with what already exists in the market. Nanavati (whose quote titles this section) followed his point about innovation not always requiring creation by asserting that innovation is sometimes just a matter of implementing existing tools really well. Hawa added to this idea by arguing that sometimes development practitioners and organizations should settle for something that's close enough to what they want in order to save money and resources. When facing clients' unrealistic expectations about original creation, consultancies should explain that the super-customized system the client asks for may actually be unusable because of the level of complexity this customization would introduce. While this may be hard to admit, communicating with candor is better than the alternative — selling a bad product for the sake of expanding business. 

An audience member asked how one could convince development clients to accept the non-customized software. In response, Hawa suggested that consultancies talk about software in a way that non-tech clients understand. Say something along the lines of, “Why recreate Microsoft Excel or Gmail?” Later in the discussion, Selanikio offered another viewpoint. He never tries to persuade clients to use Magpi. Rather, he does business with those who see the value of Magpi for their needs. This method may be effective in avoiding a tense situation between the service provider and client when the former is unable to meet the unrealistic demands of the latter.

We need to close the gap in understanding between the tech and development fields.

Although not explicitly stated, one main conclusion that can be drawn from the panel is that a critical barrier keeping technology from effectively penetrating development is miscommunication and misunderstanding between actors from the two fields. By learning how to communicate better about the technology’s true capacity, clients’ unrealistic expectations, and the failed initiatives that often result from the mismatch between the two, future failures-in-the-making can be mitigated. Interestingly, all three panelists are, in themselves, bridges between these two fields, as they were once development implementors before turning to the tech field. Nanavati and Selanikio used to work in the public health sphere in epidemiology, and Hawa was a special education teacher. Since the panelists were once in their clients’ positions, they better understand the problems their clients face and reflect this understanding in the useful tech they develop. Not all of us have expertise in both fields. However, we must strive to understand and accept the viewpoints of each other to effectively incorporate technology in development. 

Grant funding has its limitations.

This is not to say that you cannot produce good tech outputs with grant funding. However, using donations and grants to fund the research and development of your product may result in something that caters to the funders’ desires rather than the needs of the clients you aim to work with. Selanikio, while very grateful to the initial funders of Magpi, found that once the company began to grow, grants as a means of funding no longer worked for the direction that he wanted to go. As actors in the international development sphere, the majority of us are mission-driven, so when the funding streams hinder you from following that mission, then it may be worth considering other options. For Magpi, this involved having both a free and paid version of its platform. Oftentimes, clients transition from the free to paid version and are willing to pay the fee when Magpi proves to be the software that they need. Creative tech solutions require creative ways to fund them in order to keep their integrity.

Technology can greatly aid development practitioners to make a positive impact in the field. However, using it effectively requires that all those involved speak candidly about the capacity of the tech the practitioner wants to employ and set realistic expectations. Each panelist offered level-headed advice on how to navigate these relationships but remained optimistic about the role of tech in development. 

Chain Reaction: How Does Blockchain Fit, if at All, into Assessments of Value for Money of Education Projects?

by Cathy Richards

In this panel, “Chain Reaction: How does blockchain fit, if at all, into assessments of value for money of education projects,” hosted by Christine Harris-Van Keuren of Salt Analytics, panelists gave examples of how they’ve used blockchain to store activity and outcomes data and to track the flow of finances. Valentine Gandhi from The Development Café served as the discussant.

Value for money analysis (or benefit-cost analysis, cost-economy, cost-effectiveness, cost-efficiency, or cost-feasibility) is defined as an evaluation of the best use of scarce resources to achieve a desired outcome. Each panelist examined the value for money of blockchain by taking on an aspect of an adapted value-for-money framework, which takes into account resources, activities, outputs, and outcomes. Panel members were specifically asked to explain what they gained and lost by using blockchain, as well as whether they had to use blockchain at all.

Ben Joakim is the founder and CEO of Disberse, a new financial institution built on distributed ledger technology. Disberse aims to ensure greater privacy and security for the aid sector — which serves some of the most vulnerable communities in the world. Joakim notes that in the aid sector, traditional banks are often slow and expensive, which can be detrimental during a humanitarian crisis. In addition, traditional banks can lack transparency, which increases the potential for the mismanagement and misappropriation of funds. Disberse works to tackle those problems by creating a financial institution that is not only efficient but also transparent and decentralised, thus allowing for greater impact with available resources. Additionally, Disberse allows for multi-currency accounts, foreign currency exchanges, instant fund transfers, end-to-end traceability, donation capabilities, regulatory compliance, and cash transfer systems. Since inception, Disberse has delivered pilots in several countries including Swaziland, Rwanda, Ukraine, and Australia.

David Mikhail of UNCDF discussed the organization's use of blockchain technologies in the Nepal remittance corridor. In 2017 alone, Nepal received $6.9 billion in remittances. These funds are responsible for 28.4% of the country's GDP. One of the main challenges for Nepali migrant families is a lack of financial inclusion, characterized by credit interest rates as high as 30%, lack of a documented credit history, and lack of sufficient collateral. Secondly, families have a difficult time building capital once they migrate. The high costs of migration, high-interest-rate loans, non-stimulative spending that impacts their ability to save and invest, and the lack of a credit history all make it difficult for migrants to break free of the poverty cycle. Due to this, the organization asked itself whether it could create a new credit product tied to remittances to provide capital and fuel domestic economic development. In theory, this solution would drive financial inclusion by channeling remittances through the formal sector. The product would not only leverage blockchain in order to create a documented credit history, but it would also direct the flow of remittances into short- and long-term savings or credit products that would help migrants generate income and assets. 

Tara Vassefi presented on her experience at Truepic, a photo and video verification platform that aims to foster a healthy civil society by pushing back against disinformation. They do this by bolstering the value of authentic photos through the use of verified pixel data from the time of capture and through the independent verification of time and location metadata. Hashed references to time, date, location and exact pixelation are stored on the blockchain. The benefits of using this technology are that the data is immutable and it adds a layer of privacy and security to media. The downsides include marginal costs and the general availability of other technologies. Truepic has been used for monitoring and evaluation purposes in Syria, Jordan, Uganda, China, and Latin America to remotely monitor government activities and provide increased oversight at a lower cost. They’ve found that this human-centric approach, which embeds technology into existing systems, can close the trust gap currently found in society.
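The underlying mechanism can be sketched in a few lines: hash the image bytes together with the capture metadata, and anchor that fingerprint somewhere immutable. The snippet below is a generic illustration of the idea, not Truepic's implementation.

# Generic illustration of content fingerprinting for verification; not Truepic's actual implementation.
import hashlib
import json

def capture_fingerprint(image_bytes: bytes, metadata: dict) -> str:
    """Hash the image and its capture metadata together; any later edit changes the hash."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

metadata = {"timestamp": "2019-09-05T14:32:00Z", "lat": 33.51, "lon": 36.29}
fingerprint = capture_fingerprint(b"...raw image bytes...", metadata)
print(fingerprint)  # this reference, not the photo itself, is what would be anchored on a ledger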

Smartcards for MERL: Worth the Money?

by Jonathan Tan

In 2014, ACDI/VOCA ran into a common problem: their beneficiaries – smallholder farmers in Ghana – had been attending trainings for several agricultural programs, but monitoring workshop attendance and verifying the identity of each attendee were slow, error-prone, and labor-intensive. There were opportunities for errors with transcription and data entry at several points in the reporting process, each causing delays downstream for analysis and decision-making. So they turned to a technological solution: contactless smartcards.

At MERL Tech DC, Nirinjaka Ramasinjatovo and Nicole Chao ran a session called “Smartcards for MERL: Worth the Money” to share ACDI/VOCA’s experiences.

The system was fairly straightforward: after a one-time intake session in each village, beneficiaries are registered in a central database, and a smartcard with their name and photo is printed and distributed to each. ACDI/VOCA hired developers to build a simple graphical interface to the database for trainers to use. At each training, trainers bring laptops equipped with card readers to take attendance, and the attendance data is synchronized with the database upon return to an internet-connected office. 
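A minimal sketch of the kind of attendance record and offline-first queue this workflow implies is below. The fields and identifiers are our own hypothetical illustration, not ACDI/VOCA's actual schema.

# Hypothetical attendance data model for an offline-first smartcard workflow; not ACDI/VOCA's schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AttendanceRecord:
    card_id: str          # read from the contactless smartcard
    training_id: str
    recorded_at: str
    synced: bool = False  # flipped to True once uploaded to the central database

def record_attendance(card_id: str, training_id: str) -> AttendanceRecord:
    """Trainers log attendance offline; records queue up until the laptop is back online."""
    return AttendanceRecord(card_id, training_id, datetime.now(timezone.utc).isoformat())

queue = [record_attendance("CARD-0042", "TRAIN-2014-07")]
print([asdict(r) for r in queue])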

The speakers discussed several expected and unexpected benefits from introducing the smartcards. Registration at trainings was streamlined, and data collection became faster and more accurate. Attendance and engagement at training sessions also increased. ACDI/VOCA hypothesized that possessing a physical token associated with the otherwise knowledge-based program reminded beneficiaries of its impact; one of the speakers recounted observing some farmers proudly wearing the smartcards on lanyards at non-training social events in the community. Finally, improved data tracking enabled analysts at ACDI/VOCA to compare farmers' attendance rates at training sessions to their reported agricultural yield increases and thus measure impact more effectively.

Process durations for developing the 2014 smart card system in Ghana (left), vs. the 2018 smart tags in Colombia (right).

Then came the perennial question: what did it cost? And was it worth it? For the 2014 Feed the Future program in Ghana, the smartcard system took six months of preparation to deploy (including requirements gathering, software development, hardware procurement, and training). While the cards were fairly inexpensive at 50 to 60 cents (US) apiece, the system had not insignificant fixed costs: card printers were $1,500 each, and the total software development cost was between $15,000 and $25,000.

ACDI/VOCA sought to improve on this system in a subsequent 2018 emergency response program in Colombia. Instead of smartcards, beneficiaries were issued small contactless tags, and enumerators used tablets instead of laptops to administer surveys and track attendance. Crucially, rather than hiring developers to write new software from scratch, they made use of Microsoft PowerApps, which was more straightforward to deploy; the PowerApps-based system took far less time to test and train enumerators on. It also had the benefit of being easily modifiable post-deployment (which had not been the case with the smartcards). The contactless tags were also less costly at $0.10 to $0.15 apiece, with readers in the $15-20 range. All in all, the contactless tag system deployed in Colombia proved to be far more cost-effective for ACDI/VOCA than the smartcards had been in the previous Ghana project. 
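To make the comparison concrete, here is a back-of-the-envelope calculation using the midpoints of the cost ranges reported in the session. The beneficiary count and hardware quantities are assumptions of ours, for illustration only.

# Back-of-the-envelope comparison; beneficiary and hardware counts are assumed for illustration.
beneficiaries = 10_000

# Ghana smartcard system (2014): midpoints of the reported ranges.
ghana_fixed = 20_000 + 5 * 1_500                   # ~$20k custom software + 5 card printers at $1,500
ghana_total = ghana_fixed + beneficiaries * 0.55   # cards at ~$0.55 each

# Colombia contactless tag system (2018): PowerApps in place of bespoke software.
colombia_fixed = 20 * 17.50                              # 20 readers at ~$17.50 each; little custom development
colombia_total = colombia_fixed + beneficiaries * 0.125  # tags at ~$0.125 each

print(f"Ghana cost per beneficiary:    ${ghana_total / beneficiaries:.2f}")     # ~$3.30
print(f"Colombia cost per beneficiary: ${colombia_total / beneficiaries:.2f}")  # ~$0.16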

Based on the two projects discussed, the speakers proposed the following set of questions to consider for future projects:

  1. Is there a high number of beneficiaries in your program?
  2. Does each beneficiary have the potential to receive multiple benefits/programs?
  3. Is there a strong need for identity authentication?
  4. Do you have access to software developers?

If you answered "yes" to all four questions, then it is likely that a smart identification system based on cards, tags, etc. will be worth the upfront investment and maintenance costs. If, however, your answer to some or all of them is "no", there are intermediate solutions that may still be worth implementing, such as tokens with QR codes or barcodes, which provide a less strict proof of identity.