All posts by Linda Raftree

About Linda Raftree

Linda Raftree supports strategy, program design, research, and technology in international development initiatives. She co-founded MERL Tech in 2014 and Kurante in 2013. Linda advises Girl Effect on digital safety, security, and privacy and supports the organization with research and strategy. She is involved in developing responsible data policies for both Catholic Relief Services and USAID. Since 2011, she has been advising The Rockefeller Foundation’s Evaluation Office on the use of ICTs in monitoring and evaluation. Prior to becoming an independent consultant, Linda worked for 16 years with Plan International. Linda runs Technology Salons in New York City and advocates for ethical approaches to using ICTs and digital data in the humanitarian and development space. She is the co-author of several publications on technology and development, including Emerging Opportunities: Monitoring and Evaluation in a Tech-Enabled World with Michael Bamberger. Linda blogs at Wait… What? and tweets as @meowtree. See Linda’s full bio on LinkedIn.

Join the Responsible Data in MERL Initiative in Anglophone Africa

MERL Tech and CLEAR Anglophone Africa hosted three gLOCAL Evaluation events in early June under the theme “How to conduct Digital MERL during COVID-19.” These covered the topics of responsible data use, remote monitoring, and administrative data use.

Two key needs emerged from the events: 1) guidance on data governance and 2) orientation on responsible data practices. Both policy and practice need to be contextualized for African settings and aimed at supporting African monitoring, evaluation, research and learning (MERL) practitioners in their work.

As a follow-on activity, CLEAR Anglophone Africa is calling on M&E practitioners to join this responsible data project for African MERL practitioners. CLEAR Anglophone Africa and MERL Tech will be collaborating on the initiative.

Click here to register your interest in the Responsible Data project!

For more information, watch the video from CLEAR Anglophone Africa, and sign up to participate!


New Research! The State of the Field of MERL Tech, 2014-2019

The year 2020 is a compelling time to look back and pull together lessons from five years of convening hundreds of monitoring, evaluation, research, and learning and technology practitioners who have joined us as part of the MERL Tech community. The world is in the midst of the global COVID-19 pandemic, and there is an urgent need to know what is happening, where, and to what extent. Data is a critical piece of the COVID-19 response — it can mean the difference between life and death. And technology use is growing due to stay-at-home orders and a push for “remote monitoring” and data collection from a distance.

At the same time, we’re witnessing (and I hope, also joining in with) a global call for justice — perhaps a tipping point — in the wake of decades of racist and colonialist systems that operate at the level of nations, institutions, organizations, the global aid and development systems, and the tech sector. There is no denying that these power dynamics and systems have shaped the MERL space as a whole, and the MERL Tech space as well.

Moments of crisis tend to test a field, and we live in extreme times. The coming decade will demand a nimble, adaptive, fair, and just use of data for managing complexity and for gaining longer-term understanding of change and impact. Perhaps most importantly, in 2020 and beyond, we need meaningful involvement of stakeholders at every level and openness to a re-shaping of our sector and its relationships and power dynamics.

It is in this time of upheaval and change that we are releasing a set of four papers that aim to take stock of the field from 2014-2019 as a launchpad for shaping the future of MERL Tech. In September 2018, the papers’ authors began reviewing the past five years of MERL Tech events to identify lessons, trends, and issues in this rapidly changing field. They also reviewed the literature base in an effort to determine what we know about technology in MERL, what we still need to understand, and where the gaps in the formal literature lie. This is no longer a nascent field, yet it is one that is hard to keep up with, given that it is fast paced and constantly shifting with the advent of new technologies. We have learned many lessons over the past five years, but complex political, technical, and ethical questions remain.

The State of the Field series includes four papers:

MERL Tech State of the Field: The Evolution of MERL Tech: Linda Raftree, independent consultant and MERL Tech Conference organizer.


What We Know About Traditional MERL Tech: Insights from a Scoping Review: Zach Tilton, Michael Harnar, and Michele Behr, Western Michigan University; Soham Banerji and Manon McGuigan, independent consultants; and Paul Perrin, Gretchen Bruening, John Gordley and Hannah Foster, University of Notre Dame; Linda Raftree, independent consultant and MERL Tech Conference organizer.

Big Data to Data Science: Moving from “What” to “How” in the MERL Tech Space: Kecia Bertermann, Luminate; Alexandra Robinson, Threshold.World; Michael Bamberger, independent consultant; Grace Lyn Higdon, Institute of Development Studies; Linda Raftree, independent consultant and MERL Tech Conference organizer.

Emerging Technologies and Approaches in Monitoring, Evaluation, Research, and Learning for International Development Programs: Kerry Bruce and Joris Vandelanotte, Clear Outcomes; and Valentine Gandhi, The Development CAFE and Social Impact.

Through these papers, we aim to describe the State of the Field up to 2019 and to offer a baseline point in time from which the wider MERL Tech community can take action to make the next phase of MERL Tech development effective, responsible, ethical, just, and equitable. We share these papers as conversation pieces and hope they will generate more discussion in the MERL Tech space about where to go from here.

We’d like to start or collaborate on a second round of research to delve into areas that were under-researched or less developed. Your thoughts are most welcome on topics that need more research. If you are conducting research about MERL Tech, please get in touch; we’re happy to share your work here on MERL Tech News or to chat about how we could work together!

EES podcasts and webinar series on emerging technologies

Guest post, Lauren Weiss, European Evaluation Society

As you may be aware, the European Evaluation Society’s biennial conference has been postponed to September 2021, due to the COVID-19 pandemic.

In the meantime, EES is continuing to work for you, and we are excited to announce the launch of two new initiatives.

First, our new podcast series, EvalEdge, is now available! It focuses on the role of evaluation in shaping how new and emerging technologies can be adapted in international development and in larger society. It explores the latest technological developments, from big data and geospatial analysis to blockchain and the Internet of Things (IoT).

Our first episode features MERL Tech’s co-founder Linda Raftree, who discusses innovative examples of using big data, the ethical considerations to be aware of, and much more! Check it out here!

Building on this momentum, EES is also launching a webinar series titled “Emerging Data Landscapes in M&E.” In partnership with Dev CAFÉ, MERL Tech, and the World Bank IEG, this series is devoted to discussing the use of innovative technologies in the world of evaluation.

The first event, “Geospatial, location and big data: Where have we been and where can we go?” will take place on 28 July, 15:00 CEST (9:00 EDT).

This interactive and free webinar will provide concrete examples of using geospatial and location data to improve our M&E practices. It will also discuss the barriers to using such technologies and brainstorm on ways to overcome them, by inviting feedback and questions from the online audience.

It will include speakers from the World Bank IEG, the European Commission’s DEVCO/ESS, and the Global Environment Facility. You can find more information on our website.

To register for this webinar click here.

We look forward to seeing you on 28 July for this exciting discussion!

For now, to learn more about EES’ upcoming activities, visit our website, or sign up for our monthly newsletter by emailing secretariat@europeanevaluation.org. You can also follow us on Twitter, LinkedIn and Facebook.

Use of Administrative Data for the COVID-19 Response

Administrative data is data collected as part of the regular activities of program implementation. It has not been sufficiently tapped for learning and research. As the COVID-19 pandemic advances, how might administrative data be used to support the COVID-19 response and responses to other national or global pandemics?

At the final event in the MERL Tech and CLEAR-Anglophone Africa series for gLOCAL Evaluation Week, we were joined by Kwabena Boakye, Ministry of Monitoring and Evaluation, Ghana; Bosco Okumu, National Treasury and Planning, Kenya; Stephen Taylor, Department of Basic Education, South Africa; and Andrea Fletcher, Cooper-Smith.

The four panelists described the kinds of administrative or “routine” data they are using in their work. For example, in Kenya, educational records, client information from financial institutions, hospital records of patients, and health outcomes are being used to plan and implement actions related to COVID-19 and to evaluate the impact of different COVID-related policies that governments have put in place or are considering. In Malawi, administrative data is combined with other sources, such as Google mobility data, to understand how migration might be affecting the virus’ spread. COVID-19 is putting a spotlight on weaknesses and gaps in existing administrative data systems.

Watch the video here:

Listen to just the audio from the event here:

Summary:

Benefits of administrative data include that:

  • Data is generated through normal operations and does not require an additional survey to create it
  • It can be more relevant than a survey because it covers a large swath of the population, or even the entire population
  • It is an existing data source during COVID when it’s difficult to collect new data
  • It can be used to create dashboards for decision-makers at various levels

Challenges include:

  • Data sits in silos and the systems are not designed to be interoperable
  • Administrative data may leave out those who are not participating in a government program
  • Data sets are time-bound to the life of the program
  • Some administrative data systems are outdated and have poor quality data that is not useful for decision-making or analysis
  • There is demand for beautiful dashboards and maps, but insufficient attention to the underlying data processes needed to produce information that can actually be used
  • Real-time data is not possible when there is no Internet connectivity
  • There is insufficient attention to data privacy and protection, especially for sensitive data
  • Institutions may resist providing data if weaknesses are highlighted through the data or they think it will make them look bad

Recommendations for better use of administrative data in the public sector:

  • Understand the data needs of decision-makers and build capacity to understand and use data systems
  • Map the data that exists, assess its quality, and identify gaps
  • Design and enact policies and institutional arrangements, tools, and processes to make sure that data is organized and interoperable (see the short sketch after this list).
  • Automate processes with digital tools to make them more seamless.
  • Focus on enhancing underlying data collection processes to improve the quality of administrative data; this includes making it useful for those who provide the data so that it is not yet another administrative burden with no local value.
  • Assign accountability for data quality across the entire system.
  • Learn from the private sector, but remember that the public sector has different incentives and goals.
  • Rather than fund more research on administrative data, donors should put funds into training on data quality, data visualization, and other skills related to data use and data literacy at different levels of government.
  • Determine how to improve data quality and use of existing administrative data systems rather than building new ones.
  • Make administrative data useful to those who are inputting it to improve data quality.

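To make the interoperability recommendation above a bit more concrete, here is a minimal sketch (in Python with pandas; all field names and records are hypothetical) of mapping two administrative datasets that use different column names and date formats onto one shared schema so they can be combined and analyzed together.

```python
# Minimal sketch: harmonizing two administrative "silos" onto one shared schema.
# Field names and records are hypothetical placeholders.
import pandas as pd

health = pd.DataFrame({
    "facility_id": ["HF-01", "HF-02"],
    "district_name": ["Lilongwe", "Blantyre"],
    "visit_date": ["03/06/2020", "04/06/2020"],      # day/month/year
})

education = pd.DataFrame({
    "school_code": ["SC-11", "SC-12"],
    "district": ["LILONGWE", "blantyre"],
    "date_recorded": ["2020-06-03", "2020-06-04"],   # ISO format
})

def to_shared(df, id_col, district_col, date_col, dayfirst=False):
    """Map one source dataset onto a shared record_id / district / date schema."""
    return pd.DataFrame({
        "record_id": df[id_col],
        "district": df[district_col].str.strip().str.title(),
        "date": pd.to_datetime(df[date_col], dayfirst=dayfirst),
    })

combined = pd.concat([
    to_shared(health, "facility_id", "district_name", "visit_date", dayfirst=True),
    to_shared(education, "school_code", "district", "date_recorded"),
], ignore_index=True)

print(combined)
```

The hard part in practice is agreeing on the shared schema and the standards behind it; once those exist, the mapping itself is the easy part.
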
Download the event reports:

See other gLOCAL Evaluation 2020 events from CLEAR-AA and MERL Tech:

Remote Monitoring in the Time of Coronavirus

On June 3, MERL Tech and CLEAR-Anglophone Africa hosted the second of three virtual events for gLOCAL Evaluation Week. At this event, we heard from Ignacio Del Busto, IDInsight, Janna Rous, Humanitarian Data, and Ayanda Mtanyana, New Leaders, on the topic of remote monitoring.

Data is not always available, and it can be costly to produce. One challenge is generating data cheaply and quickly to meet the needs of decision-makers within the operational constraints that enumerators face. Another is ensuring that the process is high quality and also human-centered, so that we are not simply extracting data. This can be a challenge when there is low connectivity and reach, poor network capacity and access, and low smartphone access. Enumerator training is also difficult when it must be done remotely, especially if enumerators are new to technology and more accustomed to doing paper-based surveys.

Watch the video below.

Listen to just the audio from the session here.

Some recommendations arising from the session included:

  • Learn and experiment as you try new things. For example, track when and why people are dropping off a survey and find ways to improve the design and approach; drop-off might be related to the time of the call or the length of the survey (see the short sketch after this list).
  • It’s not only about phone surveys. There are other tools. For example, WhatsApp has been used successfully during COVID-19 for collecting health data.
  • Don’t just put your paper processes onto a digital device. Instead, consider how to take greater advantage of digital devices and tools to find better ways of monitoring. For example, could we incorporate sensors into the monitoring from the start? At the same time, be careful not to introduce technologies that are overly complex.
  • Think about exclusion and access. Who are we excluding when we move to remote monitoring? Children? Women? Elderly people? We might be introducing bias if we are going remote. We also cannot observe if vulnerable people are in a safe place to talk if we are doing remote monitoring. So, we might be exposing people to harm or they could be slipping through the cracks. Also, people self-select for phone surveys. Who is not answering the phone and thus left out of the survey?
  • Consider providing airtime but make sure this doesn’t create perverse incentives.
  • Ethics and doing no harm are key principles. If we are forced to deliver programs remotely, this involves experimentation. And we are experimenting with people’s lives during a health crisis. Consider including a complaints channel where people can report any issues.
  • Ensure data is providing value at the local level, and help teams see what the whole data process is and how their data feeds into it. That will help improve data quality and reduce the tendency to ‘tick the box’ for data collection or find workarounds.
  • Design systems for interoperability so that data can be integrated with other data for better insights or updated automatically. Establish data standards so that different systems capture data in the same way or in the same format.
  • Create a well-designed change management program to bring people on board and support them. Role modeling by leaders can help to promote new behaviors.

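As a minimal sketch of the drop-off tracking mentioned in the first recommendation above (question names and responses are hypothetical), the snippet below computes the share of respondents who answered each question in a phone survey and the drop between consecutive questions, which points to where people abandon the call.

```python
# Minimal sketch: tracking survey drop-off question by question.
# Question names and responses are hypothetical; None marks an unanswered question.
import pandas as pd

responses = pd.DataFrame({
    "q1_consent":   ["yes", "yes", "yes", "yes", "yes"],
    "q2_household": ["4",   "6",   "3",   "2",   None],
    "q3_income":    ["low", "mid", None,  None,  None],
    "q4_health":    ["ok",  None,  None,  None,  None],
})

# Share of respondents who answered each question, in survey order.
completion = responses.notna().mean()

# Drop between consecutive questions: where do we lose the most people?
drop_off = (-completion.diff()).fillna(0)

print(completion)
print(drop_off.sort_values(ascending=False))
```

Reviewing a table like this after each round, alongside call times and survey length, is one way to experiment with and improve the survey design.
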
Further questions to explore:

  • How can we design monitoring to be remote from the very start? What new gaps could we fill and what kinds of mixed methods could we use?
  • What two-way platforms are most useful and how can they be used effectively and ethically?
  • Can we create a simple overview of opportunities and threats of remote monitoring?
  • How can we collect qualitative data, e.g., focus groups and in-depth interviews?
  • How can we keep respondents safe? What are the repercussions of asking sensitive questions?
  • How can we create data continuity plans during the pandemic?


Download the event reports:

See other gLOCAL Evaluation 2020 events from CLEAR-AA and MERL Tech:

Using Data Responsibly During the COVID-19 Crisis

Over the past decade, monitoring, evaluation, research and learning (MERL) practices have become increasingly digitalized. The COVID-19 pandemic has made this digitalization happen with even greater speed and urgency, due to travel restrictions, quarantines, and social distancing orders from governments that are desperate to slow the spread of the virus and lessen its impact.

MERL Tech and CLEAR-Anglophone Africa are working together to develop a framework and guidance on responsible data management for MERL in the Anglophone African context. As part of this effort, we held three virtual events in early June during CLEAR’s gLOCAL Evaluation Week.

At our June 2 event, Korstiaan Wapenaar, Genesis Analytics, Jerusha Govender, Data Innovator, and Teki Akkueteh, Africa Digital Rights Hub, shared tips on how to be more responsible with data.

Data is a necessary and critical part of COVID-19 prevention and response efforts, helping us understand where the virus might appear next, who is most at risk, and where resources should be directed for prevention and response. However, we need to be sure that we are not putting people at risk of privacy violations or misuse of personal data, and that we are managing data responsibly so that we don’t unnecessarily create fear or panic.

Watch the video below:

Listen to the audio from the session here:

Session summary:

  • MERL practitioners have clear responsibilities when sharing, presenting, consuming, and interpreting data. Individuals and institutions may use data to gain prestige or to justify government decisions, and this can allow bias to creep in. Data quality is critical for informing decisions, and information gaps create the risk of misinformation and flawed understanding. We need to embrace uncertainty and the limitations of the science, provide context and definitions so that our sources are clear, and ensure transparency around the numbers and the assumptions that underpin our work.
  • MERL Practitioners should provide contextual information and guidance on how to interpret the data so that people can make sense of it in the right way. We should avoid cherry picking data to prove a point, and we should be aware that data visualization carries power to sway opinions and decisions. It can also influence behavior change in individuals, so we need to take responsibility for that. We also need to find ways to visualize data for lay people and non-technical sectors.
  • Critical data is needed, yet it might be used in negative or harmful ways, for example, COVID-related stigmatization that can affect human dignity. We must not override ethical and legal principles in our rush to collect data. Transparency around data collection processes and use is also needed, as well as data minimization. Some may be taking advantage of the situation to amass large amounts of data for other purposes, which is unethical. Large amounts of data also bring increased risk of data breaches. When people are scared, as in COVID times, they are more willing to hand over data. We need to provide oversight and keep watch over government entities, health facilities, and third-party data processors to ensure data is protected and not misused.
  • MERL Practitioners are seeking more guidance and support on: aspects of consent and confidentiality; bias and interference in data collection by governments and community leaders; overcollection of data leading to fatigue; misuse of sensitive data such as location data; potential for re-identification of individuals; data integrity issues; lack of encryption; and some capacity issues.
  • Good practices and recommendations include ethical clearance of data and data assurance structures; rigorous methods to reduce bias; third party audits of data and data protection processes; localization and contextualization of data processes and interpretation; and “do no harm” framing.

Download reports:

Read about the other gLOCAL Evaluation 2020 events from CLEAR-AA and MERL Tech:

Research Opportunity: Harm and the M&E Cycle

We are looking for a researcher to undertake desk-based research into how harm has been defined and integrated into monitoring and evaluation cycles. Please see the Terms of Reference and submit your short proposal by July 5, 2020, or read more about this initiative below.

Monitoring and evaluation practitioners are in a privileged position: they have the opportunity to listen to and hear the voices and stories of the people that aid and development agencies work with. These professionals often determine what gets counted and what counts. Yet practical guidance for commissioners, managers, and evaluators on managing harm is limited. The graphic above shows just some of the areas where the monitoring and evaluation process could contribute to harm.

Our privileged position as M&E practitioners brings with it the responsibility to do no harm. We need to be aware of how we might create or exacerbate harm, and also of how we might overlook harm due to our positions of power. Evaluators need to play a strong role in identifying areas where M&E can cause harm and in developing mitigation strategies to prevent or reduce that potential harm. There has been only patchy recognition of the variety of potential harms that can arise from both the action and inaction of an evaluator and others involved in monitoring and evaluation processes. There is also a wider discussion to be had around evaluation as a whole and its inherent power dynamics, which can lead to, enable, or obfuscate different types of harm and which play a role in determining what is considered harmful.

Over the past two years, a group of senior M&E practitioners* has been reflecting on harm in M&E. In the course of this work we have organized conversations and collective reflection workshops and produced think pieces, reports on priority areas, and presentations at M&E conferences. The group now looks to build on these actions through a practitioner-oriented publication. The research being commissioned aims to further map the harms that arise within monitoring and evaluation practice.

As part of this publication, we are looking for a researcher to take a deeper look at how harm has been defined and if and how “do no harm” approaches have been integrated into M&E cycles.

Potential questions for this research include:

  1. What definitions, associations, or conceptions of harm emerge from M&E literature and practice?
  2. Who are the key social actors who interact in M&E cycles?
  3. What strategies for addressing, preventing or reducing these harms have emerged and how successful have these been?

Please see the full Terms of Reference and instructions for submitting your application if you are interested in conducting this research. The deadline for submissions is Sunday July 5th. 

*The group of M&E practitioners who are working together on this topic includes: Stephen Porter, Evaluation Strategy Advisor – Independent Evaluation Group, World Bank; Veronica Olazabal, Senior Adviser and Director, Measurement, Evaluation and Organizational Performance – The Rockefeller Foundation; Prof. Rodney Hopson, Department of Educational Psychology – University of Illinois; Linda Raftree, Convener of MERL Tech; Adj. Prof Dugan Fraser, Director of the Centre for Learning on Evaluation and Results Anglophone Africa – University of the Witwatersrand.

What’s Happening with Tech and MERL?

by Linda Raftree, Independent Consultant and MERL Tech organizer

Back in 2014, the humanitarian and development sectors were in the heyday of excitement over innovation and Information and Communication Technologies for Development (ICT4D). The role of ICTs specifically for monitoring, evaluation, research and learning (aka “MERL Tech“) had not been systematized (as far as I know), and it was unclear whether there actually was “a field.” I had the privilege of writing a discussion paper with Michael Bamberger to explore how and why new technologies were being tested and used in the different steps of a traditional planning, monitoring and evaluation cycle. (See graphic 1 below, from our paper).

The approaches highlighted in 2014 focused on mobile phones, for example: text messages (SMS), mobile data gathering, use of mobiles for photos and recording, and mapping with handheld global positioning system (GPS) devices or GPS built into mobile phones. Promising technologies included tablets, which were only beginning to be used for M&E; “the cloud,” which enabled easier updating of software and applications; remote sensing and satellite imagery; dashboards; and online software that helped evaluators do their work more easily. Social media was also really taking off in 2014. It was seen as a potential way to monitor discussions among and gather feedback from program participants, and as an underutilized tool for wider dissemination of evaluation results and learning. Real-time data, big data, and feedback loops were emerging as ways to improve program monitoring and enable quicker adaptation.

In our paper, we outlined five main challenges for the use of ICTs for M&E: selectivity bias; technology- or tool-driven M&E processes; over-reliance on digital data and remotely collected data; low institutional capacity and resistance to change; and privacy and protection. We also suggested key areas to consider when integrating ICTs into M&E: quality M&E planning; design validity; value-add (or not) of ICTs; using the right combination of tools; adapting and testing new processes before roll-out; technology access and inclusion; motivation to use ICTs; privacy and protection; unintended consequences; local capacity; measuring what matters (not just what the tech allows you to measure); and effectively using and sharing M&E information and learning.

We concluded that:

  • The field of ICTs in M&E is emerging, and activity is happening at multiple levels, with a wide range of tools, approaches, and actors.
  • The field needs more documentation on the utility and impact of ICTs for M&E. 
  • Pressure to show impact may open up space for testing new M&E approaches. 
  • A number of pitfalls need to be avoided when designing an evaluation plan that involves ICTs. 
  • Investment in the development, application and evaluation of new M&E methods could help evaluators and organizations adapt their approaches throughout the entire program cycle, making them more flexible and adjusted to the complex environments in which development initiatives and M&E take place.

Where are we now? MERL Tech in 2019

Much has happened globally over the past five years in the wider field of technology, communications, infrastructure, and society, and these changes have influenced the MERL Tech space. Our 2014 focus on basic mobile phones, SMS, mobile surveys, mapping, and crowdsourcing might now appear quaint, considering that worldwide access to smartphones and the Internet has expanded beyond the expectations of many. We know that access is not evenly distributed, but the fact that more and more people are getting online cannot be disputed. Some MERL practitioners are using advanced artificial intelligence, machine learning, biometrics, and sentiment analysis in their work. And as smartphone and Internet use continue to grow, more data will be produced by people around the world. The way that MERL practitioners access and use data will likely continue to shift, and the composition of MERL teams and their required skillsets will also change.

The excitement over innovation and new technologies seen in 2014 could also be seen as naive, however, considering some of the negative consequences that have since emerged: for example, social media-inspired violence (such as that in Myanmar), election and political interference through the Internet, misinformation and disinformation, and the race to the bottom through the online “gig economy.”

In this changing context, a team of MERL Tech practitioners (both enthusiasts and skeptics) embarked on a second round of research in order to try to provide an updated “State of the Field” for MERL Tech that looks at changes in the space between 2014 and 2019.

Based on MERL Tech conferences and wider conversations in the MERL Tech space, we identified three general waves of technology emergence in MERL:

  • First wave: Tech for Traditional MERL: Use of technology (including mobile phones, satellites, and increasingly sophisticated databases) to do ‘what we’ve always done,’ with a focus on digital data collection and management. For these uses of “MERL Tech” there is a growing evidence base.
  • Second wave: Big Data. Exploration of big data and data science for MERL purposes. While plenty has been written about big data for other sectors, the literature on the use of big data and data science for MERL is somewhat limited, and it is focused more on potential than on actual use.
  • Third wave: Emerging approaches. Technologies and approaches that generate new sources and forms of data; offer different modalities of data collection; provide new ways to store and organize data; and provide new techniques for data processing and analysis. The potential of these has been explored, but there is little evidence to be found on their actual use for MERL.

We’ll be doing a few sessions at the American Evaluation Association conference this week to share what we’ve been finding in our research. Please join us if you’ll be attending the conference!

Session Details:

Thursday, Nov 14, 2:45-3:30 pm: Room CC101D

Friday, Nov 15, 3:30-4:15 pm: Room CC101D

Saturday, Nov 16, 10:15-11:00 am: Room CC200DE

Creating and Measuring Impact in Digital Social and Behavior Change Communication 

By Jana Melpolder

People are accessing the Internet, smartphones, and social media like never before, and the social and behavior change communication community is exploring the use of digital tools and social media for influencing behavior. The MERL Tech session, “Engaging for responsible change in a connected world: Good practices for measuring SBCC impact” was put together by Linda Raftree, Khwezi Magwaza, and Yvonne MacPherson, and it set out to help dive into Digital Social and Behavior Change Communication (SBCC).

Linda is the MERL Tech organizer, but she also works as an independent consultant. She has worked as an advisor for Girl Effect on research and digital safeguarding in digital behavior change programs with adolescent girls. She also recently wrote a landscape paper for iMedia on digital SBCC. Linda opened the session by sharing lessons from the paper, complemented by learning drawn from research and practice at Girl Effect.

Linda shares good practices from a recent landscape report on digital SBCC.

Digital SBCC is expanding due to smartphone access. In the work with Girl Effect, it was clear that even when girls in lower-income communities did not own smartphones, they often borrowed them. Project leaders should consider several relevant theories on influencing human behavior, such as social cognitive theory, behavioral economics, and social norm theory. Additionally, an ethical issue in SBCC projects is whether there is transparency about the behavior change efforts an organization is carrying out, and whether people even want their behaviors to be challenged or changed.

When it comes to creating an SBCC project, Linda shared a few tips:

  • Users are largely unaware of data risks when sharing personal information online
  • We need to understand peoples’ habits. Being in tune with local context is important, as is design for habits, preferences, and interests.
  • Avoid being fooled by vanity metrics. For example, even if something got a lot of clicks, how do you know an action was taken afterwards? (See the short sketch after this list.)
  • Data can be sensitive to deal with. For some people, just looking at information online, such as facts on contraception, can put them at risk. Keep this in mind when developing content.

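To illustrate the point about vanity metrics above, here is a minimal, entirely hypothetical funnel sketch: the click count looks impressive, but the share of users who go on to take the behavior the program actually cares about is the number worth reporting.

```python
# Minimal sketch: a hypothetical engagement funnel, from a vanity metric (clicks)
# down to the behavior the program actually cares about. All figures are made up.
funnel = {
    "impressions": 150_000,
    "clicks": 12_000,          # the vanity metric
    "registered": 1_800,
    "completed_module": 600,
    "reported_behavior": 120,  # the metric that matters
}

previous = None
for stage, count in funnel.items():
    if previous is None:
        print(f"{stage:20s} {count:10,d}")
    else:
        print(f"{stage:20s} {count:10,d}  ({count / previous:.1%} of previous stage)")
    previous = count
```
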
The session’s second presenter was Khwezi Magwaza, who has worked as a writer and as a radio, digital, and television producer. She worked as a content editor for Praekelt.org and also served as the Content Lead at Girl Effect. Khwezi is currently advising an International Rescue Committee platform in Tanzania that aims to support improved gender integration in refugee settings. Lessons from Khwezi’s work in digital SBCC included:

  • Sex education can be taboo, and community healthcare workers are often people’s first touch point. 
  • There is a difference between social behavior change and, more precisely, individual behavior change. 
  • People and organizations working in SBCC need to think outside the box and learn how to measure behavior change in non-traditional ways. 
  • Just because something is free doesn’t mean people will like it. We need to aim for high quality, modern, engaging content when creating SBCC programs.
  • It’s also critical to hire the right staff. Khwezi suggested building up engineering capacity in house rather than relying entirely on external developers. Having a digital company hand something over to you that you’re stuck with is like inheriting a dinosaur. Organizations need to have a real working relationship with their tech supplier and to make sure the tech can grow and adapt as the program does.
Panelists discuss digital SBCC with participants.

The third panelist from the session was Yvonne MacPherson, the U.S. Director of BBC Media Action, the BBC’s international NGO, which was established to use communication and media to further development. Yvonne noted that:

  • Donors often want an app, but it’s important to push back on solely digital platforms. 
  • Face-to-face contact and personal connections are vital in programs, and social media should not be the only form of communication within SBCC programs.
  • There is a need to learn from social media outreach experiences in various sectors, but the contexts in which INGOs and national NGOs work differ from the environments where most people with digital engagement skills have worked. We need more research, and it’s critical to understand the local context and behaviors of the populations we want to engage.
  • Challenges are being seen with so-called “dark channels” (WhatsApp, Facebook Messenger), where many people are moving and where it becomes difficult to track behaviors. Ethical issues with dark channels have also emerged: they offer rich content options, but researchers have yet to figure out how to obtain consent to use these channels for research without interrupting the dynamic within them.

I asked Yvonne if, in her experience and research, she thought Instagram or Facebook influencers (like celebrities) influenced young girls more than local community members could. She said there’s really no one answer for that. Detailed ethnographic research or a study is needed to understand the local context before making any decisions on the design of an SBCC campaign. It’s critical to understand the target group: what ages they are, where they come from, and other similar questions.

Resources for the Reader

To learn more about digital SBCC check out these resources, or get in touch with each of the speakers on Twitter:

5 tips for operationalizing Responsible Data policy

By Alexandra Robinson and Linda Raftree

MERL and development practitioners have long wrestled with complex ethical, regulatory, and technical aspects of adopting new data approaches and technologies. The topic of responsible data (RD) has gained traction over the past five years or so, and a handful of early adopters have developed and begun to operationalize institutional RD policies. Translating policy into practical action, however, can feel daunting to organizations. Constrained budgets, complex internal bureaucracies, and ever-evolving technology and regulatory landscapes make it hard to even know where to start.

The Principles for Digital Development provide helpful high level standards, and donor guidelines (such as USAID’s Responsible Data Considerations) offer additional framing. But there’s no one-size-fits-all policy or implementation plan that organizations can simply copy and paste in order to tick all the responsible data boxes. 

We don’t think organizations should do that anyway, given that each organization’s context and operating approach is different, and policy means nothing if it’s not rolled out through actual practice and behavior change!

In September, we hosted a MERL Tech pre-workshop on Operationalizing Responsible Data to discuss and share different ways of turning responsible data policy into practice. Below we’ve summarized some tips shared at the workshop. RD champions in organizations of any size can consider these when developing and implementing RD policy.

1. Understand Your Context & Extend Empathy

  • Before developing policy, conduct a non-punitive assessment (a.k.a. a landscape assessment, self-assessment, or staff research process) of existing data practices, norms, and decision-making structures. This should engage everyone who will be using or affected by the new policies and practices. Help everyone relax and feel comfortable sharing how they’ve been managing data up to now so that the organization can then improve. (Hint: avoid the term ‘audit,’ which makes everyone nervous.)
  • Create ‘safe space’ to share and learn through the assessment process:
    • Allow staff to speak anonymously about their challenges and concerns whenever possible
    • Highlight and reinforce promising existing practices
    • Involve people in a ‘self-assessment’
    • Use participatory workshops (e.g., work with a team to map a project’s data flows or conduct a Privacy Impact Assessment or a Risk-Benefits Assessment). This allows everyone who participates to gain RD awareness and learn new practical tools, while also highlighting any areas that need attention. The workshop lead or “RD champion” can also then get a better sense of the wider organization’s knowledge, attitudes, and practices related to RD
    • Acknowledge (and encourage institutional leaders to affirm) that most staff don’t have “RD expert” written into their job descriptions; reinforce that staff will not be ‘graded’ or evaluated on skills they weren’t hired for.
  • Identify organizational stakeholders likely to shape, implement, or own aspects of RD policy, and tailor your engagement strategies to their perspectives, motivations, and concerns. Some may feel motivated financially (avoiding fines or the cost of a data breach); others may be motivated by human rights or ethics; and still others might be most concerned with RD as it relates to reputation, trust, funding, and PR.
  • Map organizational policies, major processes (like procurement, due diligence, grants management), and decision making structures to assess how RD policy can be integrated into these existing activities.

2. Consider Alternative Models to Develop RD Policy 

  • There is no ‘one size fits all’ approach to developing RD policy. As the (still small, but promising) number of organizations adopting policy grows, different approaches are emerging. Here are some that we’ve seen:
    • Top-down: An institutional-level policy is developed, normally at the request of someone on the leadership team/senior management. It is then adapted and applied across projects, offices, etc. 
      • Works best when there is strong leadership buy-in for RD policy and a focal point (e.g. an ‘Executive Sponsor’) coordinating policy formation and navigating stakeholders
    • Bottom-up: A group of staff are concerned about RD but do not have support or interest from senior leadership, so they ‘self-start’ the learning process and begin shaping their own practices, joining together, meeting, and communicating regularly until they have wider buy-in and can approach leadership with a use case and budget request for an organization-wide approach.
      • Good option if there is little buy-in at the top and you need to build a case for why RD matters.
    • Project- or Team-Generated: Development and application of RD policies are piloted within a targeted project or projects or on one team. Based on this smaller slice of the organization, the project or team documents its challenges, process, and lessons learned to build momentum for and inform the development of future organization-wide policy. 
      • Promising option when organizational awareness and buy-in for RD is still nascent and/or resources to support RD policy formation and adoption (staff, financial, etc.) are limited.
    • Hybrid approach: Organizational policy/policies are developed through pilot testing across a reasonably-representative sample of projects or contexts. For example, an organization with diverse programmatic and geographical scope develops and pilots policies in a select set of country offices that can offer different learning and experiences; e.g., a humanitarian-focused setting, a development-focused setting, and a mixed setting; a small office, medium sized office and large office; 3-4 offices in different regions; offices that are funded in various ways; etc.  
      • Promising option when an organization is highly decentralized and works across diverse country contexts and settings. Supports the development of approaches that are relevant and responsive to diverse capacities and data contexts.

3. Couple Policy with Practical Tools, and Pilot Tools Early and Often

  • In order to translate policy into action, couple it with practical tools that support existing organizational practices. 
  • Make sure tools and processes empower staff to make decisions and relate clearly to policy standards or components; for example:
    • If the RD policy includes a high-level standard such as, “We ensure that our partnerships with technology companies align with our RD values,” give staff tools and guidance to assess that alignment. 
  • When developing tools and processes, involve target users early and iteratively. Don’t worry if draft tools aren’t perfectly formatted. Design with users to ensure tools are actually useful before you sink time into tools that will sit on a shelf at best, and confuse or overburden staff at worst. 

4. Integrate and “Right-Size” Solutions 

  • As RD champions, it can be tempting to approach RD policy in a silo, forgetting it is one of many organizational priorities. Be careful to integrate RD into existing processes, align RD with decision-making structures and internal culture, and do not place unrealistic burdens on staff.
  • When building tools and processes, work with stakeholders to develop responsibility assignment charts (e.g. RACI, MOCHA) and determine decision makers.
  • When developing responsibility matrices, estimate the hours each stakeholder (including partners, vendors, and grantees) will dedicate to a particular tool or process (a simple sketch follows this list). Work with anticipated end users to ensure that processes:
    • Can realistically be carried out within a normal workload
    • Will not excessively burden staff and partners
    • Are realistically proportionate to the size, complexity, and risk involved in a particular investment or project

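As a simple, hypothetical illustration of pairing a responsibility assignment chart with workload estimates (the roles, tasks, and hour figures below are placeholders), a small matrix like this can be drafted with stakeholders and then checked against normal workloads.

```python
# Minimal sketch: a RACI-style responsibility matrix with estimated hours per task.
# Roles, tasks, and hour figures are hypothetical placeholders.
import pandas as pd

matrix = pd.DataFrame(
    [
        ["Map project data flows",      "R", "C", "I", 6],
        ["Privacy impact assessment",   "A", "R", "C", 10],
        ["Vendor due diligence review", "C", "R", "I", 4],
    ],
    columns=["task", "program_officer", "it_lead", "partner_org", "est_hours"],
)

print(matrix)
print("Total estimated hours across tasks:", matrix["est_hours"].sum())
```

Comparing the total estimated hours against actual staff availability is a quick way to spot when a proposed process is too heavy for a normal workload.
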
5. Bridge Policy and Behavior Change through Accompaniment & Capacity Building 

  • Integrating RD policy and practices requires behavior change and can feel technically intimidating to staff. Remember to reassure staff that no one (not even the best-resourced technology firms!) has responsible data mastered, and that perfection is not the goal.
  • In order to feel confident using new tools and approaches to make decisions, staff need knowledge to analyze information. Skills and knowledge required will be different according to role, so training should be adapted accordingly. While IT staff may need to know the ins and outs of network security, general program officers certainly do not. 
  • Accompany staff as they integrate RD processes into their work. Walk alongside them, answering questions along the way, but more importantly, helping staff build confidence to develop their own internal RD compass. That way the pool of RD champions will grow!

What approaches have you seen work in your organization?