
Creating and Measuring Impact in Digital Social and Behavior Change Communication 

By Jana Melpolder

People are accessing the Internet, smartphones, and social media like never before, and the social and behavior change communication community is exploring how digital tools and social media can be used to influence behavior. The MERL Tech session “Engaging for responsible change in a connected world: Good practices for measuring SBCC impact,” put together by Linda Raftree, Khwezi Magwaza, and Yvonne MacPherson, set out to dig into digital Social and Behavior Change Communication (SBCC).

Linda is the MERL Tech Organizer, but she also works as an independent consultant. She has worked as an Advisor for Girl Effect on research and digital safeguarding in digital behavior change programs with adolescent girls. She also recently wrote a landscaping paper for iMedia on Digital SBCC. Linda opened the session by sharing lessons from the paper, complemented by learning drawn from research and practice at Girl Effect.

Linda shares good practices from a recent landscape report on digital SBCC.

Digital SBCC is expanding due to smartphone access. In the work with Girl Effect, it was clear that even when girls in lower income communities did not own smartphones they often borrowed them. Project leaders should consider several relevant theories on influencing human behavior, such as social cognitive theory, behavioral economics, and social norm theory. Additionally, an ethical issue in SBCC projects is whether there is transparency about the behavior change efforts an organization is carrying out, and whether people even want their behaviors to be challenged or changed.

When it comes to creating an SBCC project, Linda shared a few tips:

  • Remember that users are largely unaware of the data risks involved in sharing personal information online.
  • We need to understand people’s habits. Being in tune with the local context is important, as is designing for habits, preferences, and interests.
  • Avoid being fooled by vanity metrics. Even if something got a lot of clicks, how do you know an action was taken afterwards?
  • Data can be sensitive. For some people, just looking at information online, such as facts on contraception, can put them at risk. Keep this in mind when developing content.
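The caution about vanity metrics can be made concrete by comparing clicks against a downstream action rather than reporting clicks alone. A minimal sketch in Python, using an invented event log (the user IDs and event names are purely illustrative):

```python
from collections import defaultdict

# Invented event log: (user_id, event) pairs such as a platform might
# record. "clicked" is the vanity metric; "completed_quiz" stands in
# for the behavior we actually care about.
events = [
    ("u1", "clicked"), ("u2", "clicked"), ("u3", "clicked"),
    ("u1", "completed_quiz"), ("u4", "clicked"),
]

users_by_event = defaultdict(set)
for user, event in events:
    users_by_event[event].add(user)

clicked = users_by_event["clicked"]
acted = users_by_event["completed_quiz"]

# Share of clickers who went on to take the action.
conversion = len(clicked & acted) / len(clicked)
print(f"{len(clicked)} clicks, {len(acted)} actions, "
      f"conversion rate {conversion:.0%}")
```

Here four users clicked but only one acted, so the "lots of clicks" story collapses to a 25 percent conversion rate. Real programs would define the downstream action against their theory of change.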

The session’s second presenter was Khwezi Magwaza, who has worked as a writer and as a radio, digital, and television producer. She worked as a content editor for Praekelt.org and served as Content Lead at Girl Effect. Khwezi is currently advising an International Rescue Committee platform in Tanzania that aims to support improved gender integration in refugee settings. Lessons Khwezi shared from working in digital SBCC included:

  • Sex education can be taboo, and community healthcare workers are often people’s first touch point. 
  • There is a difference between social behavior change and, more precisely, individual behavior change. 
  • People and organizations working in SBCC need to think outside the box and learn to measure behavior change in non-traditional ways. 
  • Just because something is free doesn’t mean people will like it. We need to aim for high quality, modern, engaging content when creating SBCC programs.
  • It’s also critical to hire the right staff. Khwezi suggested building up engineering capacity in house rather than relying entirely on external developers. Having a digital company hand something over that you’re then stuck with is like inheriting a dinosaur. Organizations need a real working relationship with their tech supplier and need to make sure the tech can grow and adapt as the program does.
Panelists discuss digital SBCC with participants.

The third panelist was Yvonne MacPherson, the U.S. Director of BBC Media Action, the BBC’s international NGO, which uses media and communication to further development. Yvonne noted that:

  • Donors often want an app, but it’s important to push back on solely digital platforms. 
  • Face-to-face contact and personal connections are vital in programs, and social media should not be the only form of communication within SBCC programs.
  • There is a need to learn from social media outreach experiences in various sectors, but the contexts in which INGOs and national NGOs work differ from the environments where most people with digital engagement skills have worked. More research is needed, and it’s critical to understand the local context and the behaviors of the populations we want to engage.
  • Challenges are emerging with so-called “dark channels” (WhatsApp, Facebook Messenger), where many people are moving and where it becomes difficult to track behaviors. Ethical issues have also surfaced: these channels offer rich content options, but researchers have yet to figure out how to obtain consent to use them for research without disrupting the dynamics within the channels.

I asked Yvonne whether, in her experience and research, she thought Instagram or Facebook influencers (like celebrities) influenced young girls more than local community members could. She said there’s really no single answer. Detailed ethnographic research is needed to understand the local context before making any decisions on the design of an SBCC campaign. It’s critical to understand the target group: what ages they are, where they come from, and other similar questions.

Resources for the Reader

To learn more about digital SBCC check out these resources, or get in touch with each of the speakers on Twitter:

Buckets of data for MERL

by Linda Raftree, Independent Consultant and MERL Tech Organizer

It can be overwhelming to get your head around all the different kinds of data and the various approaches to collecting or finding data for development and humanitarian monitoring, evaluation, research and learning (MERL).

Though there are many ways of categorizing data, lately I find myself conceptually organizing data streams into four general buckets when thinking about MERL in the aid and development space:

  1. ‘Traditional’ data. How we’ve been doing things for (pretty much) ever. Researchers, evaluators and/or enumerators are in relative control of the process. They design a specific questionnaire or a data gathering process and go out and collect qualitative or quantitative data; they send out a survey and request feedback; they do focus group discussions or interviews; or they collect data on paper and eventually digitize the data for analysis and decision-making. Increasingly, we’re using digital tools for all of these processes, but they are still quite traditional approaches (and there is nothing wrong with traditional!).
  2. ‘Found’ data. The Internet, digital data, and open data have made it much easier to find, share, and re-use datasets collected by others, whether internally in our own organizations, with partners, or more widely. These tend to be datasets collected in traditional ways, such as government or agency data sets. Where datasets are digitized, have proper descriptions and clear provenance, consent has been obtained for use/re-use, and care has been taken to de-identify them, they can eliminate the need to collect the same data all over again. Data hubs are springing up that aim to collect and organize these data sets to make them easier to find and use.
  3. ‘Seamless’ data. Development and humanitarian agencies are increasingly using digital applications and platforms in their work, whether bespoke or commercially available ones. Data generated by users of these platforms can provide insights that help answer specific questions about their behaviors, and the data is not limited to quantitative data. This data is normally used to improve application and platform experiences, interfaces, content, etc., but it can also provide clues into a host of other online and offline behaviors, including knowledge, attitudes, and practices. One cautionary note: because this data is collected seamlessly, users of these tools and platforms may not realize that they are generating data or understand the degree to which their behaviors are being tracked and used for MERL purposes (even if they’ve checked “I agree” on the terms and conditions). This has big implications for privacy that organizations should think about, especially as new regulations such as the EU’s General Data Protection Regulation (GDPR) come into force. The commercial sector is very good at this type of data analysis, but the development sector is only just starting to get more sophisticated at it.
  4. ‘Big’ data. In addition to data generated ‘seamlessly’ by platforms and applications, there is ‘big data’: data that exists on the Internet and can be ‘harvested’ if one only knows how. The term ‘big data’ describes the application of analytical techniques to search, aggregate, and cross-reference large data sets in order to develop intelligence and insights. (See this post for a good overview of big data and some of the associated challenges and concerns.) Data harvesting is the process of finding ‘unstructured’ content (message boards, a web page, a PDF file, tweets, videos, comments) and turning it into ‘semi-structured’ data so that it can be analyzed. (Estimates are that 90 percent of the data on the Internet exists as unstructured content.) Currently, big data seems more apt for predictive modeling than for looking backward at how well a program performed or what impact it had. Development and humanitarian organizations (self included) are only just starting to understand big data concepts and how big data might be used for MERL. (This is a useful primer.)
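The idea of turning unstructured content into semi-structured data can be illustrated with a small sketch using Python’s standard library, here pulling dates and hashtags out of free-text posts (the posts and field names are invented for illustration):

```python
import re

# Invented free-text posts standing in for unstructured content
# (comments, tweets, message-board entries).
posts = [
    "Clinic open day on 2024-03-15, great turnout #health",
    "New SBCC campaign launched 2024-04-02 #SBCC #health",
]

records = []
for text in posts:
    # Extract an ISO-style date and any hashtags from the raw text.
    date = re.search(r"\d{4}-\d{2}-\d{2}", text)
    hashtags = re.findall(r"#(\w+)", text)
    # Semi-structured record: some fields may be missing in real data.
    records.append({
        "date": date.group(0) if date else None,
        "hashtags": hashtags,
        "text": text,
    })

for r in records:
    print(r["date"], r["hashtags"])
```

Each record now has queryable fields while keeping the original text, which is the basic shape most harvesting pipelines aim for before analysis.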

Thinking about these four buckets of data can help MERL practitioners identify data sources and how they might complement one another in a MERL plan. Categorizing them this way can also help to map out how the different kinds of data will be responsibly collected/found/harvested, stored, shared, used, and maintained/retained/destroyed. Each type of data also has particular implications for privacy, consent, use/re-use, and how it is stored and protected. Planning for the use of different data sources and types can also help organizations choose the data management systems they need and identify the resources, capacities, and skill sets required (or needing to be acquired) for modern MERL.
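One lightweight way to act on this is to keep a small inventory mapping each bucket to its handling policy. A sketch, with entirely illustrative policy values (real plans would reflect an organization’s own legal and ethical requirements):

```python
# Illustrative data management inventory keyed by the four buckets.
# The consent notes and retention periods are made-up placeholders.
data_plan = {
    "traditional": {"consent": "collected at survey time", "retention_years": 5},
    "found":       {"consent": "verify re-use terms of the source", "retention_years": 3},
    "seamless":    {"consent": "platform terms plus privacy notice", "retention_years": 1},
    "big":         {"consent": "often unclear; needs ethics review", "retention_years": 1},
}

for bucket, policy in data_plan.items():
    print(f"{bucket}: {policy['consent']} "
          f"(retain {policy['retention_years']}y)")
```

Even a toy structure like this forces the questions the paragraph raises: who consented, to what, and for how long the data may be kept.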

Organizations and evaluators are increasingly comfortable using mobile phones and/or tablets for traditional data gathering, but they often are not using ‘found’ datasets. This may be because these datasets are not very ‘findable,’ because organizations are not creating them, because re-using data is not a common practice for them, because the data are of questionable quality or integrity or lack descriptors, or for a variety of other reasons.

The use of ‘seamless’ data is something that development and humanitarian agencies might want to get better at. Even though large swaths of the populations that we work with are not yet online, this is changing. And if we are using digital tools and applications in our work, we shouldn’t let that data go to waste if it can help us improve our services or better understand the impact and value of the programs we are implementing. (At the very least, we had better understand what seamless data the tools, applications and platforms we’re using are collecting so that we can manage data privacy and security of our users and ensure they are not being violated by third parties!)

Big data is also new to the development sector, and there may be good reasons it is not yet widely used. Many of the populations we work with are not producing much data, though this too is changing as digital financial services and mobile phone use become almost universal and smartphone use rises. Organizations normally require new knowledge, skills, partnerships, and tools to access and use existing big data sets or to do any data harvesting. Some say that big data, along with ‘seamless’ data, will one day replace our current form of MERL. As artificial intelligence and machine learning advance, who knows… (and it’s not only MERL practitioners who will be out of a job, but that’s a conversation for another time!)

Not every organization needs to be using all four of these kinds of data, but we should at least be aware that they are out there and consider whether they are of use to our MERL efforts, depending on what our programs look like, who we are working with, and what kind of MERL we are tasked with.

I’m curious how other people conceptualize their buckets of data, and where I’ve missed something or defined these buckets erroneously…. Thoughts?