All posts by Guest Post

Open Call for MERL Center Working Group Members!

By Mala Kumar, GitHub Social Impact, Open Source for Good

I lead a program on the GitHub Social Impact team called Open Source for Good — detailed in a previous MERL Tech post and (back when mass gatherings in large rooms were routine) at a lightning talk at the MERL Tech DC conference last year.

Before joining GitHub, I spent a decade wandering around the world designing, managing, implementing, and deploying tech for international development (ICT4D) software products. In my career, I found open source in ICT4D tends to be a polarizing topic, and often devoid of specific arguments. To advance conversations on the challenges, barriers, and opportunities of open source for social good, my program at GitHub led a year-long research project and produced a culminating report, which you can download here.

One of the hypotheses I posed at the MERL Tech conference last year, and that our research subsequently confirmed, is that IT departments and ICT4D practitioners in the social sector* have relatively less budgetary decision-making power than their counterparts at corporate IT companies. This makes it hard for IT and ICT4D staff to justify the use of open source in their work.

In the past year, Open Source for Good has solidified its strategy around helping the social sector more effectively engage with open source. To that aim, we started the MERL Center, which brings together open source experts and MERL practitioners to create resources to help medium and large social sector organizations understand if, how, and when to use open source in their MERL solutions.**

With the world heading into unprecedented economic and social change and uncertainty, we’re more committed than ever at GitHub Social Impact to helping the social sector effectively use open source and to building on the digital ecosystem that already exists.

Thanks to our wonderful working group members, the MERL Center has identified its target audiences, fleshed out the goals of the Center, set up a basic content production process, and is working on a few initial contributions to its two working groups: Case Studies and Beginner’s Guides. I’ll announce more details in the coming months, but I am also excited to announce that we’re committing funds to get a MERL Center public-facing website live to properly showcase the materials the MERL Center produces and how open source can support technology-enabled MERL activities and approaches.

As we ramp up, we’re now inviting more people to join the MERL Center working groups! If you are a MERL practitioner with an interest in or knowledge of open source, or you’re an open source expert with an interest in and knowledge of MERL, we’d love to have you! Please feel free to reach out to me with a brief introduction to yourself and your work, and I’ll help you get on-boarded. We’re excited to have you work with us!

*We define the “social sector” as any organization or company that primarily focuses on social good causes.

**Here’s our working definition of MERL.

 

8 Ways to Adapt Your M&E During the COVID-19 Pandemic

Guest post from Janna Rous. Originally published here.

So, all of a sudden you’re stuck at home because of the new coronavirus. You’re looking at your M&E commitments and your program commitments. Do you put them all on hold and postpone them until the coronavirus threat has passed and everything goes back to normal? Or is there a way to still get things done? This article reviews 8 ways you can adapt your M&E during the pandemic.

Here are a few ideas that you and your team might consider to make sure you can stay on track (and maybe even IMPROVE your MEAL practices), even if you are currently in the middle of a lockdown or think you might be going into one soon:

1. Phone Call Interviews instead of In-Person Interviews

Do you have any household assessments or baseline surveys or post-distribution monitoring that you had planned in the next 1 to 3 months? Is there a way that you can carry out these interviews by phone or WhatsApp calls?  This is the easiest and most direct way to carry on with your current M&E plan.  Instead of doing these interviews face-to-face, just get them on a call.  I’ve created a checklist to help you prepare for doing phone call interviews – click here to get the “Humanitarian’s Phone Call Interview Checklist”.  Here are a few things you need to think through to transition to a phone-call methodology:

  • You need phone numbers and names of people that need to be surveyed. Do you have these?  Or is there a community leader who might be able to help you get these?
  • You also need to expect that a LOT of people may not answer their phone. So instead of “sampling” people for a survey, you might want to just plan on calling almost everyone on that list.
  • Just like for a face-to-face interview, you need to know what you’re going to say. So you need to have a script ready for how you introduce yourself and ask for consent to do a phone questionnaire.  It’s best to have a structured interview questionnaire that you follow for every phone call, just like you would in a face-to-face assessment.
  • You also need to have a way to enter data as you ask the questions. This usually depends on what you’re most comfortable with – but I recommend preparing an ODK or KoboToolbox questionnaire, just like you would for an in-person survey, and filling it out as you do the interview over the phone (see the sketch after this list for one way to set such a form up). I find it easiest to enter the data into the KoboToolbox “Webform” instead of the mobile app, because I can type information faster into my laptop than thumb-type it into a mobile device.  But use what you have!
  • If you’re not comfortable in KoboToolbox, you could also prepare an Excel sheet for directly entering answers – but this will probably require a lot more data cleaning later on.
  • When you’re interviewing, it’s usually faster to type the answers in the language you’re interviewing in. If you need your final dataset to be in English, go back and do the translation after you’ve hung up the phone.
  • If you want a record of the interview, ask if you can record the phone call. When the person says yes, then just record it so you can go back and double check an answer if you need to.
  • Very practically – if you’re doing lots of phone calls in a day, it is easier on your arm and your neck if you use a headset instead of holding your phone to your ear all day!
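
If you prefer to set up the KoboToolbox or ODK questionnaire mentioned above programmatically rather than in the form builder, here is a minimal sketch of one way to do it. The example and the question names are my own illustration (not part of the original checklist); it uses Python and pandas to write an XLSForm file with a consent script and a few survey questions, following the standard XLSForm column names.

```python
import pandas as pd

# XLSForm "survey" sheet: one row per question.
# Column names (type, name, label, required, relevant) follow the XLSForm standard.
survey = pd.DataFrame([
    {"type": "note",              "name": "intro",    "label": "Hello, my name is ... I am calling on behalf of ... about ..."},
    {"type": "select_one yes_no", "name": "consent",  "label": "Do you agree to take part in this phone questionnaire?", "required": "yes"},
    {"type": "select_one yes_no", "name": "received", "label": "Did your household receive the distribution last month?", "relevant": "${consent} = 'yes'"},
    {"type": "integer",           "name": "hh_size",  "label": "How many people live in your household?",                "relevant": "${consent} = 'yes'"},
    {"type": "text",              "name": "comments", "label": "Any other comments?",                                    "relevant": "${consent} = 'yes'"},
])

# XLSForm "choices" sheet: answer options for the select_one questions.
choices = pd.DataFrame([
    {"list_name": "yes_no", "name": "yes", "label": "Yes"},
    {"list_name": "yes_no", "name": "no",  "label": "No"},
])

# Write both sheets to one Excel file (pandas uses openpyxl for .xlsx),
# ready to upload as a form to KoboToolbox or ODK.
with pd.ExcelWriter("phone_survey_xlsform.xlsx") as writer:
    survey.to_excel(writer, sheet_name="survey", index=False)
    choices.to_excel(writer, sheet_name="choices", index=False)
```

Once the form is uploaded, the web form link it generates can be opened on a laptop and filled in during each call.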

2. Collect Videos & Photos Directly from Households and Communities

When you’re doing any in-person MEAL activities, you’re always able to observe evidence. You can look around and SEE impact, you don’t just hear it through an interview or group discussion.  But when you’re doing M&E remotely, you can’t double-check to see what impact really looks like.  So I recommend:

  • Connect with as many beneficiaries and team members as possible through WhatsApp or another communication app and collect photos and videos of evidence directly from them.
  • Video – Maybe someone has a story of impact they can share with you through video. Or if you’re overseeing a Primary Health Care clinic, perhaps you can have a staff member walk you through the clinic with a video so you can do a remote assessment.
  • Pictures – Maybe you can ask everyone to send you a picture of (for example) their “hand washing station with soap and water” (if you’re monitoring a WASH program). Or perhaps you want evidence that the local water point is functioning.

3. Programme Final Evaluation

It’s a good practice to do a final evaluation review when you reach the end of a program.  If you have a program finishing in the next 1-3 months, and you want to do a final review to assess lessons learned overall, then you can also do this remotely!

  • Make a list of all the stakeholders that would be great to talk to: staff members, a few beneficiaries, government authorities (local and/or national), other NGOs, coordination groups, partner organizations, local community leaders.
  • Then go in search of either their phone numbers, their email addresses, their Skype accounts, or their WhatsApp numbers and get in touch.
  • It’s best if you can get on a video chat with as many of them as possible – because it’s much more personal and easy to communicate if you can see one another’s faces! But if you can just talk with audio – that’s okay too.
  • Prepare a semi-structured interview: a list of questions you want to talk through about the impact, what went well, and what could have gone better. And if something interesting comes up, don’t hesitate to add new questions on the spot or to skip questions that don’t make sense in the context.
  • You can also gather together any monitoring reports/analysis that was done on the project throughout its implementation period, plus pictures of the interventions.
  • Use all this information to create a final “lessons learned” evaluation document. This is a fantastic way to continually improve the way you do humanitarian programming.

4. Adapt Your Focus Group Discussion Plan

If everyone is at home because your country has imposed a lockdown, it will be very difficult to do a focus group discussion because… you can’t be in groups!  So, decide with your team whether it might be better to switch your monitoring activity from collecting qualitative data in group discussions to having one-on-one phone interviews with several people to collect the same information.

  • There are some dynamics that you will miss in one-to-one interviews, information that may only come out during group discussions. (Especially where you’re collecting sensitive or “taboo” data.) Identify what that type of information might be – and either skip those types of questions for now, or brainstorm how else you could collect the information through phone-calls.

5. Adapt Your Key Informant Interviews

If you normally carry out Key Informant Interviews, it would be a great idea to think about what “extra” questions you need to ask this month in the midst of the coronavirus pandemic.

  • If you normally ask questions around your program sector areas, think about just collecting a few extra data points about feelings, needs, fears, and challenges that are a reality in light of Covid-19. Are people facing any additional pressures due to the epidemic? Or are there any new humanitarian needs right now? Are there any upcoming needs that people are anticipating?
  • It goes without saying that if your Key Informant Interviews are normally in person, you’ll want to carry these out by phone for the foreseeable future!

6. What To Do About Third Party Monitoring

Some programs and donors use Third Party Monitors to assess their program results independently.  If you normally hire third party monitors, and you’ve got some third party monitoring planned for the next 1-3 months, you need to get on the phone with this team and make a new plan. Here are a few things you might want to think through with your third party monitors:

  • Can the third party carry out their monitoring by phone, in the same ways I’ve outlined above?
  • But also think through whether it is worth having a third party monitor assess results remotely. Is it better to postpone their monitoring, or to carry on regardless?
  • What is the budget implication? If cars won’t be used, are there any cost savings?  Is there any additional budget they’ll need for air-time costs for their phones?
  • Make sure there is a plan to gather as much photo and video evidence as possible (see point 2 above!)
  • If they’re carrying out phone call interviews, it’s also a good idea to record the calls where possible and with consent, so you have the records if needed.

7. Manage Expectations – The Coronavirus Pandemic May Impact Your Program Results

You probably didn’t predict that a global pandemic would occur in the middle of your project cycle and throw your entire plan off.  Go easy on yourself and your team!  The results you’d planned for might well not be achieved this year.  Your donors know this (because they’re probably also on lockdown).  You can’t control the pandemic, but you can control your response.  So proactively manage your own expectations, your manager’s expectations, and your donor’s expectations.

  • Get on a Skype or Zoom call with the project managers and review each indicator of your M&E plan. In light of the pandemic, what indicator targets will most likely change?
  • Look through the baseline numbers in your M&E plan – is it possible that the results at the END of your project might be worse than even your baseline numbers? For example, if you have a livelihoods project, it is possible that income and livelihoods will be drastically reduced by a country-wide lockdown.  Or are you running an education program?  If schools have been closed, then will a comparison to the baseline be possible?
  • Once you’ve done a review of your M&E plan, create a very simple revised plan that can be talked through with your program donor.

8. Talk To Your Donors About What You Can Do Remotely

When you’re on the phone with your donors, don’t only talk about revised program indicators.

  • Also talk about a revised timeframe – is there any flexibility on the program timeframe, or deadlines for interim reporting on indicators? What are their expectations?
  • Also talk about what you CAN do remotely. Discuss with them the plan you have for carrying on everything possible that can be done remotely.
  • And don’t forget to discuss financial implications of changes to timeframe.

 

Three Problems — and a Solution — for Data Viz in MERL and ICT4D

Guest post by Amanda Makulec, MPH, Data Visualization Society Operations Director

Just about everyone I know in the ICT4D and MERL communities has interacted with, presented, or created a chart, dashboard, infographic, or other data visualization. We’ve also all seen charts that mislead, confuse, or otherwise fall short of making information more accessible. 

The goal of the Data Visualization Society is to collect and establish best practices in data viz, fostering a community that supports members as they grow and develop data visualization skills. With more than 11.5K members from 123 countries on our first birthday, the society has grown faster than any of the founders imagined.

There are three reasons you should join the Data Visualization Society to improve your data visualizations in international development.

Self-service data visualization tools are everywhere, but that doesn’t mean we’re always building usable charts and graphs.

We’ve seen the proliferation of dashboards and enthusiasm for data viz as a tool to promote data-driven decision-making.

Just about anyone can make a chart if they have a table of data, thanks to the wide range of tools out there (Flourish, RAWgraphs, Datawrapper, Tableau, PowerBI…to name a few). Without a knowledge of data viz fundamentals though, it’s easy to use these tools to create confusing and misleading graphs.

A recent study on user-designed dashboards in DHIS2 (a commonly used data management and analysis platform in global health) found that “while the technical flexibility of [DHIS2] has been taken advantage of by providing platform customization training…the quality of the dashboards created face numerous challenges.” (Aprisa & Sebo, 2020).  

The researchers used a framework from Stephen Few to evaluate the frequency of five different kinds of ‘dashboard problems’ on 80 user-designed sample dashboards. The five problem ‘types’ included: context, dashboard layout, visualization technique, logical, and data quality. 

Of the 80 dashboards evaluated, 69 (83.1%) had at least one visualization technique problem (Aprisa & Sebo, 2020). Many of the examples shared in the paper could be easily addressed, like transforming the pie chart made of slices representing points in time into a line graph.
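
As a rough illustration (the numbers below are made up, not drawn from the study), the sketch uses Python and matplotlib to plot the same monthly values first as a pie chart of time points and then as the line chart the researchers recommend.

```python
import matplotlib.pyplot as plt

# Illustrative data only: facility visits reported for each month of a half-year.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
visits = [120, 135, 150, 142, 160, 171]

fig, (ax_pie, ax_line) = plt.subplots(1, 2, figsize=(10, 4))

# The problematic original: slices represent points in time, which hides the trend.
ax_pie.pie(visits, labels=months)
ax_pie.set_title("Before: pie chart of monthly values")

# The fix: a line chart makes the change over time immediately readable.
ax_line.plot(months, visits, marker="o")
ax_line.set_title("After: line chart of the same values")
ax_line.set_ylabel("Facility visits")

plt.tight_layout()
plt.show()
```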

With so many tools at our fingertips, how can we use them to develop meaningful, impactful charts and interactive dashboards?  Learning the fundamentals of data visualization is an excellent place to start, and DVS offers a free-to-join professional home to learn those fundamentals.

Many of the communities that exist around data visualization are focused on specific tools, which may not be relevant or accessible for your organization.

In ICT4D, we often have to be scrappy and flexible. That means learning how to work with open source tools, hack charts in Excel, and make decisions about which tool to use that are driven as much by resource availability as by functionality.

There are many great tool specific communities out there: TUGs, PUGs, RLadies, Stack Overflow, and more. DVS emerged out of a need to connect people looking to share best practices across the many disciplines doing data viz: journalists, evaluators, developers, graphic designers, and more. That means not being limited to one tool or platform, so we can look for what fits a given project or audience.

After joining DVS, you’ll receive an invite to the Society’s Slack, a community “workspace” with channels on different topics and for connecting different groups of people within the community.  You can ask questions about any data viz tool on the #topic-tools channel, and explore emerging and established platforms with honest feedback on how other members have used them in their work.

Data visualization training often means one-off workshops. Attendees leave enthusiastic, but then don’t have colleagues to rely on when they run into new questions or get stuck.

Data visualization isn’t consistently taught as a foundation skill for public health or development professionals.

In university, there may be a few modules within a statistics or evaluation class, but seldom are there dedicated, semester-long classes on visualization; those are reserved for computer science and analytics programs (though this seems to be slowly changing!).  Continuing education in data viz usually means short workshops, not long-term mentoring relationships.

So what happens when people are asked to “figure it out” on the job? Or attend a two day workshop and come away as a resident data viz expert?  

Within DVS, our leadership and our members step up to answer questions and be that coach for people at all stages of learning data visualization. We even have a dedicated feedback space within Slack to share examples of data viz work in progress and get feedback.

DVS also enables informal connections on a wide range of topics. Go to #share-critique to post work-in-progress visualizations and seek feedback from the community. We also host quarterly challenges where you can get hands-on practice with provided data sets to develop your data viz skills, and we plan to launch a formal mentorship program in 2020.

Join DVS today to get its benefits – members from Africa, Asia, and other underrepresented areas are especially encouraged to join us now!

Have any questions? Or ideas on ways DVS can support our global membership base? Find me on Twitter – my DMs are open.

A Toolkit to Measure the Performance and Labour Conditions in Small and Medium Enterprises

Guest post from ILO The Lab

Performance measurement is critical not only to see whether enterprise development projects are making a difference, but also so that small and medium enterprises (SMEs) themselves can continuously improve. As the saying goes: “If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.” In other words, the measurement of performance is the first step towards the management of performance.

Enterprise development projects need to measure changes in SME performance, not only to report results to project funders, but also to help SMEs continuously improve. But measuring the performance of SMEs in the context of a developing economy brings special considerations, including:

  • Pressing capacity challenges in record keeping, data collection, and access to modern management techniques – along with the technology that drives them. Most SMEs have some kind of performance measurement system; however, these tend to be very basic.
  • Intensely competitive environments where there is little market differentiation – meaning most SMEs have to tussle just to survive, reducing the incentive to collect and use data. Some countries have 5-year survival rates as low as 10%.
  • Flatter management structures and less bureaucracy, meaning SMEs can – in theory at least – be more agile and adaptive in using performance information to improve.
  • SMEs’ dependence on productivity gains to maximise long-term competitiveness and profitability. In the absence of intellectual property or technology as a source of comparative advantage, labour productivity is often critical to sustaining SME performance.

Moreover, enterprise development projects are facing increasing pressure to demonstrate that their work is leading to qualitative improvements in people’s terms and conditions of employment. As researchers have noted, it is “not only the number, but also the quality of jobs matters to poverty alleviation and economic development”.

For many SMEs in the global south, workers are a critical determinant of business success. Since SMEs often undertake labour-intensive activities, they rely on a supply of labour – with varying skills requirements – to produce their goods and services. Labour and employment issues are frequently included in non-financial performance measurement systems, but they often focus only on the most easily quantifiable elements, such as the number of accidents. However, labour conditions refer to the working environment and all circumstances affecting the workplace, including working hours, physical aspects, and the rights and responsibilities of SMEs towards their workers. Many aspects of this work environment are covered by national labour laws, which in turn are shaped by the eight fundamental ILO conventions.

By improving labour conditions, SMEs can improve their business outcomes. Better health and safety practices can boost productivity and employee retention. Companies have shown growth in sales per employee workforce hour following targeted training programmes. As recent research has demonstrated, jobs with decent wages, predictable hours, sufficient training, and opportunities for advancement can be a source of competitive advantage. For many businesses, thinking about employee working conditions has shifted from a way to minimize risk to a competitive advantage.

Conversely, bad conditions can be bad for business: Poor health and safety practices can result in fines and slow task completion. Industrial action and absenteeism can lead to prolonged disruption to operations. An SME owner says, “You have to have an environment where people are happy working, where they cooperate well, interact well. If you have problems in the way people work, it could terribly affect the performance”.

Against this backdrop of complex challenges, the International Labour Organization has launched the ILO SME Measurement Toolkit.

This Toolkit is a practical resource to help practitioners and projects support SMEs in deciding which aspects of SME performance (productivity, working conditions, etc.) to measure, as well as how to measure them. It includes:

  • More than 250 indicators, including a set of actionable metrics drawn from existing sustainability standards, company codes of conduct, and international development monitoring and evaluation frameworks
  • Methods outlining different tools and data collection techniques
  • Real-life examples of SME measurement in a developing country context

We’d love to hear your comments, questions and suggestions about the Toolkit. Drop us an email at thelab@ilo.org!


Open Call for ideas: 2020 GeOnG forum

Guest post by Nina Eissen from CartONG, organizers of the GeOnG Forum.

The 7th edition of the GeOnG Forum on Humanitarian and Development Data will take place from November 2nd to 4th, 2020 in Chambéry (France). CartONG is launching an Open Call for Suggestions.

Organized by CartONG every two years since 2008, the GeOnG forum gathers humanitarian and development actors and professionals specialized in information management. The GeOnG is dedicated to addressing issues related to data in the humanitarian and development sectors, including topics related to mapping, GIS, data collection, and information management. To this end, the forum is designed to allow participants to debate current and future stakes, introduce relevant and innovative solutions, and share experience and best practices. The GeOnG is one of the biggest independent forums on the topic in Europe, with an average of 180 participants from 90 organizations over the last three editions.

The main theme of the 2020 edition will be: “People at the heart of Information Management: promoting responsible and inclusive practices”. More information about the choice of this main theme is available here.

We also invite you to discover the 2020 GeOnG teaser here: 

To submit your ideas, please use this online form. The Open Call for Suggestions will remain open until the end of May 2020.

A few topics we hope to see covered during the 2020 GeOnG Forum:

  • How to better integrate vulnerable populations into the data life cycle, with a focus on ensuring that the data collected is particularly representative of populations at risk of discrimination.
  • How to implement the Do No Harm approach in relation to data: simple security & protection measures, streamlining of data privacy rights in programming, algorithmization of data processing, etc.
  • What is the role of actors often considered ‘less direct stakeholders’ of humanitarian and development data (such as civil society actors, governments, etc.), and how can we identify clearer pathways to share the data that should be shared for the common good while protecting the data that clearly should not be shared?
  • How to promote data literacy beyond NGO information management and M&E staff to facilitate data-driven decision making.
  • How to ensure that tools and solutions used and promoted by humanitarian and development organizations are also sufficiently user-friendly and inclusive (for instance by limiting in-built biases and promoting human-centric design).
  • Beyond the main theme of the conference, don’t hesitate to send us any idea that you think might be relevant for the next GeOnG edition (about tools, methodologies, lessons learned, feedback from the field, etc.)!

Registration for the conference will open in the Spring of 2020.

 

 

Measuring Local Ownership in International Development Projects

by Rachel Dickinson, Technical Officer for Research and Learning, Root Change

“Localization”, measuring local ownership, USAID’s Journey to Self-Reliance… We’re all talking about these ideas and policies, and trying to figure out how to incorporate them in our global development projects, but how do we know if we are making progress on these goals? What do we need to measure?

Root Change and Keystone Accountability, under a recent USAID Local Works research grant, created the Pando Localization Learning System (LLS) as both a tool and a methodology for measuring and tracking local ownership within projects in real time. Pando LLS is an online platform that uses network maps and feedback surveys to assess system health, power dynamics, and collaboration within a local development system. It gives development practitioners simple, easy-to-use visuals and indicators, which can be shared with stakeholders and used to identify opportunities for strengthening local development systems.

We launched the Pando platform at MERL Tech DC in 2018, and this year we wanted to share (and get reactions to) a new set of localization measures and a reflective approach we have embedded in the tool. 

Analysis of local ownership on Pando LLS is organized around four key measures. Under each we have determined a series of indicators pulling from both social network analysis (SNA) and feedback survey questions. For those interested in geeking out on the indicators themselves, visit our White Paper on the Pando Localization Learning System (LLS), but the four measures are: 

1) Leadership measures whether local actors can voice concerns, set priorities and define success in our projects. It measures whether we, as outsiders, are soliciting input from local actors. In other words, it looks at whether project design and implementation is bottom-up.

2) Mutuality measures whether strong reciprocal, or two-way, relationships exist. It measures whether we, as external actors, respond to and act on feedback from local actors. It’s the respect and trust required for success in any interaction. 

3) Connectivity measures whether the local system motivates and incentivizes local actors to work together to solve problems. It measures whether we, as program implementers, promote collaboration and connection between local actors. It asks whether the local system is actually improving, and if we are playing the right roles. 

4) Financing measures whether dependency on external financial resources is decreasing, and local financial opportunities are becoming stronger. It measures whether we, as outsiders, are preparing local organizations to be more resilient and adaptive. It explores the timeless question of money and resources. 

Did you notice how each of these measures assesses not only local actors and their system, but also our role as outsiders? This takes us to the reflective approach.

The Pando LLS approach emphasizes dialogue with system actors and self-reflection by development practitioners. It pushes us to question our assumptions about the systems where we work and tasks us with developing project activities and M&E plans that involve local actors. The theories behind the approach can also be found in our White Paper, but here are the basic steps: 

  • Listen to local actors by inviting them to map their relationships, share feedback, and engage in dialogue about the results;
  • Co-create solutions and learn through short-term experiments that aim to improve relationships and strengthen the local system;
  • Incorporate what’s working back into development projects and celebrate failures as progress; and 
  • Repeat the listen, reflect, and adapt cycles 3-4 times a year to ensure each one is small and manageable.

What do you think of this method for measuring and promoting local ownership? Do we have the measures right? How are you measuring local ownership in your work? Would you be interested in testing the Pando LLS approach together? We’d love to hear from you! Email me at rdickinson@rootchange.org to share your feedback, questions, or ideas! 

Tech Is Easy, People Are Hard: Behavioral Design Considerations to Improve Mobile Engagement

By Cathy Richards

Mobile platforms are often a go-to when it comes to monitoring and evaluation in developing communities and markets. One provider of these platforms, Echo Mobile, is often asked, “What sort of response rate can I expect for my SMS survey?” or, “What percentage of my audience will engage with my IVR initiative?” In this session at MERL Tech DC in September, Boris Maguire, CEO of Echo Mobile, walked participants through case studies highlighting that the answer largely depends on the project’s individual context and that there is ultimately no one-size-fits-all solution.

Echo Mobile is a platform that allows users to have powerful conversations over SMS, voice, and USSD for purposes such as monitoring and evaluation, field reporting, feedback, information access, market research and customer service. The platform’s user segments include consumer goods (20%), education and health (16%), M&E/Research (15%), agriculture and conservation (14%), finance and consulting (13%) and media and advocacy (7%). Its user types are primarily business (35%), non-profit (31%) and social enterprises (29%). 

The team at Echo Mobile has learned that regardless of the chosen mobile engagement technology, achieving MERL goals often rests on the design and psychology behind the mobile engagement strategy – the content, tone, language, and timing of communications and the underlying incentives of the audience. More often than not, the most difficult parts in mobile engagement are the human aspects (psychological, emotional, strategic) rather than the technological implementation. 

Because of this, Echo Mobile chose to dive deeper into the factors they believed influenced mobile engagement the most. Some of their beliefs included:

  • Responder characteristics: Who are you trying to engage with? It’s important to figure out who you are engaging with and tailor your strategy to them.
  • Social capital and trust: Do these responders have a reason to trust you? What is the nature of your relationship with them?
  • Style, tone & content: What specific words are you using to engage with them? Are you showing that you want to know more and that you care about them?
  • Convenience: What is the level of effort, time and resources that responders have to invest in order to engage with your organization?
  • Incentives/relevance: Do they have a reason to engage with your organization? Do they think you’ll understand them better? Will they get more of what they need?

Through informal analysis, Echo Mobile found that the factors most highly correlated with high rates of engagement were the time of day at which recipients receive the messaging, followed by reminders to engage. Financial incentives were found to be the least effective. However, the case studies show that context is ultimately the most important component of a mobile engagement strategy.

In the first case study, a BBOXX team in Rwanda sought to understand the welfare impact of solar consumption amongst their customers via SMS surveys. They first ran a set of small experiments, modifying survey financial incentives, timing, length, and language to see which moved the needle on response rates and compare the results to what customers told them in focus groups. In this case, Echo Mobile found that reminders in the morning and surveys in the evening nearly doubled their response rates. The choice to opt or dive in also affected response rates.
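
For teams that want to run similar micro-experiments, here is a minimal sketch of comparing response rates across experimental arms with Python and pandas. The column and arm names are hypothetical illustrations, not Echo Mobile’s or BBOXX’s actual data structure.

```python
import pandas as pd

# Hypothetical export of survey invitations: one row per recipient,
# with the experimental arm they were assigned to and whether they responded.
invites = pd.DataFrame({
    "arm":       ["morning_reminder", "morning_reminder", "evening_survey",
                  "evening_survey", "no_reminder", "no_reminder"],
    "responded": [1, 0, 1, 1, 0, 0],
})

# Response rate and sample size per arm, sorted from highest to lowest rate.
summary = invites.groupby("arm")["responded"].agg(rate="mean", n="count")
print(summary.sort_values("rate", ascending=False))
```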

In the second case study, a UN agency nearly doubled SMS engagement rates from 40,000 Kenyan teachers by dropping financial incentives and tweaking the structure, tone, and content of their messaging. In this case, incentive amounts once again did little to increase engagement; rather, the ability to opt or dive in, reminders, and content/tone made the biggest difference.

In short, Echo Mobile’s biggest takeaways are that:

  • Convenience is king
  • One can harass but not bore
  • Financial incentives are often overrated

Several participants also shared their experiences with mobile engagement and cited factors such as survey length and consent as important. 

Visualizing Your Network for Adaptive Program Decision Making

By Alexis Banks, Jennifer Himmelstein, and Rachel Dickinson

Social network analysis (SNA) is a powerful tool for understanding the systems of organizations and institutions in which your development work is embedded. It can be used to create interventions that are responsive to local needs and to measure systems change over time. But, what does SNA really look like in practice? In what ways could it be used to improve your work? Those are the questions we tackled in our recent MERL Tech session, Visualizing Your Network for Adaptive Program Decision Making. ACDI/VOCA and Root Change teamed up to introduce SNA, highlight examples from our work, and share some basic questions to help you get started with this approach.

MERL Tech 2019 participants working together to apply SNA to a program.

SNA is the process of mapping and measuring relationships and information flows between people, groups, organizations, and more. Using key SNA metrics enables us to answer important questions about the systems where we work. Common SNA metrics include (learn more here):

  • Reachability, which helps us determine if one actor, perhaps a local NGO, can access another actor, such as a local government;
  • Distance, which is used to determine how many steps, or relationships, there are separating two actors;
  • Degree centrality, which is used to understand the role that a single actor, such as an international NGO, plays in a system by looking at the number of connections with that organization;
  • Betweenness, which enables us to identify brokers or “bridges” within networks by identifying actors that lie on the shortest path between others; and
  • Change Over Time, which allows us to see how organizations and relationships within a system have evolved.
Using betweenness to address bottlenecks.
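
To make these metrics concrete, here is a minimal sketch that computes them on a toy partner network using the open source Python library networkx. The library choice and the actor names are illustrative assumptions, not the tooling Root Change or ACDI/VOCA actually use (their tools are described later in this post).

```python
import networkx as nx

# Toy network: who reports working with whom (directed relationships).
G = nx.DiGraph()
G.add_edges_from([
    ("Local NGO A", "International NGO"),
    ("Local NGO B", "International NGO"),
    ("International NGO", "Local Government"),
    ("Local NGO A", "Local NGO B"),
    ("Community Group", "Local NGO B"),
])

# Reachability: can Local NGO A reach the Local Government at all?
print(nx.has_path(G, "Local NGO A", "Local Government"))

# Distance: how many steps separate them?
print(nx.shortest_path_length(G, "Local NGO A", "Local Government"))

# Degree centrality: how connected is each actor relative to the size of the network?
print(nx.degree_centrality(G))

# Betweenness: which actors sit on the shortest paths between others (the brokers)?
print(nx.betweenness_centrality(G))
```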

SNA in the Program Cycle

SNA can be used throughout the design, implementation, and evaluation phases of the program cycle.

Design: Teams at Root Change and ACDI/VOCA use SNA in the design phase of a program to identify initial partners and develop an early understanding of a system–how organizations do or do not work together, what barriers are preventing collaboration, and what strategies can be used to strengthen the system.

As part of the USAID Local Works program, Root Change worked with the USAID mission in Bosnia and Herzegovina (BiH) to launch a participatory network map that identified over 1,000 organizations working in community development in BiH, many of which had been previously unknown to the mission. It also provided the foundation for a dialogue with system actors about the challenges facing BiH civil society.

To inform project design, ACDI/VOCA’s Feed the Future Tanzania NAFAKA II Activity, funded by USAID, conducted a network analysis to understand the networks associated with village-based agricultural advisors (VBAAs) – what services they were already offering to farmers, which had the most linkages to rural actors, which actors were serving as bottlenecks, and more. This helped the project identify which VBAAs to work with through small grants and technical assistance (e.g. key actors), and what additional linkages needed to be built between VBAAs and other types of actors.

NAFAKA II Tanzania

Implementation: We also use SNA throughout program implementation to monitor system growth, increase collaboration, and inform learning and program design adaptation. ACDI/VOCA’s USAID/Honduras Transforming Market Systems Activity uses network analysis as a tool to track business relationships created through primary partners. For example, one such primary partner is the Honduran chamber of tourism, which facilitates business relationships through group training workshops and other types of technical assistance. They can then follow up on these new relationships to gather data on indirect outcomes (e.g. jobs created, sales, and more).

Root Change used SNA throughout implementation of the USAID funded Strengthening Advocacy and Civic Engagement (SACE) program in Nigeria. Over five years, more than 1,300 organizations and 2,000 relationships across 17 advocacy issue areas were identified and tracked. Nigerian organizations came together every six months to update the map and use it to form meaningful partnerships, coordinate advocacy strategies, and hold the government accountable.

SACE participants explore a hand drawn network map.

Evaluating Impact: Finally, our organizations use SNA to measure results at the mid-term or end of project implementation. In Kenya, Root Change developed the capacity of Aga Khan Foundation (AKF) staff to carry out a baseline, and later an end-of-project network analysis of the relationships between youth and organizations providing employment, education, and entrepreneurship support. The latter analysis enabled AKF to evaluate growth in the network and the extent to which gaps identified in the baseline had been addressed.

AKF’s Youth Opportunities Map in Kenya

The Feed the Future Ghana Agricultural Development and Value Chain Enhancement II (ADVANCE II) Project, implemented by ACDI/VOCA and funded by USAID, leveraged existing database data to demonstrate the outgrower business networks that were established as a result of the project. This was an important way of demonstrating one of ADVANCE II’s major outcomes – creating a network of private service providers that serve as resources for inputs, financing, and training, as well as hubs for aggregating crops for sale.

Approaches to SNA
There is a plethora of tools to help you incorporate SNA into your work, ranging from bespoke software custom-built for each organization to free, open source applications.

Root Change uses Pando, a web-based, participatory tool that uses relationship surveys to generate real-time network maps that use basic SNA metrics. ACDI/VOCA, on the other hand, uses unique identifiers for individuals and organizations in its routine monitoring and evaluation processes to track relational information for these actors (e.g. cascaded trainings, financing given, farmers’ sales to a buyer, etc.), together with an in-house SNA tool.

Applying SNA to Your Work
What do you think? We hope we’ve piqued your interest! Using the examples above, take some time to consider ways that SNA could be embedded into the design, implementation, or evaluation stages of your work, using this worksheet. If you get stuck, feel free to reach out (Alexis Banks, abanks@rootchange.org; Rachel Dickinson, rdickinson@rootchange.org; Jennifer Himmelstein, JHimmelstein@acdivoca.org)!

Practicing Safe Monitoring and Evaluation in the 21st Century

By Stephen Porter. Adapted from the original post published here.

Monitoring and evaluation practice can do harm. It can harm:

  • the environment by prioritizing economic gain over species that have no voice
  • people who are invisible to us when we are in a position of power
  • people and communities, by asking for information that can then be misused.

In the quest for understanding What Works, the focus is often placed too narrowly on program goals rather than on the safety of people. A classic example in the environmental domain is the use of DDT: “promoted as a wonder-chemical, the simple solution to pest problems large and small. Today, nearly 40 years after DDT was banned in the U.S., we continue to live with its long-lasting effects.” The original evaluation of its effects had failed to identify harm and emphasized its benefits. Only when harm to the ecosystem became more apparent was evidence presented in Rachel Carson’s book Silent Spring. We should not have to wait for failure to be so apparent before evaluating for harm.

Join me, Veronica Olazabal, Rodney Hopson, Dugan Fraser and Linda Raftree, for a session on “Institutionalizing Doing no Harm in Monitoring and Evaluation” on Thursday, Nov 14, 2019, 8-9am, Room CC M100 H, at the American Evaluation Association Conference in Minneapolis.

Ethical standards have been developed for evaluators, which are discussed at conferences and included in professional training. Yet institutional monitoring and evaluation practices still struggle to fully get to grips with the reality of harm in the pressure to get results reported. If we want monitoring and evaluation to be safer for the 21st Century we need to shift from training and evaluator-to-evaluator discussions to changing institutional practices.

At a workshop convened by Oxfam and the Rockefeller Foundation in 2019, we sought to identify core issues that could cause harm and to get to grips with areas where institutions need to change practices. The workshop brought together partners from UN agencies, philanthropies, research organizations, and NGOs, and sought to give substance to these issues. One participant noted that though the UNEG Norms and Standards and UNDP’s evaluation policy are designed to make evaluation safe, in practice there is little consideration given to capturing or understanding the unintended or perverse consequences of programs or policies. The workshop explored this and other issues and identified three areas of practice that could help to reframe institutional monitoring and evaluation in a practical manner.

1. Data rights, privacy and protection: 

In working on rights in the 21st Century, data and information are some of the most important ‘levers’ pulled to harm and disadvantage people. Oxfam has had a Responsible Data in Program policy in place since 2015, which goes some way towards recognizing this. But we know we need to more fully implement data privacy and protection measures in our work.

At Oxfam, work is continuing to build a rights-based approach which already includes aligned confederation-wide Data Protection Policies, implementation of responsible data management policy and practices and other tools aligned with the Responsible Data Policy and European Privacy law, including a responsible data training pack.

Planned and future work includes stronger governance, standardized baseline measures of privacy and information security, and communications, guidance, and change management. This includes changes in evaluation protocols related to how we assess risk to the people we work with, who gets access to the data, and how we ensure consent for how the data will be used.

This is a start, but consistent implementation is hard. And if we aren’t competent at operating the controls within our own reach, it becomes harder to call others out when they cause harm by misusing theirs.

2. Harm prevention lens for evaluation

The discussion highlighted that evaluation has not often sought to understand the harm of practices or interventions. When it does, however, the results can powerfully shed new light on an issue. A case that starkly illustrates potential under-reporting is that of the UN Military Operation in Liberia (UNMIL). UNMIL was put in place with the aim “to consolidate peace, address insecurity and catalyze the broader development of Liberia”. Traditionally we would evaluate this objective. Taking a harm lens, we might instead evaluate the sexual exploitation and abuse related to the deployment. The reporting system highlights low levels of abuse: 14 cases from 2007 to 2008 and 6 in 2015. A study by Beber, Gilligan, Guardado and Karim, however, estimated through a representative randomized survey that more than half of eighteen- to thirty-year-old women in greater Monrovia have engaged in transactional sex and that most of them (more than three-quarters, or about 58,000 women) have done so with UN personnel, typically in exchange for money.

Changing evaluation practice should not just focus on harm in human systems, but also provide insight into the broader ecosystem. Institutionally, there needs to be championship for identifying harm within and through monitoring and evaluation practice, and for changing practices accordingly.

3. Strengthening safeguarding and evaluation skills

We need to resource teams appropriately so they have the capacity to be responsive to harm and reflective on the potential for harm. This is both about tools and procedures and conceptual frames.

Tools and procedures can include, for example:

  • Codes-of-conduct that create a safe environment for reporting issues
  • Transparent reporting lines to safeguarding/safe programming advisors
  • Training based on actual cases
  • Safe data protocols (see above)

All of these fall by the wayside, however, if the values and concepts that guide implementation are absent. At the workshop, Rodney Hopson, drawing on environmental policy and concepts of ecology, presented a frame for increasing evaluators’ usefulness in complex ecologies where safeguarding issues are prevalent. It emphasizes:

  • Relationships – the need to identify and relate to key interests, interactions, variables and stakeholders amid dynamic and complex issues in an honest manner that is based on building trust.
  • Responsibilities – acting with propriety; doing what is proper, fair, right, and just in evaluation, measured against standards.
  • Relevance – being accurate and meaningful technically, culturally and contextually.

Safe monitoring and evaluation in the 21st Century does not just ask ‘What works?’; it needs to relentlessly ask ‘How can we work differently?’. This includes understanding the connections between harm in human systems and harm in environmental systems. The three areas noted here are the start of a conversation and a challenge to institutions to think more about what it means to be safe in monitoring and evaluation practice.

Planning to attend the American Evaluation Association Conference this week? Join us for the session “Institutionalizing Doing no Harm in Monitoring and Evaluation” on Thursday, Nov 14, 2019, from 8:00 to 9:00 AM in room CC M100 H.

Panelists will discuss ideas to better address harm with regard to: (i) harm identification and mitigation in evaluation practice; (ii) responsible data practice; (iii) understanding harm in an international development context; and (iv) evaluation in complex ecologies.

The panel will be chaired by Veronica M. Olazabal (Senior Advisor & Director, Measurement, Evaluation and Organizational Performance, The Rockefeller Foundation), with speakers Stephen Porter (Evaluation Strategy Advisor, World Bank), Linda Raftree (Independent Consultant, Organizer of MERL Tech), Dugan Fraser (Professor & Director, CLEAR-AA, University of the Witwatersrand, Johannesburg) and Rodney Hopson (Professor of Evaluation, Department of Educational Psychology, University of Illinois Urbana-Champaign). View the full program here: https://lnkd.in/g-CHMEj

Ethics and unintended consequences: The answers are sometimes questions

by Jo Kaybryn

Our MERL Tech DC session, “Ethics and unintended consequences of digital programs and digital MERL” was a facilitated discussion about some of the challenges we face in the Wild West of digital and technology-enabled MERL and the data that it generates. Here are some of the things that stood out from discussions with participants and our experience.

Purposes

Sometimes we are not clear on why we are collecting data.  ‘Just because we can’ is not a valid reason to collect or use data and technology.  What purposes are driving our data collection and use of technology? What is the problem we are trying to solve? A lack of specificity can allow us to stray into speculative data collection — if we’re collecting data on X, then it’s a good opportunity to collect data on Y “in case we need it in the future”. Do we ever really need it in the future? And if we do go back to it, we often find that because we didn’t collect the data on Y with a specific purpose, it’s not the “right” data for our needs. So, let’s always ask ourselves: why are we collecting this data, and do we really need it?

Tensions

Projects are increasingly under pressure to be more efficient and cost-effective in their data collection, yet the need or desire to conduct more robust assessments can require the collection of data on multiple dimensions within a community. These two dynamics are often in conflict with each other. Here are three questions that can help guide our decision making:

  • Are there existing data sets that are “good enough” to meet the M&E needs of a project? Often there are, and they are collected regularly enough to be useful. Lean on partners who understand the data space to help map out what exists and what really needs to be collected. Leverage partners who are innovating in the data space – can machine learning and AI-produced data meet 80% of your needs? If so, consider it.
  • What data are we critically in need of to assess a project? Build an efficient data collection methodology that considers respondent burden and potentially includes multiple channels for receiving responses to increase inclusivity.
  • What will the data be used for? Sensitive contexts and life or death decisions require a different level of specificity and periodicity than less sensitive projects. Think about data from this lens when deciding which information to collect, how often to collect it, and who to collect it from.

Access

It is worth exploring questions of access in our data collection practices. Who has access to the data and the technology?  Do the people whom the data is about have access to it?  Have we considered the harms that could come from the collection, storage, and use of data? For instance, while it can be useful to know where all the clients accessing a pregnancy clinic are located in order to design better services, an unintended consequence may be that others gain the ability to identify people who are pregnant, which those people might not want others to know. What can we do to protect the privacy of vulnerable populations? Also, going digital can be helpful, but if a person or community implicated in a data collection effort does not have access to technology or to a charging point, are we not just increasing or reinforcing inequality?

Transparency

While we often advocate for transparency in many parts of our industry, we are not always transparent about our data practices. Are we willing to tell others, to tell community members, why we are collecting data, why we are using technology, and how we are using information?  If we are clear on our purpose but not willing for it to be transparent, that might be a good reason to reconsider. Yet transparency does not equate to accountability, so what are the mechanisms for ensuring greater accountability towards the people and communities we seek to serve?

Power and patience

One of the issues we’re facing is power imbalances. The demands that are made of us from donors about data, and the technology solutions that are presented to us, all make us feel like we’re not in control. But the rules haven’t been written yet — we get to write them.

One of the lessons from the responsible data workshop leading up to the conference was that organisations can get out in front of demands for data by developing their own data management and privacy policies. From this position it is easier to enter into dialogues and negotiations, with the organisational policy as your backstop. Therefore, it is worth asking, Who has power? For what? Where does it reside and how can we rebalance it?

Literacy underpins much of this – linguistic, digital, identity, ethical literacy.  Often when it comes to ‘digital’ we immediately fall under the spell of the tyranny of the urgent.  Therefore,  in what ways can we adopt a more ‘patient’ or ‘reflective’ practice with respect to digital?

For more information, see: