
Program Data: Practices and needs of francophone CSOs

Guest post from the team at CartONG

CartONG has just released a new study on “Program Data: The silver bullet of the humanitarian and aid sectors? Panorama of the practices and needs of francophone CSOs”.

What place for program data management in a sector in the throes of a digital revolution?

Mirroring our society, the Humanitarian Aid and International Development (HAID) sector is in the throes of a digital revolution. Whilst the latter is undeniably impacting day-to-day management of Civil Society Organisations (CSOs) – whether in their administrative duties or in those related to fundraising – it is also generating radical changes in actions being implemented for the benefit of populations.

Although it has become a key element in the coordination of operations, data management remains somewhat invisible within the sector, in spite of its many ethical, financial and human implications, and above all its impact on project quality. In the field and at headquarters, project teams are therefore devoting an increasing amount of time to data management, often at the expense of other activities. Poorly trained and ill-equipped, these teams often underperform on these tasks, without the topic necessarily being regarded as an operational issue by most CSOs.

Program data management – also known as Information Management (IM) – is both a topical issue and the source of numerous debates within francophone Humanitarian Aid and International Development CSOs.

A unique study in the world of French-speaking CSOs

To our knowledge, no equivalent study examining the practices of francophone CSOs as a whole, or identifying their needs in terms of program data management, has yet been carried out. A number of analyses and articles do exist, yet these generally approach the subject either from a technical standpoint or as if these practices were still innovations for the sector, and thus with limited hindsight.

The organisational dimension is moreover relatively unexplored and very little consolidated data at the inter-CSO level is available. Lastly, although CSOs have been handling large amounts of data for almost 20 years, there remains much debate: what level of attention and investment should data management be subject to? Does the activity require a dedicated person in-house and, if so, which profile should be given priority? In fact, where does the scope of data management begin and where does it end? Do CSOs working in humanitarian situations have different needs than those working in a development context? Do differences in approach exist between francophone and anglophone CSOs, the latter often deemed more advanced in the field?

Based on a survey of CSOs, a literature review and interviews with key stakeholders, this study designed by CartONG aims to explore and provide preliminary answers to these questions, and to make a valuable contribution to the debate on data management. To this end, we have sought to synthesise and formalise considerations that are often scattered and at times contradictory.

The study is also available in French here.

What’s Program Data Management?

Based on the concept of Information Management (IM), program data management is a term whose scope of application continues to fluctuate and whose definition remains unclear. To help readers take ownership of the concept, this new study offers an accessible definition (synthesised in the diagram below) and a relatively narrow scope of application (see illustration below), at the juncture of Monitoring & Evaluation (M&E), Information and Communications Technologies for Development (ICT4D), information systems and knowledge management.

Main components of Information Management

Simplified diagram of the place of Information Management vis-à-vis related topics

Program data management & Francophone CSOs: an overview of the main stakes and of the existing relationships by categories of CSOs

Although studies on the link between program data management and project quality remain relatively sparse, the available evidence shows that good program data management makes for greater efficiency and transparency in organisations. The evidence gathered suggests, however, that program data management is widely used today in the service of upward accountability – towards decision-makers and financial backers – rather than for day-to-day project steering.

The reasons for this state of affairs are manifold, but chief amongst them appears to be a significant lack of maturity among francophone CSOs in matters relating to data and digital issues. Six main weaknesses and levers for action have thus been identified (see illustration):

  1. insufficient data literacy within CSOs;
  2. unduly fragile, siloed and insufficiently funded program data management strategies;
  3. a lack of leadership and often overly vague responsibilities;
  4. a technological environment that is neither controlled nor influenced by CSOs;
  5. the use of approaches that foster information overload and neglect qualitative data; and
  6. an under-estimation of the responsibilities carried by CSOs and of the ethical issues at stake with regard to the data they handle.

Confronted with these challenges, it appears that francophone CSOs are somewhat lagging behind – at least in terms of awareness and strategic positioning – compared to their anglophone counterparts. Moreover, program data management continues to be approached by the various CSOs in an inconsistent manner: the study therefore proposes a classification of CSOs and reflects on the main existing differences – between types, sectors and sizes – and in particular points out the difficulties encountered by the smallest organisations.

What types of IM support are expected by Francophone CSOs and on what priority themes?

This study was also an opportunity to identify both the types of materials and the priority program data management themes on which francophone CSOs expect support (see below), in particular to enable specialised organisations, including H2H/Support CSOs such as CartONG, to better define their support priorities.

The study also reveals that CSOs are mainly looking for support on the following topics (in this order):

  1. selection of solutions
  2. responsible data management
  3. data quality control
  4. data analysis
  5. data sharing; and, for smaller organisations, also
  6. database design and
  7. simple map visualization.

What follow-up does CartONG intend to give to this study?

The study closes with a series of some fifteen recommendations to the various international aid and development actors, especially CSOs, who would benefit from being more proactive on the topic, as well as to donors and network organisations, which play a pivotal role in advancing these issues.

By clarifying the various elements feeding the debate along with the issues at stake, we hope that this document – a first for CartONG – will help feed current discussions. Many of these should be taken up again during the next GeOnG Forum, which will be held online on November 2-3, 2020.

Conducted as part of the project “Strengthening program data management within francophone CSOs” led by CartONG (and co-financed by the French Development Agency – AFD – over the 2020-2022 period), this study will be presented at face-to-face or remote events before the year is out. It will also be enriched in the coming months by the release of many other resources.

Do not hesitate to follow us on social media or to write to us to be added to the project mailing list to stay informed.

Link to study in French: 

Link to study in English: 

Geospatial, location and big data: emerging MERL Tech approaches

Our first webinar in the series Emerging Data Landscapes in M&E, on Geospatial, location and big data: Where have we been and where can we go? was held on 28 July. We had a lively discussion on the use of these innovative technologies in the world of evaluation.

First, Estelle Raimondo, Senior Evaluation Officer at the World Bank Independent Evaluation Group, framed the discussion with her introduction on Evaluation and emerging data: what are we learning from early applications? She noted how COVID-19 has been an accelerator of change, pushing the evaluation community to explore new, innovative technologies to overcome today’s challenges, and set the stage for the ethical, conceptual and methodological considerations we now face.

Next came the Case Study: Integrating geospatial methods into evaluations: opportunities and lessons from Anupam Anand, Evaluation Officer at the Global Environment Facility, Independent Evaluation Office, and Hur Hassnain, Senior Evaluation Advisor, European Commission DEVCO/ESS. After providing an overview of the advantages of using satellite and remote sensing data, particularly in fragile and conflict zones, the presenters gave examples of their use in Syria and Sierra Leone.

The second Case Study: Observing from space when you cannot observe from the field, was presented by Joachim Vandercasteelen, Young Professional at World Bank Independent Evaluation Group. This example focused on using geospatial data for evaluating a biodiversity conservation project in Madagascar, as traveling to the field was not feasible. The presentation gave an overview on how to use such technology for both quantitative and qualitative assessments, but also the downsides to consider.

Lastly, Alexandra Robinson, Co-Author of Big Data to Data Science: Moving from What to How in the MERL Tech Space, and Market Strategy and Data Ethics Lead at Threshold.World, discussed What are the organizational barriers to adopting new data types for M&E? This presentation focused on six main barriers to using big data, but also shared some key recommendations to improve its use.

The full recording of the webinar, including the PowerPoint presentations and the Questions & Answers session at the end, is available on the EES’ YouTube page.

Over the next month, we will release a blog post on each of the presentations, in which the speakers will answer the questions participants raised during the webinar that were not addressed during the Q&A, and provide links to further reading on the subject. These will be publicly available on the EES Blog.

The EES would like to thank our speakers for this engaging webinar, as well as our partners The Development Café, MERL Tech, and the World Bank IEG.

Stay tuned for our next webinar in the series. You can also follow the EES on Twitter, LinkedIn and Facebook, and sign up to receive our monthly newsletter EuropEval Digest for more exciting updates!

Emerging Technologies: How Can We Use Them for MERL?

Guest post from Kerry Bruce, Clear Outcomes

A new wave of technologies and approaches has the potential to influence how monitoring, evaluation, research and learning (MERL) practitioners do their work. The growth in use of smartphones and the internet, digitization of existing data sets, and collection of digital data make data increasingly available for MERL activities. This changes how MERL is conducted and, in some cases, who conducts it.

We recently completed research on emerging technologies for use in MERL as part of a wider research project on The State of the Field of MERL Tech.

We hypothesized that emerging technology is revolutionizing the types of data that can be collected and accessed and the ways that it can be processed and used for better MERL. However, improved research on and documentation of how these technologies are being used is required so the sector can better understand where, when, why, how, and for which populations and which types of MERL these emerging technologies would be appropriate.

The team reviewed the state of the field and found there were three key new areas of data that MERL practitioners should consider:

  • New kinds of data sources, such as application data, sensor data, data from drones and biometrics. These types of data are providing more access to information and larger volumes of data than ever before.
  • New types of systems for data storage. The most prominent of these were distributed ledger technologies (also known as blockchain), along with an increasing use of cloud and edge computing. We discuss the implications of these technologies for MERL.
  • New ways of processing data, mainly from the field of machine learning, specifically supervised and unsupervised learning techniques that could help MERL practitioners manage large volumes of both quantitative and qualitative data (a minimal sketch follows this list).
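As a rough illustration of the kind of unsupervised technique mentioned in the last bullet, here is a minimal sketch; it is not code from the paper, assumes scikit-learn is installed, and uses invented survey responses.

```python
# Hypothetical sketch: clustering open-ended survey responses with
# unsupervised learning to surface rough themes for manual review.
# Not from the paper; the responses below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "The training helped me find work in town",
    "I learned new farming techniques at the training",
    "Clinic staff were friendly and the wait was short",
    "The health centre was too far to reach on foot",
]

# Turn free text into numeric vectors, then group similar responses.
vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in sorted(zip(labels, responses)):
    print(label, text)
```

In practice a MERL team would still review and label the clusters by hand; the value lies in triaging large volumes of qualitative data, not in replacing analysis.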

These new technologies hold great promise for making MERL practices more precise, automated and timely. However, some challenges include:

  • A need to clearly define problems so the choice of data, tool, or technique is appropriate
  • Non-representative selection bias when sampling
  • Reduced MERL practitioner or evaluator control
  • Change management needed to adapt how organizations manage data
  • Rapid platform changes and difficulty with assessing the costs
  • A need for systems thinking which may involve stitching different technologies together

To address emerging challenges and make best use of the new data, tools, and approaches, we found a need for capacity strengthening for MERL practitioners, greater collaboration among social scientists and technologists, a need for increased documentation, and a need for the incorporation of more systems thinking among MERL practitioners.

Finally, there remains a need for greater attention to justice, ethics and privacy in emerging technology.

Download the paper here!

Read the other papers in the series here!

The Hype Cycle of MERL Tech Knowledge Synthesis

Guest Post by Zach Tilton, Doctoral Research Associate, Interdisciplinary Ph.D. in Evaluation (IDPE), Western Michigan University

Would I be revealing too much if I said we initially envisioned and even titled our knowledge synthesis as a ‘rapid’ scoping review? Hah! After over a year and a half of collaborative research with an amazing team we likely have just as many findings about how (and how not) to conduct a scoping review as we do about the content of our review on traditional MERL Tech. I console myself that the average Cochrane systematic review takes 30 months to complete (while recognizing that is a more disciplined knowledge synthesis).

Looking back, I could describe our hubris and emotions during the synthesis process as tracking the trajectory of the Gartner Hype Cycle, a concept we draw from in our broader MERL Tech State of the Field research to conceptualize the maturity and adoption of technology. Our triggering curiosities about the state of the field were followed by multiple peaks of inflated expectations and troughs of disillusionment until we settled onto the plateau of productivity (and publication). We uncovered much about the nature of what we termed traditional MERL Tech, or tech-enabled systematic inquiry that allows us to do what we have always done in the MERL space, only better or differently.

One of our findings was actually related to the possible relationship technologies have with the Gartner Hype Cycle. Based on a typology we developed as we started screening studies for our review, we found that the ratio of studies related to a specific MERL Tech versus studies focused on that same MERL Tech provided an indirect measure of the trust researchers and practitioners had in that technology to deliver results, similar to the expectation variable on the Y axis of the Hype Cycle plane.

Briefly, in focused studies MERL Tech is under the magnifying glass; in related studies MERL Tech is the magnifying glass. When we observed specific technologies being regularly used to study other phenomena significantly more than they were themselves being studied, we inferred these technologies were trusted more than others to deliver results. Conversely, when we observed a higher proportion of technologies being investigated as opposed to facilitating investigations, we inferred these were less trusted to deliver results. In other words, coupled with higher reported frequency, the technologies with higher levels of trust could be viewed as farther along on the hype cycle than those with lower levels of trust. Online surveys, geographic information systems, and quantitative data analysis software were among the most trusted technologies, with dashboards, mobile tablets, and real-time technologies among the least trusted.
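As a back-of-the-envelope illustration of that related-versus-focused ratio, here is a minimal sketch; the technology names and counts below are invented, not figures from the review.

```python
# Hypothetical sketch of the related:focused ratio described above.
# "related" = studies that use the technology to study something else;
# "focused" = studies that examine the technology itself.
# Counts are illustrative only.
study_counts = {
    "online_surveys": {"related": 120, "focused": 15},
    "dashboards": {"related": 12, "focused": 30},
}

for tech, counts in study_counts.items():
    ratio = counts["related"] / counts["focused"]
    # A higher ratio is read as an indirect sign of trust: the tool is
    # routinely used as an instrument rather than being investigated.
    print(f"{tech}: related/focused = {ratio:.1f}")
```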

To read a further explanation of this and other findings, conclusions, and recommendations from our MERL Tech State of the Field Scoping Review, download the white paper.

Read the other papers in the State of the Field of MERL Tech series.

Big Data to Data Science: Moving from ‘What’ to ‘How’ in MERL

Guest post by Grace Higdon

Big data is a big topic in other sectors but its application within monitoring and evaluation (M&E) is limited, with most reports focusing more on its potential rather than actual use. Our paper,  “Big Data to Data Science: Moving from ‘What’ to ‘How’ in the MERL Tech Space”  probes trends in the use of big data between 2014 and 2019 by a community of early adopters working in monitoring, evaluation, research, and learning (MERL) in the development and humanitarian sectors. We focus on how MERL practitioners actually use big data and what encourages or deters adoption.

First, we collated administrative and publicly available MERL Tech conference data from the 281 sessions accepted for presentation between 2015 and 2019. Of these, we identified 54 sessions that mentioned big data and compared trends between sessions that did and did not mention this topic. In any given year from 2015 to 2019, 16 percent to 26 percent of sessions at MERL Tech conferences were related to the topic of big data. (Conferences were held in Washington DC, London, and Johannesburg).
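For readers who want to reproduce the share calculation, a minimal sketch follows; the overall totals (54 of 281 sessions) come from the paragraph above, while the per-year breakdown is purely hypothetical.

```python
# Sketch of the session-share calculation described above.
# 54 and 281 are the totals quoted in the text; the per-year counts
# (big-data sessions, all sessions) are invented for illustration.
sessions_big_data, sessions_total = 54, 281
print(f"Overall share: {sessions_big_data / sessions_total:.1%}")  # ~19%

per_year = {
    2015: (8, 40),
    2016: (10, 55),
    2017: (12, 60),
    2018: (14, 66),
    2019: (10, 60),
}
for year, (big_data, total) in per_year.items():
    print(year, f"{big_data / total:.0%}")
```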

Our quantitative analysis was complemented by 11 qualitative key informant interviews. We selected interviewees representing diverse viewpoints (implementers, donors, MERL specialists) and a range of subject matter expertise and backgrounds. During interviews, we explored why an interviewee chose to use big data, the benefits and challenges of using big data, reflections on the use of big data in the wider MERL tech community, and opportunities for the future.

Findings

Our findings indicate that MERL practitioners are in a fragmented, experimental phase, with use and application of big data varying widely, accompanied by shifting terminologies. One interviewee noted that “big data is sort of an outmoded buzzword” with practitioners now using terms such as ‘artificial intelligence’ and ‘machine learning.’ Our analysis attempted to expand the umbrella of terminologies under which big data and related technologies might fall. Key informant interviews and conference session analysis identified four main types of technologies used to collect big data: satellites, remote sensors, mobile technology, and M&E platforms, as well as a number of other tools and methods. Additionally, our analysis surfaced six main types of tools used to analyze big data: artificial intelligence and machine learning, geospatial analysis, data mining, data visualization, data analysis software packages, and social network analysis.

Barriers to adoption

We also took an in-depth look at barriers to and enablers of use of big data within MERL, as well as benefits and drawbacks. Our analysis found that perceived benefits of big data included enhanced analytical possibilities, increased efficiency, scale, data quality, accuracy, and cost-effectiveness. Big data is contributing to improved targeting and better value for money. It is also enabling remote monitoring in areas that are difficult to access for reasons such as distance, poor infrastructure, or conflict.

Concerns about bias, privacy, and the potential for big data to magnify existing inequalities arose frequently. MERL practitioners cited a number of drawbacks and limitations that make them cautious about using big data. These include lack of trust in the data (including mistrust from members of local communities); misalignment of objectives, capacity, and resources when partnering with big data firms and the corporate sector; and ethical concerns related to privacy, bias, and magnification of inequalities. Barriers to adoption include insufficient resources, absence of relevant use cases, lack of skills for big data, difficulty in determining return on investment, and challenges in pinpointing the tangible value of using big data in MERL.

Our paper includes a series of short case studies of big data applications in MERL. Our research surfaced a need for more systematic and broader sharing of big data use cases and case studies in the development sector.

The field of Big Data is rapidly evolving, thus we expect that shifts have happened already in the field since the beginning of our research in 2018. We recommend several steps for advancing with Big Data / Data Science in the MERL Space, including:

  1. Consider. MERL Tech practitioners should examine relevant learning questions before deciding whether big data is the best tool for the MERL job at hand or whether another source or method could answer them just as well.
  2. Piloting. Various big data approaches should be pilot tested in order to assess their utility and the value they add. Pilot testing should be collaborative; for example, an organization with strong roots at the field level might work with an agency that has technical expertise in relevant areas.
  3. Documenting. The current body of documentation is insufficient to highlight relevant use cases and identify frameworks for determining return on investment in big data for MERL work. The community should do more to document efforts, experiences, successes, and failures in academic and gray literature.
  4. Sharing. There is a hum of activity around big data in the vibrant MERL Tech community. We encourage the MERL Tech community to engage in fora such as communities of practice, salons, events, and other convenings, and to seek less typical avenues for sharing information and learning and to avoid knowledge silos.
  5. Learning. The MERL Tech space is not static; indeed, the terminology and applications of big data have shifted rapidly in the past 5 years and will continue to change over time. The MERL Tech community should participate in new training related to big data, continuing to apply critical thinking to new applications.
  6. Guiding. Big data practitioners are crossing exciting frontiers as they apply new methods to research and learning questions. These new opportunities bring significant responsibility. MERL Tech programs serve people who are often vulnerable — but whose rights and dignity deserve respect. As we move forward with using big data, we must carefully consider, implement, and share guidance for responsible use of these new applications, always honoring the people at the heart of our interventions.

Download the full paper here.

Read the other papers in the State of the Field of MERL Tech series.

Open Call for MERL Center Working Group Members!

By Mala Kumar, GitHub Social Impact, Open Source for Good

I lead a program on the GitHub Social Impact team called Open Source for Good — detailed in a previous MERL Tech post and (back when mass gatherings in large rooms were routine) at a lightning talk at the MERL Tech DC conference last year.

Before joining GitHub, I spent a decade wandering around the world designing, managing, implementing, and deploying tech for international development (ICT4D) software products. In my career, I found that open source tends to be a polarizing topic in ICT4D, with discussions often devoid of specific arguments. To advance conversations on the challenges, barriers, and opportunities of open source for social good, my program at GitHub led a year-long research project and produced a culminating report, which you can download here.

One of the hypotheses I posed at the MERL Tech conference last year, and that our research subsequently confirmed, is that IT departments and ICT4D practitioners in the social sector* have relatively less budgetary decision-making power than their counterparts at corporate IT companies. This makes it hard for IT and ICT4D staff to justify the use of open source in their work.

In the past year, Open Source for Good has solidified its strategy around helping the social sector more effectively engage with open source. To that aim, we started the MERL Center, which brings together open source experts and MERL practitioners to create resources to help medium and large social sector organizations understand if, how, and when to use open source in their MERL solutions.**

With the world heading into unprecedented economic and social change and uncertainty, we’re more committed than ever at GitHub Social Impact to helping the social sector effectively use open source and to build on a digital ecosystem that already exists.

Thanks to our wonderful working group members, the MERL Center has identified its target audiences, fleshed out the goals of the Center, set up a basic content production process, and is working on a few initial contributions to its two working groups: Case Studies and Beginner’s Guides. I’ll share more details in the coming months, but I am also excited to announce that we’re committing funds to get a MERL Center public-facing website live to properly showcase the materials the MERL Center produces and how open source can support technology-enabled MERL activities and approaches.

As we ramp up, we’re now inviting more people to join the MERL Center working groups! If you are a MERL practitioner with an interest in or knowledge of open source, or you’re an open source expert with an interest in and knowledge of MERL, we’d love to have you! Please feel free to reach out to me with a brief introduction to you and your work, and I’ll help you get on-boarded. We’re excited to have you work with us!

*We define the “social sector” as any organization or company that primarily focuses on social good causes.

**Here’s our working definition of MERL.


8 Ways to Adapt Your M&E During the COVID-19 Pandemic

Guest post from Janna Rous. Originally published here.

So, all of a sudden you’re stuck at home because of the new coronavirus. You’re looking at your M&E commitments and your program commitments. Do you put them all on hold and postpone them until the coronavirus threat has passed and everything goes back to normal? Or is there a way to still get things done? This article reviews 8 ways you can adapt your M&E during the pandemic.

Here are a few ideas that you and your team might consider doing to make sure you can stay on track (and maybe even IMPROVE your MEAL practices) even if you might currently be in the middle of a lockdown, or if you think you might be going into a lockdown soon:

1. Phone Call Interviews instead of In-Person Interviews

Do you have any household assessments or baseline surveys or post-distribution monitoring that you had planned in the next 1 to 3 months? Is there a way that you can carry out these interviews by phone or WhatsApp calls?  This is the easiest and most direct way to carry on with your current M&E plan.  Instead of doing these interviews face-to-face, just get them on a call.  I’ve created a checklist to help you prepare for doing phone call interviews – click here to get the “Humanitarian’s Phone Call Interview Checklist”.  Here are a few things you need to think through to transition to a phone-call methodology:

  • You need phone numbers and names of people that need to be surveyed. Do you have these?  Or is there a community leader who might be able to help you get these?
  • You also need to expect that a LOT of people may not answer their phone. So instead of “sampling” people for a survey, you might want to just plan on calling almost everyone on that list.
  • Just like for a face-to-face interview, you need to know what you’re going to say. So you need to have a script ready for how you introduce yourself and ask for consent to do a phone questionnaire.  It’s best to have a structured interview questionnaire that you follow for every phone call, just like you would in a face-to-face assessment.
  • You also need to have a way to enter data as you ask the questions. This usually depends on what you’re most comfortable with – but I recommend preparing an ODK or KoboToolbox questionnaire, just like you would for an in-person survey, and filling it out as you do the interview over the phone (a minimal form-definition sketch follows this list). I find it easiest to enter the data into the KoboToolbox “Webform” instead of the mobile app, because I can type information faster on my laptop than thumb-type it into a mobile device. But use what you have!
  • If you’re not comfortable in KoboToolbox, you could also prepare an Excel sheet for directly entering answers – but this will probably require a lot more data cleaning later on.
  • When you’re interviewing, it’s usually faster to type down the answers in the language you’re interviewing in. If you need your final data collection to be in English, go back and do the translation after you’ve hung up the phone.
  • If you want a record of the interview, ask if you can record the phone call. When the person says yes, then just record it so you can go back and double check an answer if you need to.
  • Very practically – if you’re doing lots of phone calls in a day, it is easier on your arm and your neck if you use a headset instead of holding your phone to your ear all day!
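As referenced in the list above, here is a minimal sketch of what such a questionnaire definition might look like, written as an XLSForm (the spreadsheet format that KoboToolbox and ODK accept). The questions are invented for illustration, and pandas plus an Excel writer such as openpyxl are assumed to be available.

```python
# Minimal, hypothetical XLSForm for a phone-call survey, written out with
# pandas so the resulting .xlsx can be uploaded to KoboToolbox or ODK.
# Question names and labels are illustrative only.
import pandas as pd

survey = pd.DataFrame([
    {"type": "start",             "name": "start",     "label": ""},
    {"type": "select_one yes_no", "name": "consent",   "label": "Do you agree to answer a short phone survey?"},
    {"type": "text",              "name": "resp_name", "label": "Respondent name"},
    {"type": "integer",           "name": "hh_size",   "label": "How many people live in your household?"},
    {"type": "text",              "name": "notes",     "label": "Interviewer notes"},
])

choices = pd.DataFrame([
    {"list_name": "yes_no", "name": "yes", "label": "Yes"},
    {"list_name": "yes_no", "name": "no",  "label": "No"},
])

# An XLSForm expects (at least) a "survey" sheet and a "choices" sheet.
with pd.ExcelWriter("phone_survey_xlsform.xlsx") as writer:
    survey.to_excel(writer, sheet_name="survey", index=False)
    choices.to_excel(writer, sheet_name="choices", index=False)
```

Filling the form in the KoboToolbox webform while you are on the call, as suggested above, then gives you clean, structured data without a separate data-entry step.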

2. Collect Videos & Photos Directly from Households and Communities

When you’re doing any in-person MEAL activities, you’re always able to observe evidence. You can look around and SEE impact, you don’t just hear it through an interview or group discussion.  But when you’re doing M&E remotely, you can’t double-check to see what impact really looks like.  So I recommend:

  • Connect with as many beneficiaries and team members as possible through WhatsApp or another communication app and collect photos and videos of evidence directly from them.
  • Video – Maybe someone has a story of impact they can share with you through video. Or if you’re overseeing a Primary Health Care clinic, perhaps you can have a staff member walk you through the clinic with a video so you can do a remote assessment.
  • Pictures – Maybe you can ask everyone to send you a picture of (for example) their “hand washing station with soap and water” (if you’re monitoring a WASH program). Or perhaps you want evidence that the local water point is functioning.

3. Programme Final Evaluation

It’s a good practice to do a final evaluation review when you reach the end of a program.  If you have a program finishing in the next 1-3 months, and you want to do a final review to assess lessons learned overall, then you can also do this remotely!

  • Make a list of all the stakeholders that would be great to talk to: staff members, a few beneficiaries, government authorities (local and/or national), other NGOs, coordination groups, partner organizations, local community leaders.
  • Then go in search of either their phone numbers, their email addresses, their Skype accounts, or their WhatsApp numbers and get in touch.
  • It’s best if you can get on a video chat with as many of them as possible – because it’s much more personal and easy to communicate if you can see one another’s faces! But if you can just talk with audio – that’s okay too.
  • Prepare a semi-structured interview: a list of questions you want to talk through about the impact, what went well, and what could have gone better. And if anything interesting comes up, don’t hesitate to ask new questions on the spot or skip questions that don’t make sense in the context.
  • You can also gather together any monitoring reports/analysis that was done on the project throughout its implementation period, plus pictures of the interventions.
  • Use all this information to create a final “lessons learned” evaluation document. This is a fantastic way to continually improve the way you do humanitarian programming.

4. Adapt Your Focus Group Discussion Plan

If everyone is at home because your country has imposed a lockdown, it will be very difficult to run a focus group discussion because… you can’t be in groups! So, decide with your team whether it might be better to switch your monitoring activity from collecting qualitative data in group discussions to one-on-one phone interviews with several people to collect the same information.

  • There are some dynamics that you will miss in one-to-one interviews, information that may only come out during group discussions. (Especially where you’re collecting sensitive or “taboo” data.) Identify what that type of information might be – and either skip those types of questions for now, or brainstorm how else you could collect the information through phone-calls.

5. Adapt Your Key Informant Interviews

If you normally carry out Key Informant Interviews, it would be a great idea to think what “extra” questions you need to ask this month in the midst of the coronavirus pandemic.

  • If you normally ask questions around your program sector areas, think about just collecting a few extra data points about feelings, needs, fears, and challenges that are a reality in light of COVID-19. Are people facing any additional pressures due to the pandemic? Or are there any new humanitarian needs right now? Are there any upcoming needs that people are anticipating?
  • It goes without saying that if your Key Informant Interviews are normally in person, you’ll want to carry these out by phone for the foreseeable future!

6. What To Do About Third Party Monitoring

Some programs and donors use Third Party Monitors to assess their program results independently.  If you normally hire third party monitors, and you’ve got some third party monitoring planned for the next 1-3 months, you need to get on the phone with this team and make a new plan. Here are a few things you might want to think through with your third party monitors:

  • Can the third party carry out their monitoring by phone, in the same ways I’ve outlined above?
  • But also think through – is it worth it to get a third party monitor to assess results remotely? Is it better to postpone their monitoring?  Or is it worth it to carry on regardless?
  • What is the budget implication? If cars won’t be used, is there any cost-savings?  Is there any additional budget they’ll need for air-time costs for their phones?
  • Make sure there is a plan to gather as much photo and video evidence as possible (see point 2 above!)
  • If they’re carrying out phone call interviews it would also be a good recommendation to record phone calls if possible and with consent, so you have the records if needed.

7. Manage Expectations – The Coronavirus Pandemic May Impact Your Program Results.

You probably didn’t predict that a global pandemic would occur in the middle of your project cycle and throw your entire plan off.  Go easy on yourself and your team!  It is most likely that the results you’d planned for might not end up being achieved this year.  Your donors know this (because they’re probably also on lockdown).  You can’t control the pandemic, but you can control your response.  So proactively manage your own expectations, your manager’s expectations and your donor’s expectations.

  • Get on a Skype or Zoom call with the project managers and review each indicator of your M&E plan. In light of the pandemic, what indicator targets will most likely change?
  • Look through the baseline numbers in your M&E plan – is it possible that the results at the END of your project might be worse than even your baseline numbers? For example, if you have a livelihoods project, it is possible that income and livelihoods will be drastically reduced by a country-wide lockdown.  Or are you running an education program?  If schools have been closed, then will a comparison to the baseline be possible?
  • Once you’ve done a review of your M&E plan, create a very simple revised plan that can be talked through with your program donor.

8. Talk To Your Donors About What You Can Do Remotely

When you’re on the phone with your donors, don’t only talk about revised program indicators.

  • Also talk about a revised timeframe – is there any flexibility on the program timeframe, or deadlines for interim reporting on indicators? What are their expectations?
  • Also talk about what you CAN do remotely. Discuss with them the plan you have for carrying on everything possible that can be done remotely.
  • And don’t forget to discuss financial implications of changes to timeframe.


Three Problems — and a Solution — for Data Viz in MERL and ICT4D

Guest post by Amanda Makulec, MPH, Data Visualization Society Operations Director

Just about everyone I know in the ICT4D and MERL communities has interacted with, presented, or created a chart, dashboard, infographic, or other data visualization. We’ve also all seen charts that mislead, confuse, or otherwise fall short of making information more accessible. 

The goal of the Data Visualization Society is to collect and establish best practices in data viz, fostering a community that supports members as they grow and develop data visualization skills. With more than 11.5K members from 123 countries on our first birthday, the society has grown faster than any of the founders imagined.

There are three reasons you should join the Data Visualization Society to improve your data visualizations in international development.

Self-service data visualization tools are everywhere, but that doesn’t mean we’re always building usable charts and graphs.

We’ve seen the proliferation of dashboards and enthusiasm for data viz as a tool to promote data driven decisionmaking.

Just about anyone can make a chart if they have a table of data, thanks to the wide range of tools out there (Flourish, RAWgraphs, Datawrapper, Tableau, PowerBI…to name a few). Without a knowledge of data viz fundamentals though, it’s easy to use these tools to create confusing and misleading graphs.

A recent study on user-designed dashboards in DHIS2 (a commonly used data management and analysis platform in global health) found that “while the technical flexibility of [DHIS2] has been taken advantage of by providing platform customization training…the quality of the dashboards created face numerous challenges.” (Aprisa & Sebo, 2020).  

The researchers used a framework from Stephen Few to evaluate the frequency of five different kinds of ‘dashboard problems’ on 80 user-designed sample dashboards. The five problem ‘types’ included: context, dashboard layout, visualization technique, logical, and data quality. 

Of the 80 dashboards evaluated, 69 (83.1%) had at least one visualization technique problem (Aprisa & Sebo, 2020). Many of the examples shared in the paper could be easily addressed, like transforming the pie chart made of slices representing points in time into a line graph.
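As a concrete example of that fix, here is a minimal sketch that plots monthly values as a line chart instead of pie slices; the data is invented and matplotlib is assumed to be available.

```python
# Hypothetical sketch of the fix described above: values over time belong on
# a line chart, not in pie slices. The figures are invented for illustration.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
clinic_visits = [120, 135, 150, 140, 160, 175]

plt.plot(months, clinic_visits, marker="o")
plt.title("Clinic visits per month")
plt.ylabel("Visits")
plt.tight_layout()
plt.savefig("clinic_visits_line.png")
```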

With so many tools at our fingertips, how can we use them to develop meaningful, impactful charts and interactive dashboards? Learning the fundamentals of data visualization is an excellent place to start, and DVS offers a free-to-join professional home to learn those fundamentals.

Many of the communities that exist around data visualization are focused on specific tools, which may not be relevant or accessible for your organization.

In ICT4D, we often have to be scrappy and flexible. That means learning how to work with open source tools, hack charts in Excel, and often make decisions about what tool to use driven as much by resource availability as functionality. 

There are many great tool-specific communities out there: TUGs, PUGs, RLadies, Stack Overflow, and more. DVS emerged out of a need to connect people looking to share best practices across the many disciplines doing data viz: journalists, evaluators, developers, graphic designers, and more. That means not being limited to one tool or platform, so we can look for what fits a given project or audience.

After joining DVS, you’ll receive an invite to the Society’s Slack, a community “workspace” with channels on different topics and for connecting different groups of people within the community. You can ask questions about any data viz tool on the #topic-tools channel, and explore emerging and established platforms with honest feedback on how other members have used them in their work.

Data visualization training often means one-off workshops. Attendees leave enthusiastic, but then don’t have colleagues to rely on when they run into new questions or get stuck.

Data visualization isn’t consistently taught as a foundation skill for public health or development professionals.

In university, there may be a few modules within a statistics or evaluation class, but seldom are there dedicated, semester-long classes on visualization; those are reserved for computer science and analytics programs (though this seems to be slowly changing!). Continuing education in data viz usually means short workshops, not long-term mentoring relationships.

So what happens when people are asked to “figure it out” on the job? Or attend a two day workshop and come away as a resident data viz expert?  

Within DVS, our leadership and our members step up to answer questions and be that coach for people at all stages of learning data visualization. We even have a dedicated feedback space within Slack to share examples of data viz work in progress and get feedback.

DVS also enables informal connections on a wide range of topics. Go to #share-critique to post work-in-progress visualizations and seek feedback from the community. We also host quarterly challenges where you can get hands-on practice with provided data sets to develop your data viz skills, and we plan to launch a formal mentorship program in 2020.

Join DVS today to get its benefits – members from Africa, Asia, and other underrepresented areas are especially encouraged to join us now!

Have any questions? Or ideas on ways DVS can support our global membership base? Find me on Twitter – my DMs are open.

A Toolkit to Measure the Performance and Labour Conditions in Small and Medium Enterprises

Guest post from ILO The Lab

Performance measurement is critical not only to see whether enterprise development projects are making a difference, but so that small and medium enterprises (SME) themselves can continuously improve. As the saying goes: “If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.” In other words, the measurement of performance is the first step towards the management of performance.

Enterprise development projects need to measure changes in SME performance, not only to report results to project funders, but also to help SMEs continuously improve. But measuring the performance of SMEs in the context of a developing economy brings special considerations, including:

  • Pressing capacity challenges in record keeping, data collection, and access to modern management techniques – along with the technology that drives them. Most SMEs have some kind of performance measurement system; however, these tend to be very basic.
  • Intensely competitive environments where there is little market differentiation – meaning most SMEs have to tussle just to survive, reducing the incentive to collect and use data. Some countries have 5-year survival rates as low as 10%.
  • Flatter management structures and less bureaucracy, meaning that – in theory at least – SMEs can be more agile and adaptive in using performance information to improve.
  • SME’s dependence on productivity gains to maximise long-term competitiveness and profitability. In the absence of intellectual property or technology as a source of comparative advantage, labour productivity is often critical to sustaining SME performance.

Moreover, enterprise development projects are facing increasing pressure to demonstrate that their work is leading to qualitative improvements in people’s terms and conditions of employment. As researchers have noted, “not only the number, but also the quality of jobs matters to poverty alleviation and economic development”.

For many SMEs in the global south, workers are a critical determinant of business success. Since SMEs often undertake labour-intensive activities, they rely on a supply of labour – with varying skills requirements – to produce their goods and services. Labour and employment issues are frequently included in non-financial performance measurement systems, but these often focus only on the most easily quantifiable elements, such as the number of accidents. However, labour conditions refer to the working environment and all circumstances affecting the workplace, including working hours, physical aspects, and the rights and responsibilities of SMEs towards their workers. Many aspects of this work environment are covered by national labour laws, which in turn are shaped by the eight fundamental ILO conventions.

By improving labour conditions, SMEs can improve their business outcomes. Better health and safety practices can boost productivity and employee retention. Companies have shown growth in sales per employee hour worked following targeted training programmes. As recent research has demonstrated, jobs with decent wages, predictable hours, sufficient training, and opportunities for advancement can be a source of competitive advantage. For many businesses, thinking about employee working conditions has shifted from a way to minimize risk to a competitive advantage.

Conversely, bad conditions can be bad for business: poor health and safety practices can result in fines and slow task completion. Industrial action and absenteeism can lead to prolonged disruption to operations. As one SME owner put it, “You have to have an environment where people are happy working, where they cooperate well, interact well. If you have problems in the way people work, it could terribly affect the performance”.

Against this complex backdrop of frameworks and challenges, the International Labour Organization has launched the ILO SME Measurement Toolkit.

This Toolkit is a practical resource for practitioners and projects, helping them support SMEs in deciding what aspects of SME performance (productivity, working conditions, etc.) to measure, as well as how to measure them.

  • More than 250 indicators, including a set of actionable metrics drawn from existing sustainability standards, company codes of conduct, and international development monitoring and evaluation frameworks
  • Methods outlining different tools and data collection techniques
  • Real-life examples of SME measurement in a developing country context

We’d love to hear your comments, questions and suggestions about the Toolkit. Drop us an email at thelab@ilo.org!

More resources from ILO The Lab on results measurement:

Open Call for ideas: 2020 GeOnG forum

Guest post by Nina Eissen from CartONG, organizers of the GeOnG Forum.

The 7th edition of the GeOnG Forum on Humanitarian and Development Data will take place from November 2nd to 4th, 2020 in Chambéry (France). CartONG is launching an Open Call for Suggestions.

Organized by CartONG every two years since 2008, the GeOnG forum gathers humanitarian and development actors and professionals specialized in information management. The GeOnG is dedicated to addressing issues related to data in the humanitarian and development sectors, including mapping, GIS, data collection & information management. To this end, the forum is designed to allow participants to debate current and future stakes, introduce relevant and innovative solutions, and share experience and best practices. The GeOnG is one of the biggest independent fora on the topic in Europe, with an average of 180 participants from 90 organizations over the last three editions.

The main theme of the 2020 edition will be: “People at the heart of Information Management: promoting responsible and inclusive practices”. More information about the choice of this main theme is available here.

We also invite you to discover the 2020 GeOnG teaser here: 

To submit your ideas, please use this online form. The Open Call for Suggestions will remain open until the end of May 2020.

A few topics we hope to see covered during the 2020 GeOnG Forum:

  • How to better integrate vulnerable populations into the data life cycle, with a focus on ensuring that the data collected is particularly representative of populations at risk of discrimination.
  • How to implement the Do No Harm approach in relation to data: simple security & protection measures, streamlining of data privacy rights in programming, algorithmization of data processing, etc.
  • What is the role of the stakeholders often considered ‘less direct’ in humanitarian and development data (such as civil society actors, governments, etc.), so as to identify clearer pathways for sharing the data that should be shared for the common good and for protecting the data that clearly should not be shared?
  • How to promote data literacy beyond NGO information management and M&E staff to facilitate data-driven decision making.
  • How to ensure that tools and solutions used and promoted by humanitarian and development organizations are also sufficiently user-friendly and inclusive (for instance by limiting in-built biases and promoting human-centric design).
  • Beyond the main theme of the conference, don’t hesitate to send us any idea that you think might be relevant for the next GeOnG edition (about tools, methodologies, lessons learned, feedback from the field, etc.)!

Registration for the conference will open in the Spring of 2020.