Tag Archives: data visualization

Use of Administrative Data for the COVID-19 Response

Administrative data is data collected as part of the regular activities of program implementation. It has not been tapped sufficiently for learning and research. As the COVID-19 pandemic advances, how might administrative data be used to help with the COVID-19 response and with other national or global health emergencies?

At the final event in the MERL Tech and CLEAR-Anglophone Africa series for gLOCAL Evaluation Week, we were joined by Kwabena Boakye, Ministry of Monitoring and Evaluation, Ghana; Bosco Okumu, National Treasury and Planning, Kenya; Stephen Taylor, Department of Basic Education, South Africa; and Andrea Fletcher, Cooper-Smith.

The four panelists described the kinds of administrative or “routine” data they are using in their work. For example, in Kenya, educational records, client information from financial institutions, and hospital records of patients and health outcomes are being used to plan and implement actions related to COVID-19 and to evaluate the impact of different COVID-related policies that governments have put in place or are considering. In Malawi, administrative data is combined with other sources, such as Google mobility data, to understand how migration might be affecting the virus’s spread. COVID-19 is putting a spotlight on weaknesses and gaps in existing administrative data systems.

Watch the video here:

Listen to just the audio from the event here:

Summary:

Benefits of administrative data include that:

  • Data is generated through normal operations and does not require an additional survey to create it
  • It can be more representative than a survey because it covers a large swath of the population
  • It is an existing data source during COVID when it’s difficult to collect new data
  • It can be used to create dashboards for decision-makers at various levels

Challenges include:

  • Data sits in silos and the systems are not designed to be interoperable
  • Administrative data may leave out those who are not participating in a government program
  • Data sets are time-bound to the life of the program
  • Some administrative data systems are outdated and have poor quality data that is not useful for decision-making or analysis
  • There is demand for beautiful dashboards and maps, but insufficient attention to the underlying data processes needed to produce information that can actually be used
  • Real-time data is not possible when there is no Internet connectivity
  • There is insufficient attention to data privacy and protection, especially for sensitive data
  • Institutions may resist providing data if weaknesses are highlighted through the data or they think it will make them look bad

Recommendations for better use of administrative data in the public sector:

  • Understand the data needs of decision-makers and build capacity to understand and use data systems
  • Map the data that exists, assess its quality, and identify gaps
  • Design and enact policies, institutional arrangements, tools, and processes to make sure that data is organized and interoperable
  • Automate processes with digital tools to make them more seamless
  • Focus on enhancing underlying data collection processes to improve the quality of administrative data; this includes making it useful for those who provide the data so that it is not yet another administrative burden with no local value
  • Assign accountability for data quality across the entire system
  • Learn from the private sector, but remember that the public sector has different incentives and goals
  • Rather than fund more research on administrative data, donors should put funds into training on data quality, data visualization, and other skills related to data use and data literacy at different levels of government
  • Determine how to improve data quality and use of existing administrative data systems rather than building new ones
  • Make administrative data useful to those who are inputting it to improve data quality

Download the event reports:

See other gLOCAL Evaluation 2020 events from CLEAR-AA and MERL Tech:

Remote Monitoring in the Time of Coronavirus

On June 3, MERL Tech and CLEAR-Anglophone Africa hosted the second of three virtual events for gLOCAL Evaluation Week. At this event, we heard from Ignacio Del Busto, IDInsight, Janna Rous, Humanitarian Data, and Ayanda Mtanyana, New Leaders, on the topic of remote monitoring.

Data is not always available, and it can be costly to produce. One challenge is generating data cheaply and quickly to meet the needs of decision-makers within the operational constraints that enumerators face. Another is ensuring that the process is high quality and also human-centered, so that we are not simply extracting data. This can be a challenge when there is low connectivity and reach, poor network capacity and access, and low smartphone access. Enumerator training is also difficult when it must be done remotely, especially if enumerators are new to technology and more accustomed to doing paper-based surveys.

Watch the video below.

Listen to just the audio from the session here.

Some recommendations arising from the session included:

  • Learn and experiment as you try new things. For example, track when and why people drop off a survey and find ways to improve the design and approach. Drop-off might be related to the time of the call or the length of the survey.
  • It’s not only about phone surveys. There are other tools. For example, WhatsApp has been used successfully during COVID-19 for collecting health data.
  • Don’t just put your paper processes onto a digital device. Instead, consider how to take greater advantage of digital devices and tools to find better ways of monitoring. For example, could we incorporate sensors into the monitoring from the start? At the same time, be careful not to introduce technologies that are overly complex.
  • Think about exclusion and access. Who are we excluding when we move to remote monitoring? Children? Women? Elderly people? We might be introducing bias if we are going remote. We also cannot observe if vulnerable people are in a safe place to talk if we are doing remote monitoring. So, we might be exposing people to harm or they could be slipping through the cracks. Also, people self-select for phone surveys. Who is not answering the phone and thus left out of the survey?
  • Consider providing airtime but make sure this doesn’t create perverse incentives.
  • Ethics and doing no harm are key principles. If we are forced to deliver programs remotely, this involves experimentation. And we are experimenting with people’s lives during a health crisis. Consider including a complaints channel where people can report any issues.
  • Ensure data is providing value at the local level, and help teams see what the whole data process is and how their data feeds into it. That will help improve data quality and reduce the tendency to ‘tick the box’ for data collection or find workarounds.
  • Design systems for interoperability so that data can be integrated with other data for better insights or updated automatically. Data standards need to be established so that different systems capture data in the same way or in the same format.
  • Create a well-designed change management program to bring people on board and support them. Role modeling by leaders can help to promote new behaviors.
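The interoperability and data-standards point in the list above can be sketched in code. The sketch below is hypothetical: the two source systems, their field names, and the shared schema are all invented, but it shows the core idea of mapping different exports onto one agreed record format.

```python
from datetime import date

# Hypothetical: System A exports flat rows with an ISO date string.
def from_system_a(row):
    return {
        "respondent_id": row["id"],
        "district": row["loc"],
        "interview_date": date.fromisoformat(row["resp_date"]),
    }

# Hypothetical: System B nests the date and names its fields differently.
def from_system_b(row):
    when = row["when"]
    return {
        "respondent_id": row["respondent"],
        "district": row["district_name"],
        "interview_date": date(when["year"], when["month"], when["day"]),
    }

# Both exports now land in one shared schema and can be merged or compared.
records = [
    from_system_a({"id": "r1", "loc": "Nairobi", "resp_date": "2020-06-03"}),
    from_system_b({"respondent": "r2", "district_name": "Kisumu",
                   "when": {"year": 2020, "month": 6, "day": 3}}),
]
```

Agreeing on the target schema up front (names, types, date formats) is the “data standard”; the per-system adapters then stay thin and easy to test.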

Further questions to explore:

  • How can we design monitoring to be remote from the very start? What new gaps could we fill and what kinds of mixed methods could we use?
  • What two-way platforms are most useful and how can they be used effectively and ethically?
  • Can we create a simple overview of opportunities and threats of remote monitoring?
  • How can we collect qualitative data, e.g., focus groups and in-depth interviews?
  • How can we keep respondents safe? What are the repercussions of asking sensitive questions?
  • How can we create data continuity plans during the pandemic?


Download the event reports:

See other gLOCAL Evaluation 2020 events from CLEAR-AA and MERL Tech:

Using Data Responsibly During the COVID-19 Crisis

Over the past decade, monitoring, evaluation, research and learning (MERL) practices have become increasingly digitalized. The COVID-19 pandemic has caused digitalization to happen with even greater speed and urgency, due to travel restrictions, quarantines, and social distancing orders from governments desperate to slow the spread of the virus and lessen its impact.

MERL Tech and CLEAR-Anglophone Africa are working together to develop a framework and guidance on responsible data management for MERL in the Anglophone African context. As part of this effort, we held three virtual events in early June during CLEAR’s gLOCAL Evaluation Week.

At our June 2 event, Korstiaan Wapenaar, Genesis Analytics, Jerusha Govender, Data Innovator, and Teki Akkueteh, Africa Digital Rights Hub, shared tips on how to be more responsible with data.

Data is a necessary and critical part of COVID-19 prevention and response efforts to understand where the virus might appear next, who is most at risk, and where resources should be directed for prevention and response. However, we need to be sure that we are not putting people at risk of privacy violations or misuse of personal data, and we need to manage that data responsibly so that we don’t unnecessarily create fear or panic.

Watch the video below:

Listen to the audio from the session here:

Session summary:

  • MERL practitioners have clear responsibilities when sharing, presenting, consuming and interpreting data. Individuals and institutions may use data to gain prestige or to justify government decisions, and this can allow bias to creep in. Data quality is critical for informing decisions, and information gaps create the risk of misinformation and flawed understanding. We need to embrace uncertainty and the limitations of the science, provide context and definitions so that our sources are clear, and ensure transparency around the numbers and the assumptions that underpin our work.
  • MERL Practitioners should provide contextual information and guidance on how to interpret the data so that people can make sense of it in the right way. We should avoid cherry picking data to prove a point, and we should be aware that data visualization carries power to sway opinions and decisions. It can also influence behavior change in individuals, so we need to take responsibility for that. We also need to find ways to visualize data for lay people and non-technical sectors.
  • Critical data is needed, yet it might be used in negative or harmful ways, for example, COVID-related stigmatization that can affect human dignity. We must not override ethical and legal principles in our rush to collect data. Transparency around data collection processes and use is also needed, as well as data minimization. Some might be taking advantage of the situation to amass large amounts of data for alternative purposes, which is unethical. Large amounts of data also bring increased risk of data breaches. When people are scared, as in COVID times, they will be willing to hand over data. We need to provide oversight and keep watch over government entities, health facilities, and third-party data processors to ensure data is protected and not misused.
  • MERL Practitioners are seeking more guidance and support on: aspects of consent and confidentiality; bias and interference in data collection by governments and community leaders; overcollection of data leading to fatigue; misuse of sensitive data such as location data; potential for re-identification of individuals; data integrity issues; lack of encryption; and some capacity issues.
  • Good practices and recommendations include ethical clearance of data and data assurance structures; rigorous methods to reduce bias; third party audits of data and data protection processes; localization and contextualization of data processes and interpretation; and “do no harm” framing.

Download reports:

Read about the other gLOCAL Evaluation 2020 events from CLEAR-AA and MERL Tech:

Three Problems — and a Solution — for Data Viz in MERL and ICT4D

Guest post by Amanda Makulec, MPH, Data Visualization Society Operations Director

Just about everyone I know in the ICT4D and MERL communities has interacted with, presented, or created a chart, dashboard, infographic, or other data visualization. We’ve also all seen charts that mislead, confuse, or otherwise fall short of making information more accessible. 

The goal of the Data Visualization Society is to collect and establish best practices in data viz, fostering a community that supports members as they grow and develop data visualization skills. With more than 11.5K members from 123 countries on our first birthday, the society has grown faster than any of the founders imagined.

There are three reasons you should join the Data Visualization Society to improve your data visualizations in international development.

Self-service data visualization tools are everywhere, but that doesn’t mean we’re always building usable charts and graphs.

We’ve seen the proliferation of dashboards and enthusiasm for data viz as a tool to promote data-driven decision-making.

Just about anyone can make a chart if they have a table of data, thanks to the wide range of tools out there (Flourish, RAWgraphs, Datawrapper, Tableau, PowerBI…to name a few). Without a knowledge of data viz fundamentals though, it’s easy to use these tools to create confusing and misleading graphs.

A recent study on user-designed dashboards in DHIS2 (a commonly used data management and analysis platform in global health) found that “while the technical flexibility of [DHIS2] has been taken advantage of by providing platform customization training…the quality of the dashboards created face numerous challenges.” (Aprisa & Sebo, 2020).  

The researchers used a framework from Stephen Few to evaluate the frequency of five different kinds of ‘dashboard problems’ on 80 user-designed sample dashboards. The five problem ‘types’ included: context, dashboard layout, visualization technique, logical, and data quality. 

Of the 80 dashboards evaluated, 69 (83.1%) had at least one visualization technique problem (Aprisa & Sebo, 2020). Many of the examples shared in the paper could be easily addressed, like transforming the pie chart made of slices representing points in time into a line graph.
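The pie-chart-of-time-points fix mentioned above is easy to demonstrate. The sketch below uses matplotlib with invented monthly counts (not data from the study): the same values are drawn once as pie slices, which hides the trend, and once as a line, which shows it.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display required
import matplotlib.pyplot as plt

# Invented monthly clinic-visit counts, for illustration only.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
visits = [120, 135, 150, 140, 160, 175]

fig, (ax_pie, ax_line) = plt.subplots(1, 2, figsize=(10, 4))

# The problem: pie slices imply parts of a whole, so the
# month-to-month trend is invisible.
ax_pie.pie(visits, labels=months)
ax_pie.set_title("Misleading: time as pie slices")

# The fix: put time on the x-axis so the trend is the first thing seen.
ax_line.plot(months, visits, marker="o")
ax_line.set_title("Clearer: time on the x-axis")
ax_line.set_ylabel("Visits")

fig.savefig("pie_vs_line.png")
```

The same principle applies in DHIS2, Tableau, or any other tool: match the chart type to the question (trend over time, comparison, part-of-whole) before worrying about styling.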

With so many tools at our fingertips, how can we use them to develop meaningful, impactful charts and interactive dashboards? Learning the fundamentals of data visualization is an excellent place to start, and DVS offers a free-to-join professional home to learn those fundamentals.

Many of the communities that exist around data visualization are focused on specific tools, which may not be relevant or accessible for your organization.

In ICT4D, we often have to be scrappy and flexible. That means learning how to work with open source tools, hack charts in Excel, and often make decisions about what tool to use driven as much by resource availability as functionality. 

There are many great tool-specific communities out there: TUGs, PUGs, RLadies, Stack Overflow, and more. DVS emerged out of a need to connect people looking to share best practices across the many disciplines doing data viz: journalists, evaluators, developers, graphic designers, and more. That means not being limited to one tool or platform, so we can look for what fits a given project or audience.

After joining DVS, you’ll receive an invite to the Society’s Slack, a community “workspace” with channels on different topics and for connecting different groups of people within the community. You can ask questions about any data viz tool on the #topic-tools channel, and explore emerging and established platforms with honest feedback on how other members have used them in their work.

Data visualization training often means one-off workshops. Attendees leave enthusiastic, but then don’t have colleagues to rely on when they run into new questions or get stuck.

Data visualization isn’t consistently taught as a foundational skill for public health or development professionals.

In university, there may be a few modules within a statistics or evaluation class, but seldom are there dedicated, semester-long classes on visualization; those are reserved for computer science and analytics programs (though this seems to be slowly changing!). Continuing education in data viz usually means short workshops, not long-term mentoring relationships.

So what happens when people are asked to “figure it out” on the job? Or attend a two-day workshop and come away as a resident data viz expert?

Within DVS, our leadership and our members step up to answer questions and be that coach for people at all stages of learning data visualization. We even have a dedicated feedback space within Slack to share examples of data viz work in progress and get feedback.

DVS also enables informal connections on a wide range of topics. Go to #share-critique to post work-in-progress visualizations and seek feedback from the community. We also host quarterly challenges where you can practice hands-on with provided data sets to develop your data viz skills, and we have plans to launch a formal mentorship program in 2020.

Join DVS today to get its benefits – members from Africa, Asia, and other underrepresented areas are especially encouraged to join us now!

Have any questions? Or ideas on ways DVS can support our global membership base? Find me on Twitter – my DMs are open.

Four Reflections on the 2019 MERL Tech Dashboards Competition

by Amanda Makulec, Excella Labs. This post first appeared here.

Data visualization (viz) has come a long way in our MERL Tech community. Four years ago the conversation was around “so you think you want a dashboard?” which evolved to a debate on dashboards as the silver bullet solution (spoiler: they’re not). Fast forward to 2019, when we had the first plenary competition of dashboard designs on the main stage!

Wayan Vota and Linda Raftree, MERL Tech Organizers, were kind enough to invite me to be a judge for the dashboard competition. Let me say: judging is far less stressful than presenting. Having spoken at MERL Tech every year on a data viz topic since 2015, it felt novel to not be frantically reviewing slides the morning of the conference.

The competition sparked some reflections on how we’ve grown and where we can continue to improve as we use data visualization as one item in our MERL toolbox.

1 – We’ve moved beyond conversations about ‘pretty’ and are talking about how people use our dashboards.

Thankfully, our judging criteria and final selection were not limited to which dashboard was the most beautiful. Instead, we focused on the goal, how the data was structured, why the design was chosen, and the impact it created.

One of the best stories from the stage came from DAI’s Carmen Tedesco (one of three competition winners), who demoed a highly visual interface that even included custom spatial displays of how safe girls felt in different locations throughout a school. When the team demoed the dashboard to their Chief of Party, he was underwhelmed… because he was colorblind and couldn’t make sense of many of the visuals. They pivoted, added more tabular, text-focused, grayscale views, and the team was thrilled.

Carmen Tedesco presents a dashboard used by a USAID-funded education project in Honduras. Image from Siobhan Green: https://twitter.com/siobhangreen/status/1169675846761758724

Having a competition judged on impact, not just display, matters. What gets measured gets done, right? We need to reward and encourage the design and development of data visualization that has a purpose and helps someone do something – whether it’s raising awareness, making a decision, or something else – not just creating charts for the sake of telling a donor that we have a dashboard.

2 – Our conversations about data visualization need to be anchored in larger dialogues about data culture and data literacy.

We need to continue to move beyond talking about what we’re building and focus on for whom, why, and what else is needed for the visualizations to be used.

Creating a “data culture” on a small project team is complicated. In a large global organization or slow-to-change government agency, it can feel impossible. Making data visual, nurturing that skillset within a team, and building a culture of data visualization is one part of the puzzle, but we need champions outside of the data and M&E (monitoring and evaluation) teams who support that organizational change. A Thursday morning MERL Tech session dug into eight dimensions of data readiness, all of which are critical to having dashboards actually get used – learn more about this work here.

Village Enterprise’s winning dashboard was simple in design, constructed of various bar charts on enterprise performance, but was tailored to different user roles to create customized displays. By serving up the data filtered to a specific user level, they encourage adoption and use instead of requiring a heavy mental load from users to filter to what they need.

3 – Our data dashboards look far more diverse in scope, purpose, and design than the cluttered widgets of early days.

The three winners we picked were diverse in their project goals and displays, including a complex map, a PowerBI project dashboard, and a simple interface of bar charts designed for various user levels on local enterprise success metrics.

One of the winners – Fraym – was a complex, interactive map display allowing users to zoom in to the square kilometer level. Layers for various metrics, from energy to health, can be turned on or off depending on the use case. Huge volumes of data had to be ingested, including both spatial and quantitative datasets, to make the UI possible.

In contrast, the People’s Choice winner wasn’t a quantitative dashboard of charts and maps. Matter of Focus’ OutNav tool instead visualizes the certainty around elements of a theory of change, using visual encodings in the form of colors, saturation, and layout within a workflow, and helps organizations show where they’ve contributed to change.

Seeing the diversity of displays, I’m hopeful that we’re moving away from one-size-fits-all solutions or reliance on a single tech stack (whether Excel, Tableau, PowerBI or something else) and continuing to focus more on crafting products that solve problems for someone, which may require us to continue to expand our horizons regarding the tools and designs we use.

4 – Design still matters though, and data and design nerds should collaborate more often.

That said, there remain huge opportunities for more design in our data displays. Last year, I gave a MERL Tech lightning talk on why no one is using your dashboard, which focused on the need for more integration of design principles in our data visualization development; those principles still resonate today.

Principles from graphic design, UX, and other disciplines can take a specific visualization from good to great – the more data nerds and designers collaborate, the better (IMHO). Otherwise, we’ll continue the epidemic of dashboards, many of which are tools designed to do ALL THE THINGS without being tailored enough to be usable by the most important audiences.

An invitation to join the Data Viz Society

If you’re interested in more discourse around data viz, consider joining the Data Viz Society (DVS) to connect with the more than 8,000 members from around the globe who have joined since we launched in February (it’s free!).

DVS connects visualization enthusiasts across disciplines, tech stacks, and expertise, and aims to collect and establish best practices, fostering a community that supports members as they grow and develop data visualization skills.

We (I’m the volunteer Operations Director) have a vibrant Slack workspace packed with topic and location channels (you’ll get an invite when you join), two-week-long moderated Topics in DataViz conversations, data viz challenges, our journal (Nightingale), and more.

More on ways to get involved in this thread – including our data viz practitioner survey results challenge closing 30 September 2019 that has some fabulous cash prizes for your data viz submissions!

We’re actively looking for more diversity in our geographic representation, and would particularly welcome voices from countries outside of North America. A recent conversation about data viz in LMICs (low and middle income countries) was primarily voices from headquarters staff – we’d love to hear more from the field.

I can’t wait to see what the data viz conversations are at MERL Tech 2020!

New Report: Global Innovations in Measurement and Evaluation

On June 26th, New Philanthropy Capital (NPC) released its “Global Innovations in Measurement and Evaluation” report. In it, NPC outlines and elaborates on eight concepts that represent innovations in conducting effective measurement and evaluation of social impact programs. The list of concepts was distilled from conversations with leading evaluation experts about what is exciting in the field and what is most likely to make a long-lasting impact on the practice of evaluation. Below, we feature each of these eight concepts accompanied by brief descriptions of their meanings and implications.

User-Centric

The key to making an evaluation user-centric is to ensure that the service users are truly involved in every stage of the evaluation process. In this way, the power dynamic ceases to be unidirectional as more agency is given to the user. As a result, not only can findings become more compelling to decision makers because of more robust data collection, but those responsible for the program also become accountable to the users in addition to the funders, a shift that is both ethically important and valuable for the trust it builds.

Shared Measurement & Evaluation

Shared measurement and evaluation requires multiple organizations with similar missions, programs or users to work together to measure their own and their combined impact. This involves using the same evaluation metrics and, at a more advanced stage, developing shared measurement tools and methodologies. Pooling data and comparing outcomes creates a bigger dataset that can support stronger conclusions and provide more insights.

Theory-Based Evaluation

The central idea behind theory-based evaluation is to not only measure the outcome of a program but to also get at the reason why it does or does not work. Typically, this approach begins with a theory of change that proposes an explanation for how activities lead to impact, and this theory is then tested and accepted, refuted or qualified. It is important to apply this concept because without an understanding of why programs work, there is a risk that mistakes will be repeated or that attempts to replicate a program will fail when attempted under different conditions.

Impact Management

Impact management is the integration of impact assessment into strategy and performance management by regularly collecting data and responding to it with course corrections designed to improve the outcomes of a program. This method contrasts with assessment strategies that only examine a program at the end of its life cycle. The objective here is to be flexible and adaptive in order to produce a more effective intervention rather than waiting to evaluate it until there is nothing that can be done to change it.

Data Linkage

Data linkage is the act of bringing together different but relevant data about a specified group of users from beyond a single organization or sub-sector dataset. For example, a homelessness charity that supports its users in accessing social housing could link its data with local council records to see whether those users ultimately remained in their homes. In essence, this method allows organizations to leverage increasing quantities of data to create comparison groups and track the long-term impacts of their programs.
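The homelessness-charity example can be sketched in a few lines of code. Everything below is hypothetical (the IDs, field names, and outcomes are invented); in practice the join key would be a carefully governed, pseudonymised identifier shared between the two organizations.

```python
# Hypothetical charity case records: who received housing support.
charity_records = [
    {"client_id": "A01", "housing_support": True},
    {"client_id": "A02", "housing_support": True},
    {"client_id": "A03", "housing_support": False},
]

# Hypothetical council records: whether the client was still housed a year on.
council_records = [
    {"client_id": "A01", "still_housed_12m": True},
    {"client_id": "A02", "still_housed_12m": False},
]

# Index the council data by the shared identifier, then join.
council_by_id = {rec["client_id"]: rec for rec in council_records}
linked = [
    {**rec,
     "still_housed_12m": council_by_id.get(rec["client_id"], {}).get("still_housed_12m")}
    for rec in charity_records
]
# A03 has no matching council record, so the long-term outcome stays unknown.
```

The same left-join pattern scales up in tools like pandas or SQL; the hard parts of data linkage are the governance, consent, and identifier quality, not the join itself.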

Big Data

Big data is typically considered as the data generated as a by-product of digital transactions and interactions. It is a category that includes people’s social media activity, web searches and digital financial transaction trails. New technology has expanded the human ability to analyze large datasets, and consequently big data has become a powerful tool for helping identify trends and patterns, even if it does not provide explanations for them.

Remote Sensing

Remote sensing uses technology, such as mobile phones, to gather information from afar. This method is useful because it allows one to collect data that may not be typically accessible. Additionally, remote sensing data can be highly detailed, accurate, and in real time. Finally, one of its great strengths is that it is generated passively, which reduces the possibility of introducing researcher bias through human input.

Data Visualization

Data visualization is the practice of presenting data in a graphic form. New technology has made it possible to create a broad range of useful visualizations. The result is that data is now more accessible to non-specialists, and the insights produced through analysis can now be better understood and communicated.

For more details and more examples of real-world applications of these concepts, check out the full “Global Innovations in Measurement and Evaluation” report here.