
Tools, tips and templates for making Responsible Data a reality

by David Leege, CRS; Emily Tomkys, Oxfam GB; Nina Getachew, mSTAR/FHI 360; and Linda Raftree, Independent Consultant/MERL Tech, who led the session "Tools, tips and templates for making responsible data a reality."

The data lifecycle.

For this year’s MERL Tech DC, we teamed up to do a session on Responsible Data. Based on feedback from last year, we knew that people wanted less discussion on why ethics, privacy and security are important, and more concrete tools, tips and templates. Though it’s difficult to offer specific do’s and don’ts, since each situation and context needs individualized analysis, we were able to share a lot of the resources that we know are out there.

To kick off the session, we quickly explained what we meant by Responsible Data. Then we handed out some cards from Oxfam’s Responsible Data game and asked people to discuss their thoughts in pairs. Some of the statements that came up for discussion included:

  • Being responsible means we can’t openly share data – we have to protect it
  • We shouldn’t tell people they can withdraw consent for us to use their data when in reality we have no way of doing what they ask
  • Biometrics are a good way of verifying who people are and reducing fraud

Following the card game, we asked people to gather around four tables, each with a die and a printout of the data lifecycle where each phase corresponded to a number (planning = 1, collecting = 2, storage = 3, and so on…). Each participant rolled the die and, based on the number, told a "data story" of an experience, concern or data failure related to that phase of the lifecycle. Then the group discussed the stories.

For our last activity, each of us took a specific pack of tools, templates and tips and rotated around the four tables to share experiences and discuss practical ways to move towards stronger responsible data practices.

Responsible data values and principles

David shared Catholic Relief Services’ process of developing a responsible data policy, which they started in 2017 by identifying core values and principles and how they relate to responsible data. This was based on national and international standards such as the Humanitarian Charter including the Humanitarian Protection Principles and the Core and Minimum Standards as outlined in Sphere Handbook Protection Principle 1; the Protection of Human Subjects, known as the “Common Rule” as laid out in the Department of Health and Human Services Policy for Protection of Human Research Subjects; and the Digital Principles, particularly Principle 8 which mandates that organizations address privacy and security.

As a Catholic organization, CRS follows the principles of Catholic social teaching, which directly relate to responsible data in the following ways:

  • Sacredness and dignity of the human person – we will respect and protect an individual’s personal data as an extension of their human dignity;
  • Rights and responsibilities – we will balance the right to be counted and heard with the right to privacy and security;
  • Social nature of humanity – we will weigh the benefits and risks of using digital tools, platforms and data;
  • Common good – we will open data for the common good only after minimizing the risks;
  • Subsidiarity – we will prioritize local ownership and control of data for planning and decision-making;
  • Solidarity – we will work to educate, inform and engage our constituents in responsible data approaches;
  • Option for the poor – we will take a preferential option for protecting and securing the data of the poor; and
  • Stewardship – we will responsibly steward the data that is provided to us by our constituents.

David shared a draft version of CRS’ responsible data values and principles.

Responsible data policy, practices and evaluation of their roll-out

Oxfam released its Responsible Program Data Policy in 2015. Since then, they have carried out six pilots to explore how to implement the policy in a variety of countries and contexts. Emily shared information on these pilots and the results of research carried out by the Engine Room called Responsible Data at Oxfam: Translating Oxfam's Responsible Data Policy into practice, two years on. The report concluded that the staff who have engaged with Oxfam's Responsible Data Policy find it both practically relevant and important. One of the recommendations of this research was that Oxfam needed to increase uptake amongst staff and provide an introductory guide to responsible data.

In response, Oxfam created the Responsible Data Management pack (available in English, Spanish, French and Arabic), which includes the game played in this session along with other tools and templates. The card game introduces some of the key themes and tensions inherent in making responsible data decisions. The examples on the cards are derived from real experiences at Oxfam and elsewhere, and they aim to generate discussion and debate. Oxfam's training pack also includes other tools, such as advice on taking photos, a data planning template, a poster of the data lifecycle and general information on how to use the training pack. Emily's session also encouraged discussion with participants about governance and accountability issues, such as who in the organisation manages responsible data and how to make responsible data decisions when each context may require a different action.

Emily also shared a number of related resources.

A packed house for the responsible data session.

Responsible data case studies

Nina shared early results of four case studies mSTAR is conducting together with Sonjara for USAID. The case studies are testing a draft set of responsible data guidelines, determining whether they are adequate for 'on the ground' situations and whether projects find them relevant, useful and usable. The guidelines were designed collaboratively, based on a thorough review and synthesis of responsible data practices and policies of USAID and other international development and humanitarian organizations. To conduct the case studies, Sonjara, Nina and other researchers visited four programs that are collecting large amounts of potentially sensitive data in Nigeria, Kenya and Uganda. The researchers interviewed a broad range of stakeholders and looked at how the programs use, store and manage personally identifiable information (PII). Based on the research findings, adjustments are being made to the guidelines. It is anticipated that they will be published in October.

Nina also talked about CALP/ELAN's data sharing tipsheets, which include a draft data-sharing agreement that organizations can adapt to their own contracting documents. She circulated a handout identifying the core elements of the Fair Information Practice Principles (FIPPs) that are important to consider when using PII.

Responsible data literature review and guidelines

Linda mentioned that a literature review of responsible data policy and practice has been done as part of the above-mentioned mSTAR project (which she also worked on). The literature review will provide additional resources and analysis, including an overview of the core elements that should be included in organizational data guidelines, an overview of USAID policy and regulations, emerging legal frameworks such as the EU's General Data Protection Regulation (GDPR), and good practice on how to develop guidelines in ways that enhance uptake and use. The hope is that both the responsible data literature review and the responsible data guidelines will be suitable for other organizations to adopt and adapt. The guidelines will offer a set of critical questions and orientation, but ethical and responsible data practices will always be context specific and cannot be a "check-box" exercise, given the complexity of the elements that combine in each situation.

Linda also shared some tools, guidelines and templates that have been developed in the past few years, such as Girl Effect’s Digital Safeguarding Guidelines, the Future of Privacy Forum’s Risk-Benefits-Harms framework, and the World Food Program’s guidance on Conducting Mobile Surveys Responsibly.

More tools, tips and templates

Check out this responsible data resource list, which includes additional tools, tips and templates. It was developed for MERL Tech London in February 2017 and we continue to add to it as new documents and resources come out. After a few years of advocating for ‘responsible data’ at MERL Tech to less-than-crowded sessions, we were really excited to have a packed room and high levels of interest this year!   

Mobile Case Management for Multi-Dimensional Accountability

This is a cross-post from Christopher Robert of Dobility. It was originally published September 13 on the SurveyCTO blog.

At MERL Tech DC 2017, Oxfam's Emily Tomkys Valtari and I teamed up to lead a session on Mobile case management for multi-dimensional accountability. This blog post shares some highlights from that session. [Note: session slides are available here]

Background

In its Your Word Counts project, Oxfam is collaborating with local and global partners to capture, analyze, and respond to community feedback data using a mobile case management tool. The goal is to inform Oxfam's Middle East humanitarian response and give those affected by crisis a voice for improved support and services. This project is a scale-up of an earlier pilot project, and both the pilot and the scale-up have been supported by the Humanitarian Innovation Fund.

Oxfam’s use of SurveyCTO’s case-management features has been innovative, and they have been helping to support improvements in the core technology. In this session, we discussed both the core technology and the broader organizational and logistical challenges that Oxfam has encountered in the field.

Mobile case management: an introduction 

In standard applications of mobile data collection, enumerators, inspectors, program officers, or others use a mobile phone or tablet to collect data. Whether they quietly observe things, interview people, or inspect facilities, they ultimately enter some kind of data into a mobile device. In systems like SurveyCTO, data collection officially begins when they click the Fill Blank Form button and choose a digital form to fill out.

Mobile data collection

Mobile case management is much the same, but the process begins with cases and then proceeds to forms. As far as the core technology is concerned, a case might be a clinic, a school, a water point, a household – pretty much any unit that's meaningful in the given context. Instead of choosing Fill Blank Form and selecting a form, users in the field choose Manage Cases and pick a particular case from a list that's filtered specifically for that user (e.g., to include only schools in their area); once they select a case, they then select one of the forms that is outstanding for that case.

Mobile case management
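To make that flow a bit more concrete, here is a minimal sketch in Python of how a filtered case list and per-case form selection might be modelled. It is purely illustrative: the class, field, and user names are invented for this example and do not reflect SurveyCTO's actual implementation.

```python
# Illustrative sketch only: a toy model of the "Manage Cases" flow described
# above. All names here are hypothetical, not SurveyCTO's real schema.
from dataclasses import dataclass, field


@dataclass
class Case:
    case_id: str
    label: str
    forms_outstanding: list[str]                       # forms still to be filled for this case
    assigned_users: list[str] = field(default_factory=list)


def cases_for_user(case_list: list[Case], username: str) -> list[Case]:
    """Return only the cases this field user should see in their case list."""
    return [c for c in case_list if username in c.assigned_users]


def outstanding_forms(case: Case) -> list[str]:
    """Forms the user can choose from once they have selected a case."""
    return list(case.forms_outstanding)


# Example: a field user browses their filtered case list and picks a case.
case_list = [
    Case("sch-001", "Hilltop Primary School", ["inspection_v2"], ["enumerator_a"]),
    Case("sch-002", "Riverside School", ["inspection_v2", "follow_up"], ["enumerator_b"]),
]
my_cases = cases_for_user(case_list, "enumerator_a")
print([c.label for c in my_cases])      # -> ['Hilltop Primary School']
print(outstanding_forms(my_cases[0]))   # -> ['inspection_v2']
```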

Behind the scenes, the case list is really just a spreadsheet. It includes columns for the unique case ID, the label that should be used to identify the case to users, the list of forms that should be filled for the case, and the users and/or user roles that should see the case listed in their case list. Importantly, the case list is not static: any form can update or add a case, and thus as users fill forms the case list can be dynamically revised and extended. (In SurveyCTO, the case list is simply a server dataset: it can be manually uploaded as a .csv, attached to forms, and updated just like any other dataset.)

Mobile case management: case list
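As a rough illustration of the "case list is really just a spreadsheet" idea, the sketch below models a case list as a small CSV and shows how a submitted form might revise it. The column names (case_id, label, forms, users) and the pipe-separated form list are hypothetical conventions for this example, not SurveyCTO's actual dataset format.

```python
# Illustrative sketch only: a hypothetical case-list CSV and a simulated
# update triggered by a form submission. Column names are invented.
import csv
import io

CASE_LIST_CSV = """case_id,label,forms,users
sch-001,Hilltop Primary School,inspection_v2,enumerator_a
sch-002,Riverside School,inspection_v2|follow_up,enumerator_b
"""


def load_cases(text: str) -> list[dict]:
    """Read the case list into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))


def apply_form_submission(cases: list[dict], case_id: str, completed_form: str) -> None:
    """Simulate a submitted form dynamically revising the case list:
    remove the completed form from the case's outstanding forms.
    New cases could be appended to the list in the same way."""
    for row in cases:
        if row["case_id"] == case_id:
            remaining = [f for f in row["forms"].split("|") if f != completed_form]
            row["forms"] = "|".join(remaining)


cases = load_cases(CASE_LIST_CSV)
apply_form_submission(cases, "sch-002", "inspection_v2")
print(cases[1]["forms"])   # -> 'follow_up'
```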

Oxfam’s innovative use case: Your Word Counts 

Oxfam accountability feedback loop. Diagram credit: Oxfam GB.

In Oxfam’s Your Word Counts project, cases represent any kind of feedback from the community. Volunteers and program staff carry mobile phones and log feedback as new cases whenever they interact with community members; technical teams then work to resolve feedback within a week, filling out new forms to update cases as their status changes; and program staff then close the loop with the original community members when possible, before closing the case. Because the data is all available in a single electronic system, in-country, regional, and even global teams can then report on and analyze both the community feedback and the responses over time.
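As a rough sketch of how such a feedback loop might be tracked in code, the example below models a feedback case moving through a simple status lifecycle as update forms are submitted. The status names and allowed transitions are invented for illustration and are not Oxfam's actual categories or workflow.

```python
# Illustrative sketch only: a hypothetical status lifecycle for a feedback
# case in a system like the one described above.
from enum import Enum


class Status(Enum):
    LOGGED = "logged"        # volunteer records community feedback as a new case
    REFERRED = "referred"    # feedback referred to the relevant technical team
    RESOLVED = "resolved"    # technical team has acted on the feedback
    CLOSED = "closed"        # program staff have closed the loop with the community


# Allowed transitions: each update form moves a case one step forward.
ALLOWED = {
    Status.LOGGED: {Status.REFERRED},
    Status.REFERRED: {Status.RESOLVED},
    Status.RESOLVED: {Status.CLOSED},
    Status.CLOSED: set(),
}


def advance(current: Status, new: Status) -> Status:
    """Apply a status update, rejecting transitions that skip a step."""
    if new not in ALLOWED[current]:
        raise ValueError(f"Cannot move case from {current.value} to {new.value}")
    return new


# Example: a case moves through the loop as update forms are submitted.
status = Status.LOGGED
for step in (Status.REFERRED, Status.RESOLVED, Status.CLOSED):
    status = advance(status, step)
    print(status.value)
```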

There have been some definite successes in piloting and early scale-up:

  • By listening to community members, recording their feedback, and following up, the community feedback system has helped to build trust.
  • The digital process of recording referrals, updates, and eventually responses has been rapid, speeding responsiveness to feedback overall.
  • Since all digital forms can be updated easily, the system is dynamic and flexible enough to adapt as programs or needs change.
  • The solution appears to be low-cost, scalable, and sustainable.

There have been both organizational and logistical challenges, however. For example:

  • For a system like this to be truly effective, fundamental responsibility for accountability must be shared organization-wide. While MEAL officers (monitoring, evaluation, accountability, and learning officers) can help to set up and manage accountability systems, technical teams, program teams, and senior leadership ultimately have to share ownership and responsibility in order for the system to function and be sustained.
  • Globally-predefined feedback categories turned out not to fit well with early deployment contexts, and so the program team needed to re-think how to most effectively categorize feedback. (See Oxfam’s blog post on the subject.)
  • In dynamic in-country settings, staff turnover can be high, posing major logistical and sustainability challenges for systems of all kinds.
  • While community members can add and update cases offline, ultimately an Internet connection is required to synchronize case lists with a central server. In some settings, access to office Internet has been a challenge.
  • Ideally, cases would be easily referred across agencies working in a particular setting, but some agencies have been reluctant to buy into shared digital systems.

Oxfam’s MEAL team is exploring ways to facilitate a broader accountability culture throughout the organization. In country programs, for example, MEAL coordinators are looking to use office whiteboards to track key indicators of feedback performance and engage staff in discussions of what those indicators mean for them. More broadly, Oxfam is looking to highlight best practices in responding to and acting on feedback and seeking other ways to incentivize teams in this area.

Oxfam’s work is ongoing, and you can follow their progress on their project blog.

Mobile case management: Where it’s going 

While Oxfam works to build and support both systems and culture for accountability in their humanitarian response programs, we at Dobility are working to improve the core technology. With Oxfam’s feedback and support, we are currently working to improve the user interface used to filter and browse case lists, both on devices (in the field) and on the web (in the office). We are also working to improve the user interface for those setting up and managing these kinds of case-management systems. If you have specific ideas, please share them by commenting below!