
8 Ways to Adapt Your M&E During the COVID-19 Pandemic

Guest post from Janna Rous. Originally published here.

So, all of a sudden you’re stuck at home because of the new coronavirus. You’re looking at your M&E commitments and your program commitments. Do you put them all on hold and postpone them until the coronavirus threat has passed and everything goes back to normal? Or is there a way to still get things done? This article reviews 8 ways you can adapt your M&E during the pandemic.

Here are a few ideas that you and your team might consider to make sure you stay on track (and maybe even IMPROVE your MEAL practices), even if you are currently in the middle of a lockdown or think you might be going into one soon:

1. Phone Call Interviews instead of In-Person Interviews

Do you have any household assessments, baseline surveys or post-distribution monitoring planned for the next 1 to 3 months? Is there a way you can carry out these interviews by phone or WhatsApp call? This is the easiest and most direct way to carry on with your current M&E plan. Instead of doing these interviews face-to-face, just get them on a call. I’ve created a checklist to help you prepare for phone call interviews – click here to get the “Humanitarian’s Phone Call Interview Checklist”. Here are a few things you need to think through to transition to a phone-call methodology:

  • You need phone numbers and names of people that need to be surveyed. Do you have these?  Or is there a community leader who might be able to help you get these?
  • You also need to expect that a LOT of people may not answer their phone. So instead of “sampling” people for a survey, you might want to just plan on calling almost everyone on that list.
  • Just like for a face-to-face interview, you need to know what you’re going to say. So you need to have a script ready for how you introduce yourself and ask for consent to do a phone questionnaire.  It’s best to have a structured interview questionnaire that you follow for every phone call, just like you would in a face-to-face assessment.
  • You also need to have a way to enter data as you ask the questions. This usually depends on what you’re most comfortable with – but I recommend preparing an ODK or KoboToolbox questionnaire, just like you would for an in-person survey, and filling it out as you do the interview over the phone.  I find it easiest to enter the data into KoboToolbox “Webform” instead of the mobile app, because I can type information faster into my laptop rather than thumb-type it into a mobile device.  But use what you have!
  • If you’re not comfortable in KoboToolbox, you could also prepare an Excel sheet for directly entering answers – but this will probably require a lot more data cleaning later on (see the clean-up sketch after this list).
  • When you’re interviewing, it’s usually faster to type the answers in the language you’re interviewing in. If you need your final dataset to be in English, go back and do the translation after you’ve hung up the phone.
  • If you want a record of the interview, ask if you can record the phone call. If the person says yes, record it so you can go back and double-check an answer if you need to.
  • Very practically – if you’re doing lots of phone calls in a day, it is easier on your arm and your neck if you use a headset instead of holding your phone to your ear all day!
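If you do end up entering answers straight into an Excel sheet, much of that later clean-up can be scripted. Below is a minimal sketch in Python using pandas; the file name and column names (phone_number, consent_given) are hypothetical and would need to match your own sheet.

```python
import pandas as pd

# Hypothetical file and column names: adjust to match your own Excel sheet.
df = pd.read_excel("phone_survey_responses.xlsx")

# Strip stray whitespace and standardize capitalization in text columns.
text_cols = df.select_dtypes(include="object").columns
df[text_cols] = df[text_cols].apply(lambda col: col.str.strip().str.lower())

# Harmonize common yes/no variants entered by different interviewers.
yes_no_map = {"y": "yes", "yes": "yes", "n": "no", "no": "no"}
df["consent_given"] = df["consent_given"].map(yes_no_map)

# Flag rows with a missing phone number or missing consent for follow-up.
issues = df[df["phone_number"].isna() | (df["consent_given"] != "yes")]
print(f"{len(issues)} records need follow-up")

# Save a cleaned copy for analysis.
df.to_excel("phone_survey_responses_clean.xlsx", index=False)
```

Running a small check like this after each day of calls makes it easier to spot data-entry problems while the interviews are still fresh in your mind.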

2. Collect Videos & Photos Directly from Households and Communities

When you’re doing any in-person MEAL activities, you’re always able to observe evidence. You can look around and SEE impact, you don’t just hear it through an interview or group discussion.  But when you’re doing M&E remotely, you can’t double-check to see what impact really looks like.  So I recommend:

  • Connect with as many beneficiaries and team members as possible through WhatsApp or another communication app and collect photos and videos of evidence directly from them.
  • Video – Maybe someone has a story of impact they can share with you through video. Or if you’re overseeing a Primary Health Care clinic, perhaps you can have a staff member walk you through the clinic with a video so you can do a remote assessment.
  • Pictures – Maybe you can ask everyone to send you a picture of (for example) their “hand washing station with soap and water” (if you’re monitoring a WASH program). Or perhaps you want evidence that the local water point is functioning.

3. Programme Final Evaluation

It’s a good practice to do a final evaluation review when you reach the end of a program.  If you have a program finishing in the next 1-3 months, and you want to do a final review to assess lessons learned overall, then you can also do this remotely!

  • Make a list of all the stakeholders that would be great to talk to: staff members, a few beneficiaries, government authorities (local and/or national), other NGOs, coordination groups, partner organizations, local community leaders.
  • Then go in search of either their phone numbers, their email addresses, their Skype accounts, or their WhatsApp numbers and get in touch.
  • It’s best if you can get on a video chat with as many of them as possible – because it’s much more personal and easy to communicate if you can see one another’s faces! But if you can just talk with audio – that’s okay too.
  • Prepare a semi-structured interview: a list of questions you want to talk through about the impact, what went well, and what could have gone better. If anything interesting comes up, don’t worry about adding new questions on the spot or skipping questions that don’t make sense in the context.
  • You can also gather together any monitoring reports or analyses done on the project throughout its implementation period, plus pictures of the interventions.
  • Use all this information to create a final “lessons learned” evaluation document. This is a fantastic way to continually improve the way you do humanitarian programming.

4. Adapt Your Focus Group Discussion Plan

If everyone is at home because your country has imposed a lockdown, it will be very difficult to do a focus group discussion because… you can’t be in groups! So decide with your team whether it might be better to switch your monitoring activity from collecting qualitative data in group discussions to one-on-one phone interviews with several people to collect the same information.

  • There are some dynamics that you will miss in one-on-one interviews, and some information may only come out during group discussions (especially where you’re collecting sensitive or “taboo” data). Identify what that type of information might be, and either skip those types of questions for now or brainstorm how else you could collect it through phone calls.

5. Adapt Your Key Informant Interviews

If you normally carry out Key Informant Interviews, it would be a great idea to think about what “extra” questions you need to ask this month in the midst of the coronavirus pandemic.

  • If you normally ask questions around your program sector areas, think about also collecting a few extra data points about feelings, needs, fears, and challenges that are a reality in light of COVID-19. Are people facing any additional pressures due to the pandemic? Are there any new humanitarian needs right now? Are there any upcoming needs that people are anticipating?
  • It goes without saying that if your Key Informant Interviews are normally in person, you’ll want to carry these out by phone for the foreseeable future!

6. What To Do About Third Party Monitoring

Some programs and donors use Third Party Monitors to assess their program results independently.  If you normally hire third party monitors, and you’ve got some third party monitoring planned for the next 1-3 months, you need to get on the phone with this team and make a new plan. Here are a few things you might want to think through with your third party monitors:

  • Can the third party carry out their monitoring by phone, in the same ways I’ve outlined above?
  • But also think it through: is it worth having a third-party monitor assess results remotely? Would it be better to postpone their monitoring, or to carry on regardless?
  • What are the budget implications? If cars won’t be used, are there any cost savings? Is there any additional budget they’ll need to cover airtime costs for their phones?
  • Make sure there is a plan to gather as much photo and video evidence as possible (see point 2 above!)
  • If they’re carrying out phone interviews, it’s also a good idea to recommend that they record the calls where possible and with consent, so you have the records if needed.

7. Manage Expectations – The Coronavirus Pandemic May Impact Your Program Results.

You probably didn’t predict that a global pandemic would occur in the middle of your project cycle and throw your entire plan off. Go easy on yourself and your team! It is likely that the results you’d planned for won’t all be achieved this year. Your donors know this (because they’re probably also on lockdown). You can’t control the pandemic, but you can control your response. So proactively manage your own expectations, your manager’s expectations and your donor’s expectations.

  • Get on a Skype or Zoom call with the project managers and review each indicator of your M&E plan. In light of the pandemic, what indicator targets will most likely change?
  • Look through the baseline numbers in your M&E plan – is it possible that the results at the END of your project might be worse than even your baseline numbers? For example, if you have a livelihoods project, it is possible that income and livelihoods will be drastically reduced by a country-wide lockdown.  Or are you running an education program?  If schools have been closed, then will a comparison to the baseline be possible?
  • Once you’ve done a review of your M&E plan, create a very simple revised plan that can be talked through with your program donor.

8. Talk To Your Donors About What You Can Do Remotely

When you’re on the phone with your donors, don’t only talk about revised program indicators.

  • Also talk about a revised timeframe – is there any flexibility on the program timeframe, or deadlines for interim reporting on indicators? What are their expectations?
  • Also talk about what you CAN do remotely. Discuss with them the plan you have for carrying on everything possible that can be done remotely.
  • And don’t forget to discuss financial implications of changes to timeframe.

 

5 tips for operationalizing Responsible Data policy

By Alexandra Robinson and Linda Raftree

MERL and development practitioners have long wrestled with complex ethical, regulatory, and technical aspects of adopting new data approaches and technologies. The topic of responsible data (RD) has gained traction over the past 5 years or so, and a handful of early adopters have developed and begun to operationalize institutional RD policies. Translating policy into practical action, however, can feel daunting to organizations. Constrained budgets, complex internal bureaucracies, and ever-evolving technology and regulatory landscapes make it hard to even know where to start.

The Principles for Digital Development provide helpful high level standards, and donor guidelines (such as USAID’s Responsible Data Considerations) offer additional framing. But there’s no one-size-fits-all policy or implementation plan that organizations can simply copy and paste in order to tick all the responsible data boxes. 

We don’t think organizations should do that anyway, given that each organization’s context and operating approach is different, and policy means nothing if it’s not rolled out through actual practice and behavior change!

In September, we hosted a MERL Tech pre-workshop on Operationalizing Responsible Data to discuss and share different ways of turning responsible data policy into practice. Below we’ve summarized some tips shared at the workshop. RD champions in organizations of any size can consider these when developing and implementing RD policy.

1. Understand Your Context & Extend Empathy

  • Before developing policy, conduct a non-punitive assessment (a.k.a. a landscape assessment, self-assessment or staff research process) of existing data practices, norms, and decision-making structures. This should engage everyone who will be using or affected by the new policies and practices. Help everyone relax and feel comfortable sharing how they’ve been managing data up to now so that the organization can then improve. (Hint: avoid the term ‘audit’, which makes everyone nervous.)
  • Create ‘safe space’ to share and learn through the assessment process:
    • Allow staff to speak anonymously about their challenges and concerns whenever possible
    • Highlight and reinforce promising existing practices
    • Involve people in a ‘self-assessment’
    • Use participatory workshops (e.g. work with a team to map a project’s data flows or conduct a Privacy Impact Assessment or a Risk-Benefits Assessment) – this allows everyone who participates to gain RD awareness and learn new practical tools, while also highlighting any areas that need attention. The workshop lead or “RD champion” can also then get a better sense of the wider organization’s knowledge, attitudes and practices related to RD.
    • Acknowledge (and encourage institutional leaders to affirm) that most staff don’t have “RD expert” written into their JDs; reinforce that staff will not be ‘graded’ or evaluated on skills they weren’t hired for.
  • Identify organizational stakeholders likely to shape, implement, or own aspects of RD policy and tailor your engagement strategies to their perspectives, motivations, and concerns. Some may feel motivated financially (avoiding fines or the cost of a data breach); others may be motivated by human rights or ethics; whereas some others might be most concerned with RD with respect to reputation, trust, funding and PR.
  • Map organizational policies, major processes (like procurement, due diligence, grants management), and decision making structures to assess how RD policy can be integrated into these existing activities.

2. Consider Alternative Models to Develop RD Policy 

  • There is no ‘one size fits all’ approach to developing RD policy. As the (still small, but promising) number of organizations adopting policy grows, different approaches are emerging. Here are some that we’ve seen:
    • Top-down: An institutional-level policy is developed, normally at the request of someone on the leadership team/senior management. It is then adapted and applied across projects, offices, etc. 
      • Works best when there is strong leadership buy-in for RD policy and a focal point (e.g. an ‘Executive Sponsor’) coordinating policy formation and navigating stakeholders
    • Bottom-up: A group of staff are concerned about RD but do not have support or interest from senior leadership, so they ‘self-start’ the learning process and begin shaping their own practices, joining together, meeting, and communicating regularly until they have wider buy-in and can approach leadership with a use case and budget request for an organization-wide approach.
      • Good option if there is little buy-in at the top and you need to build a case for why RD matters.
    • Project- or Team-Generated: Development and application of RD policies are piloted within a targeted project or projects or on one team. Based on this smaller slice of the organization, the project or team documents its challenges, process, and lessons learned to build momentum for and inform the development of future organization-wide policy. 
      • Promising option when organizational awareness and buy-in for RD is still nascent and/or resources to support RD policy formation and adoption (staff, financial, etc.) are limited.
    • Hybrid approach: Organizational policy/policies are developed through pilot testing across a reasonably-representative sample of projects or contexts. For example, an organization with diverse programmatic and geographical scope develops and pilots policies in a select set of country offices that can offer different learning and experiences; e.g., a humanitarian-focused setting, a development-focused setting, and a mixed setting; a small office, medium sized office and large office; 3-4 offices in different regions; offices that are funded in various ways; etc.  
      • Promising option when an organization is highly decentralized and works across diverse country contexts and settings. Supports the development of approaches that are relevant and responsive to diverse capacities and data contexts.

3. Couple Policy with Practical Tools, and Pilot Tools Early and Often

  • In order to translate policy into action, couple it with practical tools that support existing organizational practices. 
  • Make sure tools and processes empower staff to make decisions and relate clearly to policy standards or components; for example:
    • If the RD policy includes a high-level standard such as, “We ensure that our partnerships with technology companies align with our RD values,” give staff tools and guidance to assess that alignment. 
  • When developing tools and processes, involve target users early and iteratively. Don’t worry if draft tools aren’t perfectly formatted. Design with users to ensure tools are actually useful before you sink time into tools that will sit on a shelf at best, and confuse or overburden staff at worst. 

4. Integrate and “Right-Size” Solutions 

  • As RD champions, it can be tempting to approach RD policy in a silo, forgetting it is one of many organizational priorities. Be careful to integrate RD into existing processes, align RD with decision-making structures and internal culture, and do not place unrealistic burdens on staff.
  • When building tools and processes, work with stakeholders to develop responsibility assignment charts (e.g. RACI, MOCHA) and determine decision makers.
  • When developing responsibility matrices, estimate the hours each stakeholder (including partners, vendors, and grantees) will dedicate to a particular tool or process (a rough sketch follows this list). Work with anticipated end users to ensure that processes:
    • Can realistically be carried out within a normal workload
    • Will not excessively burden staff and partners
    • Are realistically proportionate to the size, complexity, and risk involved in a particular investment or project
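As a rough illustration of that estimating step, here is a minimal Python sketch that totals the estimated hours behind a hypothetical RACI-style matrix, so unrealistic burdens can be spotted before a process is rolled out; the tasks, roles, and hour figures are invented for the example.

```python
# Hypothetical RACI-style matrix: each task lists who is Responsible (R),
# Accountable (A), Consulted (C), and Informed (I), plus estimated hours per person.
raci = [
    {"task": "Privacy impact assessment",
     "R": "MEL officer", "A": "Program lead", "C": ["IT", "Legal"], "I": ["Partners"],
     "hours": {"MEL officer": 8, "IT": 2, "Legal": 2}},
    {"task": "Data-sharing agreement review",
     "R": "Grants manager", "A": "Program lead", "C": ["Legal"], "I": ["Vendor"],
     "hours": {"Grants manager": 4, "Legal": 3}},
]

# Total the estimated hours per stakeholder to check whether the workload is realistic.
totals = {}
for row in raci:
    for person, hrs in row["hours"].items():
        totals[person] = totals.get(person, 0) + hrs

for person, hrs in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {hrs} estimated hours")
```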

5. Bridge Policy and Behavior Change through Accompaniment & Capacity Building 

  • Integrating RD policy and practices requires behavior change and can feel technically intimidating to staff. Remember to reassure staff that no one (not even the best-resourced technology firms!) has responsible data mastered, and that perfection is not the goal.
  • In order to feel confident using new tools and approaches to make decisions, staff need knowledge to analyze information. Skills and knowledge required will be different according to role, so training should be adapted accordingly. While IT staff may need to know the ins and outs of network security, general program officers certainly do not. 
  • Accompany staff as they integrate RD processes into their work. Walk alongside them, answering questions along the way, but more importantly, helping staff build confidence to develop their own internal RD compass. That way the pool of RD champions will grow!

What approaches have you seen work in your organization?

Qualitative Coding: From Low Tech to High Tech Options

by Daniel Ramirez-Raftree, MERL Tech volunteer

In their MERL Tech DC session on qualitative coding, Charles Guedenet and Anne Laesecke from IREX together with Danielle de Garcia of Social Impact offered an introduction to the qualitative coding process followed by a hands-on demonstration on using Excel and Dedoose for coding and analyzing text.

They began by defining content analysis as any effort to make sense of qualitative data that takes a volume of qualitative material and attempts to identify core consistencies and meanings. More concretely, it is a research method that uses a set of procedures to make valid inferences from text. They also shared their thoughts on what makes for a good qualitative coding method.

Their belief is that it should:

  • consider what is already known about the topic being explored
  • be logically grounded in this existing knowledge
  • use existing knowledge as a basis for looking for evidence in the text being analyzed

With this definition laid out, they moved to a discussion about the coding process where they elaborated on four general steps:

  1. develop codes and a codebook
  2. decide on a sampling plan
  3. code your data (then go back and do it again!)
  4. test for reliability

Developing codes and a codebook is important for establishing consistency in the coding process, especially if there will be multiple coders working on the data. A good way to start developing these codes is to consider what is already known. For example, you can think about literature that exists on the subject you’re studying. Alternatively, you can simply turn to the research questions the project seeks to answer and use them as a guide for creating your codes. Beyond this, it is also useful to go through the content and think about what you notice as you read. Once a codebook is created, it will lend stability and some measure of objectivity to the project.

The next important issue is the question of sampling. When determining sample size, though a larger sample will yield more robust results, one must of course consider the practical constraints of time, cost and effort. Does the benefit of higher quality results justify the additional investment? Fortunately, the type of data will often inform sampling. For example, if there is a huge volume of data, it may be impossible to analyze it all, but it would be prudent to sample at least 30% of it. On the other hand, usually interview and focus group data will all be analyzed, because otherwise the effort of obtaining the data would have gone to waste.

Regarding sampling method, the session leads highlighted two strategies that produce sound results. One is systematic random sampling and the other is quota sampling, a method employed to ensure that demographic groups are represented in fair proportion.
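For the systematic random sampling strategy, a minimal sketch in Python might look like the following; the list of transcripts and the target sample size are placeholders.

```python
import random

def systematic_sample(records, sample_size):
    """Pick every k-th record after a random start (systematic random sampling)."""
    k = max(1, len(records) // sample_size)   # sampling interval
    start = random.randrange(k)               # random starting point within the first interval
    return records[start::k][:sample_size]

# Placeholder example: sample roughly 30% of 200 interview transcripts.
transcripts = [f"transcript_{i:03d}" for i in range(200)]
sample = systematic_sample(transcripts, sample_size=60)
print(len(sample), sample[:5])
```

Quota sampling would add a step that groups the records by demographic category first and then samples from each group in proportion, following the definition above.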

Once these key decisions have been made, the actual coding can begin. Here, all coders should work from the same codebook and apply the codes to the same unit of analysis. Typical units of analysis are: single words, themes, sentences, paragraphs, and items (such as articles, images, books, or programs). Consistency is essential. A way to test the level of consistency is to have a 10% overlap in the content each coder analyzes and aim for 80% agreement between their coding of that content. If the coders are not applying the same codes to the same units, this could mean either that they are not trained properly or that the codebook needs to be altered.
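As an illustration of that consistency check, here is a minimal Python sketch that computes simple percent agreement between two coders on the overlapping units; the code labels are invented for the example.

```python
# Codes applied by two coders to the same overlapping units (hypothetical labels).
coder_a = ["access", "cost", "access", "quality", "cost", "access", "trust", "cost", "quality", "access"]
coder_b = ["access", "cost", "quality", "quality", "cost", "access", "trust", "cost", "access", "access"]

matches = sum(a == b for a, b in zip(coder_a, coder_b))
agreement = matches / len(coder_a)
print(f"Percent agreement: {agreement:.0%}")  # aim for at least 80%

if agreement < 0.8:
    print("Below threshold: revisit coder training or revise the codebook.")
```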

Along a similar vein, the fourth step in the coding process is to test for reliability. Challenges in producing stable and consistent results in coding could include: using a unit of analysis that is too large for a simple code to be reliably applied, coding themes or concepts that are ambiguous, and coding nonverbal items. For each of these, the central problem is that the units of analysis leave too much room for subjective interpretation that can introduce bias. Having a detailed codebook can help to mitigate against this.

After giving an overview of the coding process, the session leads suggested a few possible strategies for data visualization. One is to use a word tree, which helps one look at the context in which a word appears. Another is a bubble chart, which is useful if one has descriptive data and demographic information. Thirdly, correlation maps are good for showing what sorts of relationships exist among the data. The leads suggested visiting the website stephanieevergreen.com/blog for more ideas about data visualization.
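Dedicated tools are better for drawing an actual word tree, but the underlying idea of looking at the words around each occurrence of a term can be sketched in a few lines of Python; the sample text and the search term below are placeholders.

```python
import re

def keyword_in_context(text, term, window=4):
    """Return the few words before and after each occurrence of `term`."""
    words = re.findall(r"\w+", text.lower())
    hits = []
    for i, w in enumerate(words):
        if w == term:
            before = " ".join(words[max(0, i - window):i])
            after = " ".join(words[i + 1:i + 1 + window])
            hits.append(f"...{before} [{term}] {after}...")
    return hits

# Placeholder transcript excerpt.
sample = "The clinic staff said the water point was working, but the water was not always safe to drink."
for line in keyword_in_context(sample, "water"):
    print(line)
```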

Finally, the leads covered low-tech and high-tech options for coding. On the low-tech end of the spectrum, paper and pen get the job done. They are useful when there are few data sources to analyze, when the coding is simple, and when there is limited tech literacy among the coders. Next up the scale is Excel, which works when there are few data sources and when the coders are familiar with Excel. The session leads then closed their presentation with a demonstration of Dedoose, a qualitative coding tool with advanced capabilities, such as the capacity to code audio and video files, and specialized visualization tools. In addition to Dedoose, the presenters mentioned NVivo and ATLAS.ti as other available qualitative coding software.

Despite the range of qualitative content available for analysis, a few core principles can help ensure that it is analyzed well; these include consistency and a disciplined methodology. And if qualitative coding will be an ongoing part of your organization’s operations, there are several options for specialized software available for you to explore. [Click here for links and additional resources from the session.]

Tools, tips and templates for making Responsible Data a reality

by David Leege, CRS; Emily Tomkys, Oxfam GB; Nina Getachew, mSTAR/FHI 360; and Linda Raftree, Independent Consultant/MERL Tech, who led the session “Tools, tips and templates for making responsible data a reality.”

The data lifecycle.

For this year’s MERL Tech DC, we teamed up to do a session on Responsible Data. Based on feedback from last year, we knew that people wanted less discussion on why ethics, privacy and security are important, and more concrete tools, tips and templates. Though it’s difficult to offer specific do’s and don’ts, since each situation and context needs individualized analysis, we were able to share a lot of the resources that we know are out there.

To kick off the session, we quickly explained what we meant by Responsible Data. Then we handed out some cards from Oxfam’s Responsible Data game and asked people to discuss their thoughts in pairs. Some of the statements that came up for discussion included:

  • Being responsible means we can’t openly share data – we have to protect it
  • We shouldn’t tell people they can withdraw consent for us to use their data when in reality we have no way of doing what they ask
  • Biometrics are a good way of verifying who people are and reducing fraud

Following the card game, we asked people to gather around 4 tables with a die and a printout of the data lifecycle, where each phase corresponded to a number (planning = 1, collecting = 2, storage = 3, and so on…). Each person rolled the die and, based on their number, told a “data story” of an experience, concern or data failure related to that phase of the lifecycle. Then the group discussed the stories.

For our last activity, each of us took a specific pack of tools, templates and tips and rotated around the 4 tables to share experiences and discuss practical ways to move towards stronger responsible data practices.

Responsible data values and principles

David shared Catholic Relief Services’ process of developing a responsible data policy, which they started in 2017 by identifying core values and principles and how they relate to responsible data. This was based on national and international standards such as the Humanitarian Charter including the Humanitarian Protection Principles and the Core and Minimum Standards as outlined in Sphere Handbook Protection Principle 1; the Protection of Human Subjects, known as the “Common Rule” as laid out in the Department of Health and Human Services Policy for Protection of Human Research Subjects; and the Digital Principles, particularly Principle 8 which mandates that organizations address privacy and security.

As a Catholic organization, CRS follows the principles of Catholic social teaching, which directly relate to responsible data in the following ways:

  • Sacredness and dignity of the human person – we will respect and protect an individual’s personal data as an extension of their human dignity;
  • Rights and responsibilities – we will balance the right to be counted and heard with the right to privacy and security;
  • Social nature of humanity – we will weigh the benefits and risks of using digital tools, platforms and data;
  • Common good – we will open data for the common good only after minimizing the risks;
  • Subsidiarity – we will prioritize local ownership and control of data for planning and decision-making;
  • Solidarity – we will work to educate, inform and engage our constituents in responsible data approaches;
  • Option for the poor – we will take a preferential option for protecting and securing the data of the poor; and
  • Stewardship – we will responsibly steward the data that is provided to us by our constituents.

David shared a draft version of CRS’ responsible data values and principles.

Responsible data policy, practices and evaluation of their roll-out

Oxfam released its Responsible Program Data Policy in 2015. Since then, they have carried out six pilots to explore how to implement the policy in a variety of countries and contexts. Emily shared information on these pilots and the results of research carried out by the Engine Room called Responsible Data at Oxfam: Translating Oxfam’s Responsible Data Policy into practice, two years on. The report concluded that the staff who have engaged with Oxfam’s Responsible Data Policy find it both practically relevant and important. One of the recommendations of this research was that Oxfam needed to increase uptake amongst staff and provide an introductory guide to the area of responsible data.

In response, Oxfam created the Responsible Data Management pack (available in English, Spanish, French and Arabic), which includes the game that was played in today’s session along with other tools and templates. The card game introduces some of the key themes and tensions inherent in making responsible data decisions. The examples on the cards are derived from real experiences at Oxfam and elsewhere, and they aim to generate discussion and debate. Oxfam’s training pack also includes other tools, such as advice on taking photos, a data planning template, a poster of the data lifecycle and general information on how to use the training pack. Emily’s session also encouraged discussion with participants about governance and accountability issues, such as who in the organisation manages responsible data and how to make responsible data decisions when each context may require a different action.

Emily shared the following resources:

A packed house for the responsible data session.

Responsible data case studies

Nina shared early results of four case studies mSTAR is conducting together with Sonjara for USAID. The case studies are testing a draft set of responsible data guidelines, determining whether they are adequate for ‘on the ground’ situations and whether projects find them relevant, useful and usable. The guidelines were designed collaboratively, based on a thorough review and synthesis of responsible data practices and policies of USAID and other international development and humanitarian organizations. To conduct the case studies, Sonjara, Nina and other researchers visited four programs which are collecting large amounts of potentially sensitive data in Nigeria, Kenya and Uganda. The researchers interviewed a broad range of stakeholders and looked at how the programs use, store, and manage personally identifiable information (PII). Based on the research findings, adjustments are being made to the guidelines. It is anticipated that they will be published in October.

Nina also talked about CALP/ELAN’s data sharing tipsheets, which include a draft data-sharing agreement that organizations can adapt to their own contracting documents. She circulated a handout which identifies the core elements of the Fair Information Practice Principles (FIPPs) that are important to consider when using PII.

Responsible data literature review and guidelines

Linda mentioned that a literature review of responsible data policy and practice has been done as part of the above-mentioned mSTAR project (which she also worked on). The literature review will provide additional resources and analysis, including an overview of the core elements that should be included in organizational data guidelines, an overview of USAID policy and regulations, emerging legal frameworks such as the EU’s General Data Protection Regulation (GDPR), and good practice on how to develop guidelines in ways that enhance uptake and use. The hope is that both the Responsible Data Literature Review and the Responsible Data Guidelines will be suitable for adopting and adapting by other organizations. The guidelines will offer a set of critical questions and orientation, but ethical and responsible data practices will always be context specific and cannot be a “check-box” exercise, given the complexity of all the elements that combine in each situation.

Linda also shared some tools, guidelines and templates that have been developed in the past few years, such as Girl Effect’s Digital Safeguarding Guidelines, the Future of Privacy Forum’s Risk-Benefits-Harms framework, and the World Food Program’s guidance on Conducting Mobile Surveys Responsibly.

More tools, tips and templates

Check out this responsible data resource list, which includes additional tools, tips and templates. It was developed for MERL Tech London in February 2017 and we continue to add to it as new documents and resources come out. After a few years of advocating for ‘responsible data’ at MERL Tech to less-than-crowded sessions, we were really excited to have a packed room and high levels of interest this year!   

12 ways to ensure your data management implementation doesn’t become a dumpster fire

By Jason Rubin, PCI; Kate Mueller, DevResults; and Mike Klein, ISG. They led the session “One system to rule them all? Balancing organization-wide data standards and project data needs.”

Let’s face it: failed information system implementations are not uncommon in our industry, and as a result, we often have a great deal of skepticism toward new tools and processes.

We addressed this topic head-on during our 2017 MERL Tech session, One system to rule them all?

The session discussed the tension between the need for enterprise data management solutions that can be used across the entire organization and solutions that meet the needs of specific projects. The three of us presented our lessons learned on this topic from our respective roles as M&E advisor, M&E software provider, and program implementer.

We then asked attendees to provide a list of their top do’s and don’ts related to their own experiences – and then reviewed the feedback to identify key themes.

Here’s a rundown on the themes that emerged from participants’ feedback:

Organizational Systems

Think of these as systems broadly—not M&E specific. For example: How do HR practices affect technology adoption? Does your organization have a federated structure that makes standard indicator development difficult? Do you require separate reporting for management and donor partners? These are all organizational systems that need to be properly considered before system selection and implementation. Top takeaways from the group include these insights to help you ensure your implementation goes smoothly:

1. Form Follows Function: This seems like an obvious theme, but since we received so much feedback about folks’ experiences, it bears repeating: define your goals and purpose first, then design a system to meet those, not the other way around. Don’t go looking for a solution that doesn’t address an existing problem. This means that if the ultimate goal for a system is to improve field staff data collection, don’t build a system to improve data visualization.

2. HR & Training: One of the areas our industry seems to struggle with is long-term capacity building and knowledge transfer around new systems. Suggestions in this theme were that training on information systems become embedded in standard HR processes with ongoing knowledge sharing and training of field staff, and putting a priority on hiring staff with adequate skill mixes to make use of information systems.

3. Right-Sized Deployment for Your Organization: There were a number of horror stories around organizations that tried to implement a single system simultaneously across all projects and failed because they bit off more than they could chew, or because the selected tool really didn’t meet a majority of their organization’s projects’ needs. The general consensus here was that small pilots, incremental roll-outs, and other learn-and-iterate approaches are a best practice. As one participant put it: Start small, scale slowly, iterate, and adapt.

M&E Systems

We wanted to get feedback on best and worst practices around M&E system implementations specifically—how tools should be selected, necessary planning or analysis, etc.

4. Get Your M&E Right: Resoundingly, participants stressed that a critical component of implementing an M&E information system is having well-organized M&E, particularly indicators. We received a number of comments about creating standardized indicators first, auditing and reconciling existing indicators, and so on.

5. Diagnose Your Needs: Participants also chorused the need for effective diagnosis of the current state of M&E data and workflows and what the desired end-state is. Feedback in this theme focused on data, process, and tool audits and putting more tool-selection power in M&E experts’ hands rather than upper management or IT.

6. Scope It Out: One of the flaws each of us has seen in our respective roles is having too generalized or vague of a sense of why a given M&E tool is being implemented in the first place. All three of us talked about the need to define the problem and desired end state of an implementation. Participants’ feedback supported this stance. One of the key takeaways from this theme was to define who the M&E is actually for, and what purpose it’s serving: donors? Internal management? Local partner selection/management? Public accountability/marketing?

Technical Specifications

The first two categories are more about the how and why of system selection, roll-out, and implementation. This category is all about working to define and articulate what any type of system needs to be able to do.

7. UX Matters: It seems like a lot of folks have had experience with systems that aren’t particularly user-friendly. We received a lot of feedback about consulting users who actually have to use the system, building the tech around them rather than forcing them to adapt, and avoiding “clunkiness” in tool interfaces. This feels obvious but is, in fact, often hard to do in practice.

8. Keep It Simple, Stupid: This theme echoed the Right-Sized Deployment for Your Organization: take baby steps; keep things simple; prioritize the problems you want to solve; and don’t try to make a single tool solve all of them at once. We might add to this: many organizations have never had a successful information system implementation. Keeping the scope and focus tight at first and getting some wins on those roll-outs will help change internal perception of success and make it easier to implement broader, more elaborate changes long-term.

9. Failing to Plan Is Planning to Fail: The consensus in feedback was that it pays to take more time upfront to identify user/system needs and figure out which are required and which are nice to have. If interoperability with other tools or systems is a requirement, think about it from day one. Work directly with stakeholders at all levels to determine specs and needs; conduct internal readiness assessments to see what the actual needs are; and use this process to identify hierarchies of permissions and security.

Change Management

Last, but not least, there’s how systems will be introduced and rolled out to users. We got the most feedback on this section and there was a lot of overlap with other sections. This seems to be the piece that organizations struggle with the most.

10. Get Buy-in/Identify Champions: Half the feedback we received on change management revolved around this theme. For implementations to be successful, you need both a top-down approach (buy-in from senior leadership) and a bottom-up approach (local champions/early adopters). To help facilitate this buy-in, participants suggested creating incentives (especially for management), giving local practitioners ownership, including programs and operations in the process, and not letting the IT department lead the initiative. The key here is that, no matter which group the implementation ultimately benefits the most, everyone needs to be on the same page about the implementation goals and why the organization needs it.

11. Communicate: Part of how you get buy-in is to communicate early and often. Communicate the rationale for why tools were selected, what they’re good (and bad) at, and what the value and benefits of the tool are, and be transparent about the roll-out, what it hopes to achieve, and progress towards those goals. Consider things like behavior change campaigns, brown bags, etc.

12. Shared Vision: This is a step beyond communication: merely telling people what’s going on is not enough. There must be a larger vision of what the tool/implementation is trying to achieve and this, particularly, needs to be articulated. How will it benefit each type of user? Shared vision can help overcome people’s natural tendencies to resist change, hold onto “their” data, or cover up failures or inconsistencies.