All posts by Linda Raftree

About Linda Raftree

Linda Raftree supports strategy, program design, research, and technology in international development initiatives. She co-founded MERL Tech in 2014 and Kurante in 2013. Linda advises Girl Effect on digital safety, security and privacy and supports the organization with research and strategy. She is involved in developing responsible data policies for both Catholic Relief Services and USAID. Since 2011, she has been advising The Rockefeller Foundation’s Evaluation Office on the use of ICTs in monitoring and evaluation. Prior to becoming an independent consultant, Linda worked for 16 years with Plan International. Linda runs Technology Salons in New York City and advocates for ethical approaches to using ICTs and digital data in the humanitarian and development space. She is the co-author of several publications on technology and development, including Emerging Opportunities: Monitoring and Evaluation in a Tech-Enabled World with Michael Bamberger. Linda blogs at Wait… What? and tweets as @meowtree. See Linda’s full bio on LinkedIn.

Buckets of data for MERL

by Linda Raftree, Independent Consultant and MERL Tech Organizer

It can be overwhelming to get your head around all the different kinds of data and the various approaches to collecting or finding data for development and humanitarian monitoring, evaluation, research and learning (MERL).

Though there are many ways of categorizing data, lately I find myself conceptually organizing data streams into four general buckets when thinking about MERL in the aid and development space:

  1. ‘Traditional’ data. How we’ve been doing things for (pretty much) ever. Researchers, evaluators and/or enumerators are in relative control of the process. They design a specific questionnaire or a data-gathering process and go out and collect qualitative or quantitative data; they send out a survey and request feedback; they do focus group discussions or interviews; or they collect data on paper and eventually digitize it for analysis and decision-making. Increasingly, we’re using digital tools for all of these processes, but they are still quite traditional approaches (and there is nothing wrong with traditional!).
  2. ‘Found’ data. The Internet, digital data and open data have made it much easier to find, share, and re-use datasets collected by others, whether internally in our own organizations, with partners or just in general. These tend to be datasets collected in traditional ways, such as government or agency datasets. In cases where the datasets are digitized, have proper descriptions and clear provenance, consent has been obtained for use/re-use, and care has been taken to de-identify them, they can eliminate the need to collect the same data over again. Data hubs are springing up that aim to collect and organize these datasets to make them easier to find and use.
  3. ‘Seamless’ data. Development and humanitarian agencies are increasingly using digital applications and platforms in their work — whether bespoke or commercially available ones. Data generated by users of these platforms can provide insights that help answer specific questions about their behaviors, and the data is not limited to quantitative data. This data is normally used to improve applications and platform experiences, interfaces, content, etc., but it can also provide clues into a host of other online and offline behaviors, including knowledge, attitudes, and practices. One cautionary note is that because this data is collected seamlessly, users of these tools and platforms may not realize that they are generating data or understand the degree to which their behaviors are being tracked and used for MERL purposes (even if they’ve checked “I agree” to the terms and conditions). This has big implications for privacy that organizations should think about, especially as new regulations are being developed, such as the EU’s General Data Protection Regulation (GDPR). The commercial sector is great at this type of data analysis, but the development sector is only just starting to get more sophisticated at it.
  4. ‘Big’ data. In addition to data generated ‘seamlessly’ by platforms and applications, there are also ‘big data’ and data that exist on the Internet that can be ‘harvested’ if one only knows how. The term ‘big data’ describes the application of analytical techniques to search, aggregate, and cross-reference large data sets in order to develop intelligence and insights. (See this post for a good overview of big data and some of the associated challenges and concerns.) Data harvesting is a term used for the process of finding and turning ‘unstructured’ content (message boards, a webpage, a PDF file, Tweets, videos, comments) into ‘semi-structured’ data so that it can then be analyzed (see the short sketch after this list). (Estimates are that 90 percent of the data on the Internet exists as unstructured content.) Currently, big data seems to be more apt for predictive modeling than for looking backward at how well a program performed or what impact it had. Development and humanitarian organizations (self included) are only just starting to better understand concepts around big data and how it might be used for MERL. (This is a useful primer.)
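
To make the idea of ‘harvesting’ a bit more concrete, here is a minimal sketch in R of turning an unstructured web page into semi-structured, analyzable rows. It assumes the rvest package, and the URL is just a placeholder:

```r
# Minimal 'data harvesting' sketch: unstructured web page -> semi-structured table
library(rvest)

page <- read_html("https://example.com/some-report")  # placeholder URL
paragraphs <- html_text2(html_elements(page, "p"))    # pull out paragraph text

# One row per paragraph, with minimal metadata, ready for text analysis
harvested <- data.frame(
  source   = "https://example.com/some-report",
  position = seq_along(paragraphs),
  text     = paragraphs
)
head(harvested)
```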

Thinking about these four buckets of data can help MERL practitioners to identify data sources and how they might complement one another in a MERL plan. Categorizing them as such can also help to map out how the different kinds of data will be responsibly collected/found/harvested, stored, shared, used, and maintained/retained/destroyed. Each type of data also has certain implications in terms of privacy, consent and use/re-use and how it is stored and protected. Planning for the use of different data sources and types can also help organizations choose the data management systems needed and identify the resources, capacities and skill sets required (or needing to be acquired) for modern MERL.

Organizations and evaluators are increasingly comfortable using mobile phones and/or tablets to do traditional data gathering, but they often are not using ‘found’ datasets. This may be because these datasets are not very ‘find-able,’ because organizations are not creating or sharing them, because re-using data is not a common practice, because the data are of questionable quality/integrity or lack descriptors, or for a variety of other reasons.

The use of ‘seamless’ data is something that development and humanitarian agencies might want to get better at. Even though large swaths of the populations that we work with are not yet online, this is changing. And if we are using digital tools and applications in our work, we shouldn’t let that data go to waste if it can help us improve our services or better understand the impact and value of the programs we are implementing. (At the very least, we had better understand what seamless data the tools, applications and platforms we’re using are collecting, so that we can manage our users’ data privacy and security and ensure their data are not being misused by third parties!)

Big data is also new to the development sector, and there may be good reason it is not yet widely used. Many of the populations we work with are not producing much data — though this is also changing as digital financial services and mobile phone use have become almost universal and the use of smartphones is on the rise. Organizations normally require new knowledge, skills, partnerships and tools to access and use existing big data sets or to do any data harvesting. Some say that big data, along with ‘seamless’ data, will one day replace our current form of MERL. As artificial intelligence and machine learning advance, who knows… (and it’s not only MERL practitioners who will be out of a job, but that’s a conversation for another time!)

Not every organization needs to be using all four of these kinds of data, but we should at least be aware that they are out there and consider whether they are of use to our MERL efforts, depending on what our programs look like, who we are working with, and what kind of MERL we are tasked with.

I’m curious how other people conceptualize their buckets of data, and where I’ve missed something or defined these buckets erroneously…. Thoughts?

Better or different or both?

by Linda Raftree, Independent Consultant and MERL Tech Organizer

As we delve into why, when, where, if, and how to incorporate various types of technology and digital data tools and approaches into monitoring, evaluation, research and learning (MERL), it can be helpful to think about MERL technologies from two angles:

  1. Doing our work better:  How can new technologies and approaches help us do what we’ve always done — the things that we know are working and having an impact — but do them better? (E.g., faster, with higher quality, more efficiently, less expensively, with greater reach or more inclusion of different voices)
  2. Doing our work differently:  What brand new, previously unthinkable things can be done because of new technologies and approaches? How might these totally new ideas contribute positively to our work or push us to work in an entirely different way?

Sometimes these two things happen simultaneously and sometimes they do not. Some organizations are better at Thing 1, and others are set up well to explore Thing 2. Not all organizations need to feel pressured into doing Thing 2, however, and sometimes it can be a distraction from Thing 1. Some organizations may be better off letting early adopters focus on Thing 2 and investing their own budgets and energy in Thing 1 until innovations have been tried and tested by the early adopters. Organizations may also have staff members or teams working on both Thing 1 and Thing 2 separately. Others may conceptualize this as a process or pathway moving from Thing 2 to Thing 1, where Thing 2 (once tested and evaluated) is a pipeline into Thing 1.

There are some potentially useful past discussions on the topic of innovation within development organizations that flesh out some of these thoughts.

Many of the new tools and approaches that were considered experimental 10 years ago have moved from being “brand new and innovative” to simply “helping us do what we’ve always done.” Some of these earlier “innovations” are related to digital data and data collection and processing, and they help us do better monitoring, evaluation and research.

On the flip side, monitoring, evaluation and research have played a key role in helping organizations and the sector overall learn more about how, where, when, why and in what contexts these different tools and approaches (including digital data for MERL) can be adopted. MERL on ICT4D and Digital Development approaches can help calibrate the “hype cycle,” weed out the shiny new tools and approaches that are not actually very effective or useful to the sector, and highlight those that cause harm or put people at risk.

There are always going to be new tools and approaches that emerge. Humanitarian and development organizations, then, need to think strategically about what kind of organization they are (or want to be) and where they fit on the MERL Tech continuum between Thing 1 and Thing 2.

What capacities does an organization have for working on Thing 2 (brand new and different)? When and for how long should an organization focus on Thing 1, building on what it knows is working or could work, while keeping an eye on the early adopters who are working on Thing 2? When does an organization have enough “proof” to start adopting new tools and approaches that seem to add value? How are these new tools and approaches being monitored, evaluated and researched to improve our use of them?

It’s difficult for widespread adoption to happen in the development space, where there is normally limited time and capacity for failure or for experimentation, without solid MERL. And even with “solid MERL” it can be difficult for organizations to adapt and change due to a multitude of factors, both internal and external.

I’m looking forward to September’s MERL Tech Conference in DC where we have some sessions that explore “the MERL on ICT4MERL?” and others that examine aspects of organizational change related to adopting newer MERL Tech tools and approaches.

(Register here if you haven’t already!)

MERL Tech DC: Session ideas due by May 12th!

Don’t forget to sign up to present, register to attend, or reserve a demo table for MERL Tech DC on September 7-8, 2017 at FHI 360 in Washington, DC.

Submit Your Session Ideas by Friday, May 12th!

Like previous conferences, MERL Tech DC will be a highly participatory, community-driven event and we’re actively seeking practitioners in monitoring, evaluation, research, learning, data science and technology to facilitate every session.

Please submit your session ideas now. We are particularly interested in:

  • Discussions around good practice and evidence-based review
  • Workshops with practical, hands-on exercises
  • Discussion and sharing on how to address methodological aspects such as rigor, bias, and construct validity in MERL Tech approaches
  • Future-focused, thought-provoking ideas and examples
  • Conversations about ethics, inclusion and responsible policy and practice in MERL Tech

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. You will hear back from us in early June and, if selected, you will be asked to submit the final session title, summary and outline by June 30.

If you have questions or are unsure about a submission idea, please get in touch with Linda Raftree.

Submit your ideas here! 

Six priorities for the MERL Tech community

by Linda Raftree, MERL Tech Co-organizer

Participants at the London MERL Tech conference in February 2017 crowdsourced a MERL Tech History timeline (which I’ve shared in this post). Building on that, we projected out our hopes for a bright MERL Tech Future. Then we prioritized our top goals as a group (see below). We’ll aim to continue building on these as a sector going forward and would love more thoughts on them.

  1. Figure out how to be responsible with digital data and not put people, communities, or vulnerable groups at risk. Subtopics included: share data with others responsibly without harming anyone; agree on a minimum ethical standard for MERL and data collection; agree on principles for minimizing the data we collect so that only essential data is captured; develop duty-of-care principles for MERL Tech and digital data; develop ethical data practices and policies at the organizational level; shift the power balance so that the costs of digital data convenience are paid by organizations, not affected populations; and develop a set of quality standards for evaluation using tech.
  2. Increase data literacy across the sector, at individual level and within the various communities where we are working.
  3. Overcome the extraction challenge and move towards true downward accountability. Do good user/human-centered design and planning together, and be ‘leaner’ and more user-focused at all stages of planning and MERL. Subtopics included: development of more participatory MERL methods; bringing consensus decision-making to participatory MERL; realizing the potential of tech to shift power and knowledge hierarchies; greater use of appreciative inquiry in participatory MERL; and more relevant use of tech in MERL — less data, more empowering, less extractive, more used.
  4. Integrate MERL into our daily operations to avoid the thinking that it is something ‘separate’; move it to the core of operations management and make sure we have the necessary funds to do so; demystify it and make it normal! Subtopics included: we stop calling “MERL” a “thing” and the norm is to talk about monitoring as part of operations; data use enables real-time coordination; and no more paper-based surveys.
  5. Improve coordination and interoperability as related to data and tools, both between organizations and within organizations. Subtopics included: more interoperability; more data-sharing platforms; all data (with suitable anonymization) being open; universal exchange of machine-readable M&E data (e.g., standards? IATI? a platform?); sector-wide IATI compliance; tech solutions that enable sharing of qualitative and quantitative data; systems used across agencies (e.g., to refer feedback); organizations sharing more data; and interoperability of tools. It was emphasized that donors should incentivize this and ensure that there are resources to manage it.
  6. Enhance user-driven and accessible tech that supports impact and increases efficiency, that is open source and can be built on, and that allows for interoperability and consistent systems of measurement and evaluation approaches.

In order to move on these priorities, participants felt we needed better coordination and sharing of tools and lessons among the NGO community. This could be through a platform where different innovations and tools are appropriately documented so that donors and organizations can more easily find good practice, useful tools and get a sense of ‘what’s out there’ and what it’s being used for. This might help us to focus on implementing what is working where, when, why and how in M&E (based on a particular kind of context) rather than re-inventing the wheel and endlessly pushing for new tools.

Participants also wanted to see MERL Tech as a community that is collaborating to shape the field and to ensure that we are a sector that listens, learns, and adopts good practices. They suggested hosting MERL Tech events and conferences in ‘the South’ and building out the MERL Tech community to include greater representation of users and developers in order to achieve optimal tools and management processes.

What do you think – have we covered it all? What’s missing?

Technology in MERL: an approximate history

by Linda Raftree, MERL Tech co-organizer.

At MERL Tech London, Maliha Khan led us in an exercise to map out our shared history of MERL Tech. Following that, we did some prioritizing around potential next steps for the sector (which I’ll cover in a separate post).

She had us each write down 1) when we first got involved in something related to MERL Tech, and 2) what we would identify as a defining moment or key event, either in the wider field or in terms of our own experiences with MERL Tech.

The results were a crowdsourced MERL Tech Timeline on the wall.

An approximate history of tech in MERL 

We discussed the general flow of how technology had come to merge with MERL in humanitarian and development work over the past 20 years. The purpose was not to debate about exact dates, but to get a sense of how the field and community had emerged and how participants had experienced its ebbs and flows over time.

Some highlights:

  • 1996 digital photos being used in community-led research
  • 1998 mobile phones start to creep more and more into our work
  • 2000 the rise of SMS
  • 2001 spread of mobile phone use among development/aid workers, especially when disasters hit
  • 2003 Mobile Money comes onto the scene
  • 2004 enter smartphones; Asian tsunami happens and illustrates the need for greater collaboration
  • 2005 increased focus on smartphones; enter Google Maps
  • 2008 IATI, Hans Rosling interactive data talk/data visualization
  • 2009 ODK, FrontlineSMS, more and more Mobile Money and smartphones, open data; global ICT4D conference
  • 2010 Haiti earthquake – health, GIS and infrastructure data collected at large scale, SMS reporting and mapping
  • 2011 FrontlineSMS’ data integrity guide
  • 2012 introduction and spread of cloud services in our work; more and more mapping/GIS in humanitarian and development work
  • 2013 more focus and funding from donors for tech-enabled work, more awareness and work on data standards and protocols, more use of tablets for data collection, bitcoin and blockchain enter the humanitarian/development scene; big data
  • 2014 landscape report on use of ICTs for M&E; MERL Tech conference starts to come together; Responsible Data Forum; U-Report and feedback loops; thinking about SDGs and Data revolution
  • 2015 Ebola crisis leads to a different approach to data, big data concerns and ‘big data disasters’, awareness of the need for much improved coordination on tech and digital data; World Bank Digital Dividends report; Oxfam Responsible Data policy
  • 2016 real-time data and feedback loops are better unpacked and starting to be more integrated, adaptive management focus, greater awareness of the need for interoperability, concerns about digital data privacy and security
  • 2017 MERL Tech London and the coming-together of the related community

What do you think? What’s missing? We’d love to have a more complete and accurate timeline at some point….

Using R to produce innovative, quick and reproducible evidence

By Claire Benard, formerly of Crisis UK and now with the National Council for Voluntary Organisations (NCVO).

Most people who work with data in MERL will have heard of R. Some will have been properly introduced to it, but only a few will invest the necessary time in learning how to use it. Being a relatively late convert, I wanted to share my experience of moving from a traditional data analysis software package to a language-based one, so I did a Lightning Talk at MERL Tech London. (You can watch the video below.)

First things first, what is R?

Aside from being the 18th letter of the alphabet, R is also a language and environment for statistical computing and graphics. 

But wait, you say… why should I use it?

This is what the five-minute video below is about, but in short, here are a few reasons:

  • There is nothing your current software package does that R doesn’t do.
  • R is free.
  • Using a programming language makes the analysis easy to reproduce, whether it’s because you need to produce similar analysis year on year or because you have a team of analysts who need to collaborate and understand each other’s work (see the short sketch after this list).
  • R is an open source technology. People from all backgrounds contribute to it and regularly make new tools available for free. This is your insurance that you’ll stay at the cutting edge of what is being developed.
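
To make the reproducibility point concrete, here is a minimal sketch of what a self-contained R analysis script can look like. The file name and column names are hypothetical; the point is that the whole pipeline, from raw data to chart and summary table, can be re-run with one command:

```r
# A small, reproducible pipeline: read -> summarise -> plot -> save
library(dplyr)
library(ggplot2)

survey <- read.csv("survey_2017.csv")   # hypothetical raw data file

# Average score by region (columns 'region' and 'score' are assumed)
summary_by_region <- survey %>%
  group_by(region) %>%
  summarise(avg_score = mean(score, na.rm = TRUE))

# A chart that regenerates itself whenever the data changes
ggplot(summary_by_region, aes(x = region, y = avg_score)) +
  geom_col() +
  labs(title = "Average score by region", y = "Average score")

write.csv(summary_by_region, "summary_by_region.csv", row.names = FALSE)
```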

Well, then, how do I get started? you wonder… 

If you’re more MERL than Tech, learning a new programming language can be daunting. There is a time and money cost to it and it’s hard to know where to start if you’re on your own.

In the video, I give a few tips. It’s also worth checking out free/cheap training online (for example here or here); looking out for a user group near you; and getting advice from blogs, forums and newsletters.

Check out Claire’s presentation too if you want more info!

Tips for solar charging your data collection

Post by Julia Connors of Voltaic Systems. Email Julia with questions: julia@voltaicsystems.com

What is solar for M&E?

Solar technology can be extremely useful for M&E projects in areas with minimal or inconsistent access to power. Portable solar chargers can eliminate power constraints and keep phones and tablets reliably charged up in the field.

In this post we’ll discuss:

  • How to decide if solar is right for your project
  • How to properly size a solar charging system to meet your needs

Do you really need solar?

In many cases solar is not necessary and will simply add complexity and cost to your project. If your team can return every day to a central location with access to power, then the battery power of the tablet is sufficient in most scenarios. If not, we recommend implementing standard power-saving tips to reduce power consumption while out collecting data.

If you do have daily access to the grid but find that users need to recharge at least once while out or need to spend more than one day without power, then add an external battery pack. This cost-effective option allows your team to have extra power without carrying a full solar charging system. To size a battery for your needs, skip down to ‘Step 3’ below.

If you don’t have reliable access to grid power, the next section will help you determine which size solar charging system is best for you.

Sizing your solar charger system

The key to making solar successful in your project is finding the best system for your needs. If a system is underpowered then your team can still run out of power when they’re collecting data. On the other hand, if your system is too powerful it will be heavier and more expensive than needed. We recommend the following three steps for sizing your solar needs:

  1. Estimate your daily power consumption
  2. Determine your minimum solar panel size
  3. Determine your minimum battery size

Step 1: Estimate your daily power consumption

Once you have chosen the device you will be using in the field, it’s easy to determine your daily power consumption. First, you’ll need to figure out the size of your device’s battery (in Watt-hours, Wh). This can often be found by looking on the back of the battery itself or doing a quick Google search to find your device’s technical specifications.

Next, you’ll need to determine your battery usage per day. For example, if you use half of your device’s battery on a typical day of data collection, then your usage is 50%. If you need to recharge twice in one day, then your usage is 200%.

Once you have those numbers, use the formula below to find your daily power consumption:

Size of Device’s Battery (Wh) x Battery Usage (per day) =

Daily Power Consumption (Wh/day)
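
For example (the numbers here are just for illustration), if your tablet has a 25 Wh battery and a typical day of data collection fully drains it once (100% usage), your daily power consumption is 25 Wh x 1.0 = 25 Wh/day.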

Step 2: Determine your minimum solar panel size

The larger your device, the bigger the solar panel (measured in Watts) you’ll need. This is because larger solar panels can generate more power from the sun than smaller panels. To determine the best solar panel size for your needs, use our formula below:

Daily Power Consumption (from Step 1) / Expected Hours of Good Sun*

x 2 (Standard Power Loss Variable) =

Solar Panel Minimum (Watts)

*We typically use 5 hours as a baseline for good sun and then adjust up or down depending on the conditions. High temperatures, clouds, or shading will reduce the power produced by the panel.

Since solar conditions change frequently throughout the day, we recommend choosing a panel that is 2-4 times the minimum size required.
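
Continuing the illustrative example from Step 1: 25 Wh/day / 5 hours of good sun x 2 = 10 Watts minimum, so a panel in the 20-40 Watt range would be a safe choice.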

Step 3: Determine minimum battery size

External batteries offer extra power storage so that your device will be charged when you need it. The battery acts as a perfect backup on cloudy and rainy days so it’s important to choose the right size for your device.

It can vary, but typically about 30% of power is lost in the transfer from the external battery to your device. Therefore, to determine the battery capacity needed for one day of use, we’ll use our power consumption data from Step 1 and divide by 0.7 (100% – 30% power loss).

Daily Power Consumption (Wh/day) / 0.7 =

Battery Capacity Needed for 1 Day of Use (Wh)
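
If you would rather script the three steps than work them out by hand, here is a rough sketch in R. The function name and example numbers are ours, and the constants (5 sun-hours, the 2x power-loss variable, the 30% transfer loss) are just the rules of thumb described above:

```r
# Rough solar sizing helper based on Steps 1-3 above (rules of thumb only)
size_solar_system <- function(battery_wh, usage_per_day, sun_hours = 5) {
  daily_wh  <- battery_wh * usage_per_day  # Step 1: daily consumption (Wh/day)
  panel_min <- daily_wh / sun_hours * 2    # Step 2: minimum panel size (W)
  battery_needed <- daily_wh / 0.7         # Step 3: battery for 1 day of use (Wh)
  list(
    daily_consumption_wh = daily_wh,
    panel_min_w          = panel_min,
    panel_suggested_w    = panel_min * c(2, 4),  # 2-4x the minimum, as recommended
    battery_needed_wh    = battery_needed
  )
}

# Example: 25 Wh tablet battery, fully drained once per day
size_solar_system(battery_wh = 25, usage_per_day = 1.0)
# daily consumption 25 Wh/day, 10 W minimum panel (suggest 20-40 W), ~36 Wh battery
```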

Picking the right system for your project

Now that you’ve done the math, you’re one step closer to choosing a solar charging system for your project. Since solar chargers come in many different forms, the last step to determining your perfect system is to think about how your team will be using the solar chargers in their work. It’s important to factor in storage for device/cables and how the user will be carrying the system.

Most users aren’t that technical, so having a pack that stores the battery and the device can simplify their experience (rather than handing over a battery and a panel that they need to figure out how to organize during their day). By simply finding the right style and size, you’ll experience higher usage rates and make your team’s solar-powered data collection go more smoothly.

IVR, Facebook and WhatsApp: tech and M&E at AfrEA

by Linda Raftree

At the African Evaluation Association (AfrEA) Conference in Uganda on March 29th, we ran a session on how mobile and social media platforms are being used in monitoring and evaluation processes. Our discussants were Jamie Arkin from Human Network International (soon to be merging with VotoMobile), who spoke about interactive voice response (IVR); John Njovu, an independent consultant working with the Ministry of National Development Planning of the Zambian government, who shared experiences with technology tools for citizen feedback to monitor budgets and support transparency and accountability; and Noel Verrinder from Genesis, who talked about using WhatsApp in a youth financial education program.

Using IVR for surveys

Jamie shared how HNI deploys IVR surveys to obtain information about different initiatives or interventions from a wide public, or to understand the public’s beliefs about a particular topic. These surveys come in three formats: random dialing of telephone numbers until someone picks up; asking people to call in, for example, on a radio show; or using an existing list of phone numbers. “If there is an 80% phone penetration or higher, it is equal to a normal household level survey,” she said. The organization has lists of thousands of phone numbers and can segment these to create a sample. “IVR really amplifies people’s voices. We record in local language. We can ask whether the respondent is a man or a woman. People use their keypads to reply or we can record their voices providing an open response to the question.” The voice responses are later digitized into text for analysis. In order to avoid too many free voice responses, the HNI system can cut the recording off after 30 seconds or limit voice responses to the first 100 calls. Often keypad responses are most effective, as people are not used to leaving voice mails.

IVR is useful in areas where there is low literacy. “In Rwanda, 80% of women cannot read a full sentence, so SMS is not a silver bullet,” Jamie noted. “Smartphones are coming, and people want them, but 95% of people in Uganda have a simple feature phone, so we cannot reach them by Facebook or WhatsApp. If you are going with those tools, you will only reach the wealthiest 5% of the population.”

In order to reduce response bias, the survey question order can be randomized. Response rates tend to be ten times higher on IVR than on SMS surveys, Jamie said, in part because IVR is cheaper for respondents. The HNI system can provide auto-analysis for certain categories, such as the most popular response. CSV files can also be exported for further analysis. Additionally, the system tracks length of session, language, time of day and other metadata about the survey exercise.

Regulatory and privacy implications of IVR are unclear in most countries, and currently there are few legal restrictions against calling people for surveys. “There are opt-outs for SMS but not for IVRs; if you don’t want to participate you just hang up.” In some cases, however, like Rwanda, there are certain numbers that are on “do not disturb” lists and these need to be avoided, she said.

Citizen-led budget monitoring through Facebook

John shared results of a program where citizens were encouraged to visit government infrastructure projects to track whether budget allocations had been properly spent. Citizens would visit a health center or a school to inquire about these projects and then fill out a form on Facebook or a website to share their findings. A first issue with the project was that voters were interested in the availability and quality of service delivery, not in budget spending. “I might ask what money you got, did you buy what you said, was it delivered and is it here. Yes. Fine. But the bigger question is: Are you using it? The clinic is supposed to have 1 doctor, 3 nurses and 3 lab technicians. Are they all there? Yes. But are they doing their jobs? How are they treating patients?”

Quantity and budget spend were being captured, but quality of service was not addressed, which was problematic. Another challenge with the program was that people did not have a good sense of what the dollar can buy, so it was difficult for them to assess whether the budget had been spent appropriately. Additionally, in Zambia it is not customary for citizens to question elected officials. The idea that the government owes the people something, or that citizens can walk into a government office to ask questions about the budget, is not a traditional one. “So people were not confident in asking questions or pushing government for a response.”

The addition of technology to the program did not resolve any of these underlying issues, and on top of this, there was an apparent mismatch with the idea of using mobile phones to gather feedback. “In Zambia it was said that everyone has a phone, so that’s why we thought we’d put in mobiles. But the thing is that the number of SIMs doesn’t equal the number of phone owners. The modern woman may have a good phone or two, but as you go down to people in the compound they don’t have even basic types of phones. In rural areas it’s even worse,” said John, “so this assumption was incorrect.” When the program began running in Zambia, there was surprise that no one was reporting. It was then realized that the actual mobile ownership statistics were not so clear.

Additionally, in Zambia only 11% of women can read a full sentence, and so there are massive literacy issues. And language is also an issue. In this case, it was assumed that Zambians all speak English, but often English is quite limited among rural populations. “You have accountability language that is related to budget tracking and people don’t understand it. Unless you are really out there working directly with people you will miss all of this.”

As a result of the evaluation of the program, the Government of Zambia is rethinking ways to assess the quality of services rather than the quantity of items delivered according to budget.

Gathering qualitative input through WhatsApp

Genesis’ approach to incorporating WhatsApp into their monitoring and evaluation was more emergent. “We didn’t plan for it, it just happened,” said Noel Verrinder. Genesis was running a program to support technical and vocational training colleges in peri-urban and rural areas in the Northwest part of South Africa. The young people in the program are “impoverished in our context, but they have smartphones, WhatsApp and Facebook.”

Genesis had set up a WhatsApp account to communicate about program logistics, but it morphed into a space for the trainers to provide other kinds of information and respond to questions. “We started to see patterns and we could track how engaged the different youth were based on how often they engaged on WhatsApp.” In addition to the content, it was possible to gain insights into which of the participants were more engaged, based on their time and responses on WhatsApp.

Genesis had asked the youth to create diaries about their experiences, and eventually asked them to photograph their diaries and submit them by WhatsApp, given that it made for much easier logistics as compared to driving around to various neighborhoods to track down the diaries. “We could just ask them to provide us with all of their feedback by WhatsApp, actually, and dispense with the diaries at some point,” noted Noel.

In future, Genesis plans to incorporate WhatsApp into its monitoring efforts in a more formal way and to consider some of the privacy and consent aspects of using the application for M&E. One challenge with using WhatsApp is that the type of language used in texting is short and less expressive, so the organization will have to figure out how to understand emoticons. Additionally, it will need to ask for consent from program participants so that WhatsApp engagement can be ethically used for M&E purposes.

We have a data problem

by Emily Tomkys, ICT in Programmes at Oxfam GB

Following my presentation at MERL Tech, I have realised that it’s not only Oxfam who have a data problem; many of us have a data problem. In the humanitarian and development space, we collect a lot of data – whether via mobile phone or a paper process, the amount of data each project generates is staggering. Some of this data goes into our MIS (Management Information Systems), but all too often data remains in Excel spreadsheets on computer hard drives, unconnected cloud storage systems or Access and bespoke databases.

(Watch Emily’s MERL Tech London Lightning Talk!)

This is an issue because the majority of our programme data is analysed in silos on a survey-to-survey basis and at best on a project-to-project basis. What about when we want to analyse data between projects, between countries, or even globally? It would currently take a lot of time and resources to bring data together in usable formats. Furthermore, issues of data security, limited support for country teams, data standards and the cost of systems or support mean there is a sustainability problem that is in many people’s interests to solve.

The demand from Oxfam’s country teams is high – one of the most common requests the ICT in Programmes Team receives centres on databases and data analytics. Teams want to be able to store and analyse their data easily and safely, and there is growing demand for cross-border analytics. Our humanitarian managers want to see statistics on the type of feedback we receive globally. Our livelihoods team wants to be able to monitor prices at markets on a national and regional scale. This motivated us to look for a data solution, but it’s something we know we can’t take on alone.

That’s why MERL Tech represented a great opportunity to check in with other peers about potential solutions and areas for collaboration. For now, our proposal is to design a data hub where no matter what the type of data (unstructured, semi-structured or structured) and no matter how we collect the data (mobile data collection tools or on paper), our data can integrate into a database. This isn’t about creating new tools – rather it’s about focusing on the interoperability and smooth transition between tools and storage options.  We plan to set this up so data can be pulled through into a reporting layer which may have a mixture of options for quantitative analysis, qualitative analysis and GIS mapping. We also know we need to give our micro-programme data a home and put everything in one place regardless of its source or format and make it easy to pull it through for analysis.

In this way we can explore data holistically, spot trends on a wider scale and really know more about our programmes and act accordingly. Not only should this reduce our cost of analysis, we will be able to analyse our data more efficiently and effectively. Moreover, taking a holistic view of the data life cycle will enable us to do data protection by design and it will be easier to support because the process and the tools being used will be streamlined. We know that one tool does not and cannot do everything we require when we work in such vast contexts, so a challenge will be how to streamline at the same time as factoring in contextual nuances.

Sounds easy, right? We will be starting to explore our options and working on the datahub in the coming months. MERL Tech was a great start to make connections, but we are keen to hear from others about how you are approaching “the data problem” and eager to set something up which can also be used by other actors. So please add your thoughts in the comments or get in touch if you have ideas!