MERL Tech News

An Agile MERL Manifesto

By Calum Handforth, Consultant at Agriculture, Learning and Impacts Network (ALINe)

Too often in MERL, we lead with a solution instead of focusing on the problem itself. These solutions are often detailed and comprehensive, but not always aligned with the interests of beneficiaries and projects – or with the realities of the contexts they're used in.

At the Agriculture, Learning and Impacts Network (ALINe), we're exploring an agile approach to MERL: an approach centred on MERL users, one that generates rapid, actionable insights and that iterates in response to the challenges and realities of implementing MERL. It's about learning and responding fast.

The ‘Agile’ approach has its roots in project management, where it’s usually linked to the development of digital tools. Agile was a response to what were seen to be bloated and inefficient ways of delivering software – and improvements – to users. It focuses on the importance of being user-centred. It’s about piloting and iterating to deliver products that customers need, and responding to change instead of trying to specify everything at the outset. These concepts were defined in the Agile Manifesto that launched this movement. The Agile approach is now used to design, develop and deliver a huge amount of technology, software, and digital tools.

So, should we be thinking about an Agile MERL Manifesto? And what should this contain?  We’ve got three main ideas that drive much of our work:

First, put the user at the heart of MERL. We need to know our audience, and their contexts and realities. We need to build MERL tools and approaches that align with these insights, and aim for co-design wherever possible. We need to properly understand the needs of our users and the problem(s) our MERL tools need to solve. This is also the case with the results that we’re generating: are they easy to understand and presented in a helpful format; are they actionable; can they be easily validated; are they integrated into ongoing project management processes; and are they tailored to the specific roles of different users in a system? And who needs to use the data to make what decisions?

With these foundations, our MERL tools need to be working to identify the right impact – whether it’s about the ‘big numbers’ of people reached or incomes increased, or at the level of outcomes and intermediate changes. The latter are particularly useful as these insights are often more actionable from a day-to-day management or decision-making perspective. We also need to be measuring over the right timeframe to capture these impacts or changes.

Second, collect the data that matters. We’ve all seen cases where surveys or other tools have been used that ask all the questions – except the right one. So we need to strip everything back and make sure that we can get the right data for the right decisions to be made. This is where feedback systems, which we have focused on extensively, can be important. These tend to focus on asking a smaller number of questions more frequently to understand the views and perspectives of users so as to inform decision-making.

Recently, we’ve worked on the monitoring of mobile phone-delivered agricultural and nutritional information across six countries. As part of this, we ran regular ‘Rapid Feedback Surveys’ that provided the User Experience team at each of the Mobile Network Operators with a platform to ask users questions on their experience with the service. This enabled actionable improvements to the service – for example, being able to tweak content or service design in order to better meet the needs of users. We’ve also been using the Progress Out Of Poverty Index (PPI) – a 10-question poverty measurement tool customised for more than 50 countries – to gain some valuable demographic insights to ensure that the project is reaching the correct beneficiaries.

More widely, in order to understand how different agricultural technologies promoted by the public extension system in Ethiopia are working out for farmers, we developed a lightweight tool called the 'technology tracker' to gather perceptual feedback from smallholder farmers about these technologies. The tool asks a short set of questions on key dimensions of technology performance (ease of understanding, cost of materials, labour requirements, production quantity and quality, and profitability) along with the major challenges faced. This allows government workers to easily compare different technologies and diagnose the key problems to be addressed to make the technologies more successful.

These ideas are gaining increased traction in international development, as in the case of the Lean Data approach being explored by the impact investment fund, Acumen.

Third, be responsive to change. Methodologies don't always work out. So adapt to the challenges thrown at you, and understand that methodologies shouldn't be static – we need continual refinement to ensure that we're always measuring the things that matter as problems and realities shift. We need to think of iteration as central to the process of developing MERL tools and systems, rather than continuing to rely on big up-front design. However, since figuring out what the right data is can be complex, starting with something simple but useful and iterating to refine it can help. In Agile, there's the concept of the Minimum Viable Product – what's the basic offering that generates suitable value for your customers? In Agile MERL, what should be the Minimum Viable Tool to get the insights that we need? It's about starting with lightweight, practical tools that solve immediate problems and generate value, rather than embarking on a giant and unwieldy system before we've managed to gain any traction and demonstrate value.

Agile MERL is about both the design of MERL systems and the wider work of learning from the data that is generated. This also means learning when things don't work out. To borrow a tech phrase from the environment that Agile initially grew out of: fail fast, and fail often. But learn from failure. This includes documenting it. The social enterprise One Acre Fund uses a Failure Template to record how and why interventions and approaches didn't work, and makes failure reports available on its website. Transparency is important here, too: the more these insights are shared, the more effective all of our work can be. There will be less duplication, and interventions will be based on stronger evidence of what works and what doesn't. There's an important point here, too, about organisational culture being responsive to this approach – and we need to be pushing donors to understand the realities of MERL: it's rarely ever perfect.

Agile MERL, as with any other MERL approach, is not a panacea. It's part of the wider MERL toolkit, and the approach has limitations. In particular, we need to ensure that, in the quest for lean data collection, we are still getting valid insights that are robust enough to be used by decision-makers on critical issues. Moreover, while change and iteration need to be embraced, there is still a need to create continuous and comparable datasets. In some cases, the depth or comprehensiveness of research required may rule out a more lightweight approach. However, even in these situations the core tenets of Agile MERL remain relevant: MERL should continue to be user-driven, useful and iterative. We need to be continuously testing, learning and adapting.

These are our initial thoughts, which have guided some of our recent projects. We’re increasingly working on projects that use Agile-inspired tools and approaches: whether tech, software or data-driven development. We feel that MERL can learn from the Agile project management environments that these digital tools were designed in, which have used the Agile Manifesto to put users at the centre. 

Agile MERL – along with tools like mobile phones and tablets for data collection – democratises MERL by making it more accessible and useful. Not every organisation can afford to conduct a $1m household survey, but most organisations can use approaches like the 10-question PPI survey, the Rapid Feedback Survey or the technology tracker in some capacity. Agile MERL stops MERL from being a tick-box exercise. Instead, it can help users recognise the importance of MERL, and encourage them to put data and evidence-based learning at the heart of their work.

Watch Calum’s MERL Tech video below!

MERL Tech DC: Session ideas due by May 12th!

Don’t forget to sign up to present, register to attend, or reserve a demo table for MERL Tech DC on September 7-8, 2017 at FHI 360 in Washington, DC.

Submit Your Session Ideas by Friday, May 12th!

Like previous conferences, MERL Tech DC will be a highly participatory, community-driven event and we’re actively seeking practitioners in monitoring, evaluation, research, learning, data science and technology to facilitate every session.

Please submit your session ideas now. We are particularly interested in:

  • Discussions around good practice and evidence-based review
  • Workshops with practical, hands-on exercises
  • Discussion and sharing on how to address methodological aspects such as rigor, bias, and construct validity in MERL Tech approaches
  • Future-focused, thought-provoking ideas and examples
  • Conversations about ethics, inclusion and responsible policy and practice in MERL Tech

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. You will hear back from us in early June and, if selected, you will be asked to submit the final session title, summary and outline by June 30.

If you have questions or are unsure about a submission idea, please get in touch with Linda Raftree.

Submit your ideas here! 

Six priorities for the MERL Tech community

by Linda Raftree, MERL Tech Co-organizer

Participants at the London MERL Tech conference in February 2017 crowdsourced a MERL Tech History timeline (which I've shared in this post). Building on that, we projected out our hopes for a bright MERL Tech Future. Then we prioritized our top goals as a group (see below). We'll aim to continue building on these as a sector going forward and would love more thoughts on them.

  1. Figure out how to be responsible with digital data and not put people, communities, and vulnerable groups at risk. Subtopics included: share data with others responsibly without harming anyone; agree a minimum ethical standard for MERL and data collection; agree principles for minimizing the data we collect so that only essential data is captured; develop duty of care principles for MERL Tech and digital data; develop ethical data practices and policies at organization levels; shift the power balance so that digital data convenience costs are paid by orgs, not affected populations; and develop a set of quality standards for evaluation using tech.
  2. Increase data literacy across the sector, at individual level and within the various communities where we are working.
  3. Overcome the extraction challenge and move towards true downward accountability. Do good user/human-centered design and planning together; be 'leaner' and more user-focused at all stages of planning and MERL. Subtopics included: development of more participatory MERL methods; bringing consensus decision-making to participatory MERL; realizing the potential of tech to shift power and knowledge hierarchies; greater use of appreciative inquiry in participatory MERL; more relevant use of tech in MERL: less data, more empowering, less extractive, more used.
  4. Integrate MERL into our daily operations to avoid the thinking that it is something 'separate'; move it to the core of operations management and make sure we have the necessary funds to do so; demystify it and make it normal! Subtopics imagined a future where we've stopped calling "MERL" a "thing" and the norm is to talk about monitoring as part of operations; where data use enables real-time coordination; and where there are no more paper-based surveys.
  5. Improve coordination and interoperability as related to data and tools, both between organizations and within organizations. Subtopics included: more interoperability; more data-sharing platforms; all data with suitable anonymization is open; universal exchange of machine readable M&E Data (e.g., standards? IATI? a platform?); sector-wide IATI compliance; tech solutions that enable sharing of qualitative and quantitative data; systems of use across agencies; e.g., to refer feedback; coordination; organizations sharing more data; interoperability of tools. It was emphasized that donors should incentivize this and ensure that there are resources to manage it.
  6. Enhance user-driven and accessible tech that supports impact and increases efficiency, that is open source and can be built on, and that allows for interoperability and consistent systems of measurement and evaluation approaches.

In order to move on these priorities, participants felt we needed better coordination and sharing of tools and lessons among the NGO community. This could be through a platform where different innovations and tools are appropriately documented so that donors and organizations can more easily find good practice, useful tools and get a sense of ‘what’s out there’ and what it’s being used for. This might help us to focus on implementing what is working where, when, why and how in M&E (based on a particular kind of context) rather than re-inventing the wheel and endlessly pushing for new tools.

Participants also wanted to see MERL Tech as a community that is collaborating to shape the field and to ensure that we are a sector that listens, learns, and adopts good practices. They suggested hosting MERL Tech events and conferences in 'the South', and building out the MERL Tech community to include greater representation of users and developers in order to achieve optimal tools and management processes.

What do you think – have we covered it all? What’s missing?

Technology in MERL: an approximate history

by Linda Raftree, MERL Tech co-organizer.

At MERL Tech London, Maliha Khan led us in an exercise to map out our shared history of MERL Tech. Following that, we did some prioritizing around potential next steps for the sector (which I'll cover in my next post).

She had us each write down 1) when we first got involved in something related to MERL Tech, and 2) what we would identify as a defining moment or key event, either in the wider field or in terms of our own experiences with MERL Tech.

The results were a crowdsourced MERL Tech Timeline on the wall.

 

An approximate history of tech in MERL 

We discussed the general flow of how technology had come to merge with MERL in humanitarian and development work over the past 20 years. The purpose was not to debate exact dates, but to get a sense of how the field and community had emerged and how participants had experienced its ebbs and flows over time.

Some highlights:

  • 1996 digital photos being used in community-led research
  • 1998 mobile phones start to creep more and more into our work
  • 2000 the rise of SMS
  • 2001 spread of mobile phone use among development/aid workers, especially when disasters hit
  • 2003 Mobile Money comes onto the scene
  • 2004 enter smartphones; Asian tsunami happens and illustrates need for greater collaboration
  • 2005 increased focus on smartphones; enter Google Maps
  • 2008 IATI, Hans Rosling interactive data talk/data visualization
  • 2009 ODK, FrontlineSMS, more and more Mobile Money and smartphones, open data; global ICT4D conference
  • 2010 Haiti earthquake – health, GIS and infrastructure data collected at large scale, SMS reporting and mapping
  • 2011 FrontlineSMS’ data integrity guide
  • 2012 introduction and spread of cloud services in our work; more and more mapping/GIS in humanitarian and development work
  • 2013 more focus and funding from donors for tech-enabled work, more awareness and work on data standards and protocols, more use of tablets for data collection, bitcoin and blockchain enter the humanitarian/development scene; big data
  • 2014 landscape report on use of ICTs for M&E; MERL Tech conference starts to come together; Responsible Data Forum; U-Report and feedback loops; thinking about SDGs and Data revolution
  • 2015 Ebola crisis leads to different approach to data, big data concerns and ‘big data disasters’, awareness of the need for much improved coordination on tech and digital data; World Bank Digital Dividends report; Oxfam Responsible Data policy
  • 2016 real-time data and feedback loops are better unpacked and starting to be more integrated, adaptive management focus, greater awareness of the need for interoperability, concerns about digital data privacy and security
  • 2017 MERL Tech London and the coming-together of the related community

What do you think? What’s missing? We’d love to have a more complete and accurate timeline at some point…. 

 

Using R to produce innovative, quick and reproducible evidence

By Claire Benard, formerly of Crisis UK and now with National Council for Voluntary Organizations (NCVO). 

Most people who work with data in MERL will have heard of R. Some people will have been properly introduced to it, but only a few will invest the necessary time in learning how to use it. Being a relatively late convert, I wanted to share my experience of moving from a traditional data analysis software package to a language-based one, so I did a Lightning Talk at MERL Tech London. (You can watch the video below.)

First things first, what is R?

Aside from being the 18th letter of the alphabet, R is also a language and environment for statistical computing and graphics. 

But wait, you say… why should I use it?

This is what the five-minute video below is about, but in short, here are a few reasons:

  • There is nothing your current software package does that R doesn’t do.
  • R is free.
  • Using a programming language makes the analysis easy to reproduce, whether it's because you need to produce similar analysis year on year or because you have a team of analysts who need to collaborate and understand each other's work (see the short sketch after this list).
  • R is an open-source technology. People from all backgrounds contribute to it and regularly make new tools available for free. This is your insurance policy for staying at the cutting edge of what is being developed.
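To make the reproducibility point concrete, here's a minimal sketch of an analysis-as-script in R; the file name and column names are hypothetical, not from any real project:

```r
# A minimal sketch of analysis-as-script; file and column names are
# hypothetical. Re-running the script on next year's file reproduces
# the whole analysis, which is the reproducibility point made above.
survey <- read.csv("beneficiary_survey_2017.csv")

# mean income by region (hypothetical columns)
summary_by_region <- aggregate(income ~ region, data = survey, FUN = mean)
print(summary_by_region)

# a quick chart of the same summary
barplot(summary_by_region$income,
        names.arg = summary_by_region$region,
        main = "Mean income by region")
```

Because every step lives in the script, a colleague can re-run it on an updated file and get exactly the same tables and chart.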

Well, then, how do I get started? you wonder… 

If you’re more MERL than Tech, learning a new programming language can be daunting. There is a time and money cost to it and it’s hard to know where to start if you’re on your own.

In the video, I give a few tips. It's also worth checking out free or cheap training online (for example here or here); looking out for a user group near you; and getting advice from blogs, forums and newsletters.

Check out Claire’s presentation too if you want more info!

 

Tips for solar charging your data collection

Post by Julia Connors of Voltaic Systems. Email Julia with questions: julia@voltaicsystems.com

What is solar for M&E?

Solar technology can be extremely useful for M&E projects in areas with minimal or inconsistent access to power. Portable solar chargers can eliminate power constraints and keep phones and tablets reliably charged up in the field.

In this post we’ll discuss:

  • How to decide if solar is right for your project
  • How to properly size a solar charging system to meet your needs

Do you really need solar?

In many cases solar is not necessary and will simply add complexity and cost to your project. If your team can return every day to a central location with access to power, then the battery power of the tablet is sufficient in most scenarios. If not, we recommend implementing standard power-saving tips to reduce power consumption while out collecting data.



If you do have daily access to the grid but find that users need to recharge at least once while out or need to spend more than one day without power, then add an external battery pack. This cost-effective option allows your team to have extra power without carrying a full solar charging system. To size a battery for your needs, skip down to ‘Step 3’ below.

If you don’t have reliable access to grid power, the next section will help you determine which size solar charging system is best for you.

Sizing your solar charger system

The key to making solar successful in your project is finding the best system for your needs. If a system is underpowered then your team can still run out of power when they’re collecting data. On the other hand, if your system is too powerful it will be heavier and more expensive than needed. We recommend the following three steps for sizing your solar needs:

  1. Estimate your daily power consumption
  2. Determine your minimum solar panel size
  3. Determine your minimum battery size

Step 1: Estimate your daily power consumption

Once you have chosen the device you will be using in the field, it’s easy to determine your daily power consumption. First you’ll need to figure out the size of your device’s battery (in Watt hours). This can often be found by looking on the back of the battery itself or doing a quick Google search to find your device’s technical specifications.

Next, you’ll need to determine your battery usage per day. For example, if you use half of your device’s battery on a typical day of data collection, then your usage is 50%. If you need to recharge twice in one day, then your usage is 200%.

Once you have those numbers, use the formula below to find your daily power consumption:

Size of Device's Battery (Wh) x Battery Usage (per day) = Daily Power Consumption (Wh/day)
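As a worked illustration (the battery size and usage figures below are invented, not from this post), the Step 1 arithmetic looks like this in R:

```r
# Step 1 sketch: daily power consumption for a hypothetical device.
battery_wh <- 20      # device battery size in watt-hours
usage_per_day <- 1.5  # 150%: the device is drained one and a half times a day

daily_consumption_wh <- battery_wh * usage_per_day
daily_consumption_wh  # 30 Wh/day
```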

Step 2: Determine your minimum solar panel size

The larger your device, the bigger the solar panel (measured in Watts) you’ll need. This is because larger solar panels can generate more power from the sun than smaller panels. To determine the best solar panel size for your needs, use our formula below:

Daily Power Consumption (from Step 1) / Expected Hours of Good Sun* x 2 (Standard Power Loss Variable) = Solar Panel Minimum (Watts)

*We typically use 5 hours as a baseline for good sun and then adjust up or down depending on the conditions. High temperatures, clouds, or shading will reduce the power produced by the panel.

Since solar conditions change frequently throughout the day, we recommend choosing a panel that is 2-4 times the minimum size required.
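Carrying the illustrative Step 1 numbers forward, here's a quick sketch of the Step 2 arithmetic, including the 2-4x buffer:

```r
# Step 2 sketch: minimum panel size in watts, continuing the Step 1 numbers.
daily_consumption_wh <- 30  # from the Step 1 sketch
good_sun_hours <- 5         # the baseline for good sun suggested above
power_loss_factor <- 2      # the "standard power loss variable"

panel_min_w <- daily_consumption_wh / good_sun_hours * power_loss_factor
panel_min_w            # 12 W minimum
panel_min_w * c(2, 4)  # 24-48 W: the recommended 2-4x buffer
```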


Step 3: Determine minimum battery size

External batteries offer extra power storage so that your device will be charged when you need it. The battery acts as a perfect backup on cloudy and rainy days so it’s important to choose the right size for your device.

It can vary, but typically about 30% of power is lost in the transfer from the external battery to your device. Therefore, to determine the battery capacity needed for one day of use, we’ll use our power consumption data from Step 1 and divide by 0.7 (100% – 30% power loss).

Daily Power Consumption (Wh/day) / 0.7 = Battery Capacity (Wh) needed for 1 day of use
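And the Step 3 arithmetic, with the same illustrative numbers:

```r
# Step 3 sketch: battery capacity for one day of use, allowing for the
# roughly 30% power lost in transfer from external battery to device.
daily_consumption_wh <- 30  # from the Step 1 sketch
transfer_efficiency <- 0.7  # 100% - 30% transfer loss

battery_capacity_wh <- daily_consumption_wh / transfer_efficiency
round(battery_capacity_wh, 1)  # ~42.9 Wh needed for one day
```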


Picking the right system for your project

Now that you’ve done the math, you’re one step closer to choosing a solar charging system for your project. Since solar chargers come in many different forms, the last step to determining your perfect system is to think about how your team will be using the solar chargers in their work. It’s important to factor in storage for device/cables and how the user will be carrying the system.

Most users aren't that technical, so having a pack that stores the battery and the device can simplify their experience (rather than handing over a battery and a panel that they need to figure out how to organize during their day). By simply finding the right style and size, you'll see higher usage rates and make your team's solar-powered data collection go more smoothly.

IVR, Facebook and WhatsApp: tech and M&E at AfrEA

by Linda Raftree

At the African Evaluation Association (AfrEA) Conference in Uganda on March 29th,  we ran a session on how mobile and social media platforms are being used in monitoring and evaluation processes. Our discussants were Jamie Arkin from Human Network International (soon to be merging with VotoMobile) who spoke about interactive voice response (IVR); John Njovu, an independent consultant working with the Ministry of National Development Planning of the Zambian government, who shared experiences with technology tools for citizen feedback to monitor budgets and support transparency and accountability; and Noel Verrinder from Genesis who talked about using WhatsApp in a youth financial education program.

Using IVR for surveys

Jamie shared how HNI deploys IVR surveys to obtain information about different initiatives or interventions from a wide public, or to understand the public's beliefs about a particular topic. These surveys come in three formats: random dialing of telephone numbers until someone picks up; asking people to call in, for example, on a radio show; or using an existing list of phone numbers. "If there is 80% phone penetration or higher, it is equal to a normal household-level survey," she said. The organization has lists of thousands of phone numbers and can segment these to create a sample. "IVR really amplifies people's voices. We record in local language. We can ask whether the respondent is a man or a woman. People use their keypads to reply or we can record their voices providing an open response to the question." The voice responses are later digitized into text for analysis. In order to avoid too many free voice responses, the HNI system can cut the recording off after 30 seconds or limit voice responses to the first 100 calls. Often keypad responses are most effective, as people are not used to leaving voicemails.

IVR is useful in areas where there is low literacy. “In Rwanda, 80% of women cannot read a full sentence, so SMS is not a silver bullet,” Jamie noted. “Smartphones are coming, and people want them, but 95% of people in Uganda have a simple feature phone, so we cannot reach them by Facebook or WhatsApp. If you are going with those tools, you will only reach the wealthiest 5% of the population.”

In order to reduce response bias, the survey question order can be randomized. Response rates tend to be ten times higher on IVR than on SMS surveys, Jamie said, in part because IVR is cheaper for respondents. The HNI system can provide auto-analysis for certain categories, such as the most popular response. CSV files can also be exported for further analysis. Additionally, the system tracks length of session, language, time of day and other metadata about the survey exercise.

The regulatory and privacy implications of IVR are unclear in most countries, and currently there are few legal restrictions against calling people for surveys. "There are opt-outs for SMS but not for IVR; if you don't want to participate, you just hang up." In some countries, however, like Rwanda, certain numbers are on "do not disturb" lists and these need to be avoided, she said.

Citizen-led budget monitoring through Facebook

John shared results of a program where citizens were encouraged to visit government infrastructure projects to track whether budget allocations had been spent as intended. Citizens would visit a health center or a school to inquire about these projects and then fill out a form on Facebook or a website to share their findings. A first issue with the project was that voters were interested in availability and quality of service delivery, not in budget spending. "I might ask what money you got, did you buy what you said, was it delivered and is it here. Yes. Fine. But the bigger question is: Are you using it? The clinic is supposed to have 1 doctor, 3 nurses and 3 lab technicians. Are they all there? Yes. But are they doing their jobs? How are they treating patients?"

Quantity and budget spend were being captured, but quality of service was not addressed, which was problematic. Another challenge with the program was that people did not have a good sense of what a dollar can buy, so it was difficult for them to assess whether the budget had been spent appropriately. Additionally, in Zambia, it is not customary for citizens to question elected officials. The idea that the government owes the people something, or that citizens can walk into a government office to ask questions about the budget, is not a traditional one. "So people were not confident in asking questions or pushing the government for a response."

The addition of technology to the program did not resolve any of these underlying issues, and on top of this, there was an apparent mismatch with the idea of using mobile phones to conduct feedback. “In Zambia it was said that everyone has a phone, so that’s why we thought we’d put in mobiles. But the thing is that the number of SIMs doesn’t equal the number of phone owners. The modern woman may have a good phone or two, but as you go down to people in the compound they don’t have even basic types of phones. In rural areas it’s even worse,” said John, “so this assumption was incorrect.” When the program began running in Zambia, there was surprise that no one was reporting. It was then realized that the actual mobile ownership statistics were not so clear.

Additionally, in Zambia only 11% of women can read a full sentence, and so there are massive literacy issues. And language is also an issue. In this case, it was assumed that Zambians all speak English, but often English is quite limited among rural populations. “You have accountability language that is related to budget tracking and people don’t understand it. Unless you are really out there working directly with people you will miss all of this.”

As a result of the evaluation of the program, the Government of Zambia is rethinking ways to assess the quality of services rather than the quantity of items delivered according to budget.

Gathering qualitative input through WhatsApp

Genesis’ approach to incorporating WhatsApp into their monitoring and evaluation was more emergent. “We didn’t plan for it, it just happened,” said Noel Verrinder. Genesis was running a program to support technical and vocational training colleges in peri-urban and rural areas in the Northwest part of South Africa. The young people in the program are “impoverished in our context, but they have smartphones, WhatsApp and Facebook.”

Genesis had set up a WhatsApp account to communicate about program logistics, but it morphed into a space for the trainers to provide other kinds of information and respond to questions. "We started to see patterns and we could track how engaged the different youth were based on how often they engaged on WhatsApp." Beyond the content itself, it was possible to gain insights into which participants were more engaged based on their timing and responses on WhatsApp.

Genesis had asked the youth to create diaries about their experiences, and eventually asked them to photograph their diaries and submit them by WhatsApp, given that it made for much easier logistics as compared to driving around to various neighborhoods to track down the diaries. “We could just ask them to provide us with all of their feedback by WhatsApp, actually, and dispense with the diaries at some point,” noted Noel.

In future, Genesis plans to incorporate WhatsApp into its monitoring efforts in a more formal way and to consider some of the privacy and consent aspects of using the application for M&E. One challenge with using WhatsApp is that the type of language used in texting is short and less expressive, so the organization will have to figure out how to understand emoticons. Additionally, it will need to ask for consent from program participants so that WhatsApp engagement can be ethically used for M&E purposes.

We have a data problem

by Emily Tomkys, ICT in Programmes at Oxfam GB

Following my presentation at MERL Tech, I have realised that it’s not only Oxfam who have a data problem; many of us have a data problem. In the humanitarian and development space, we collect a lot of data – whether via mobile phone or a paper process, the amount of data each project generates is staggering. Some of this data goes into our MIS (Management Information Systems), but all too often data remains in Excel spreadsheets on computer hard drives, unconnected cloud storage systems or Access and bespoke databases.

(Watch Emily’s MERL Tech London Lightning Talk!)

This is an issue because the majority of our programme data is analysed in silos on a survey-to-survey basis and at best on a project-to-project basis. What about when we want to analyse data between projects, between countries, or even globally? It would currently take a lot of time and resources to bring data together in usable formats. Furthermore, issues of data security, limited support for country teams, data standards and the cost of systems or support mean there is a sustainability problem that is in many people’s interests to solve.

The demand from Oxfam's country teams is high – one of the most common requests the ICT in Programmes Team receives centres on databases and data analytics. Teams want to be able to store and analyse their data easily and safely, and there is growing demand for cross-border analytics. Our humanitarian managers want to see statistics on the type of feedback we receive globally. Our livelihoods team wants to be able to monitor prices at markets on a national and regional scale. So this motivated us to look for a data solution, but it's something we know we can't take on alone.

That's why MERL Tech represented a great opportunity to check in with other peers about potential solutions and areas for collaboration. For now, our proposal is to design a data hub where no matter what the type of data (unstructured, semi-structured or structured) and no matter how we collect the data (mobile data collection tools or on paper), our data can integrate into a database. This isn't about creating new tools – rather it's about focusing on the interoperability and smooth transition between tools and storage options. We plan to set this up so data can be pulled through into a reporting layer which may have a mixture of options for quantitative analysis, qualitative analysis and GIS mapping. We also know we need to give our micro-programme data a home: put everything in one place, regardless of source or format, and make it easy to pull through for analysis.
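As a loose illustration of what that integration step might look like at toy scale (the column names and values below are hypothetical, not Oxfam's actual schema), two differently-structured survey exports get mapped onto one shared format before being pulled into the reporting layer:

```r
# A minimal sketch of the harmonisation step: two differently-structured
# survey exports mapped onto one shared schema. All names and values
# here are hypothetical.
library(dplyr)

mobile_survey <- data.frame(resp_id = 1:2, vil = c("A", "B"), score = c(4, 5))
paper_survey  <- data.frame(id = 3:4, village = c("C", "D"), rating = c(3, 2))

harmonised <- bind_rows(
  mobile_survey %>%
    rename(respondent_id = resp_id, village = vil, rating = score),
  paper_survey %>%
    rename(respondent_id = id)
)
harmonised  # one table, ready to pull through into the reporting layer
```

A real datahub would of course also need to handle semi-structured and unstructured data; the sketch only shows the tabular case.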

In this way we can explore data holistically, spot trends on a wider scale and really know more about our programmes and act accordingly. Not only should this reduce our cost of analysis, but it should also let us analyse our data more efficiently and effectively. Moreover, taking a holistic view of the data life cycle will enable us to do data protection by design, and the system will be easier to support because the process and the tools being used will be streamlined. We know that one tool does not and cannot do everything we require when we work in such vast contexts, so a challenge will be how to streamline at the same time as factoring in contextual nuances.

Sounds easy, right? We will be starting to explore our options and working on the datahub in the coming months. MERL Tech was a great start to make connections, but we are keen to hear from others about how you are approaching “the data problem” and eager to set something up which can also be used by other actors. So please add your thoughts in the comments or get in touch if you have ideas!

Dropping down your ignorance ratio: Campaigns meet KNIME

by Rodrigo Barahona (Oxfam Intermon, @rbarahona77) and Enrique Rodriguez (Consultant, @datanauta)

A few years ago, we ran a campaign targeting the Guatemalan Government, which generated a good deal of global public support (100,000 signatures, online activism, etc.). This, combined with other advocacy strategies, finally pushed change to happen. We did an evaluation in order to learn from such a success, and found a key area where there was little to learn because we were unable to get and analyze the information: we knew almost nothing about which online channels drove traffic to the online petition and which had better conversion rates. We didn't know the source of more than 80% of our signatures, so we couldn't establish recommendations for future similar actions.

Building on the philosophy underneath vanity metrics, we started developing a system to evaluate public engagement as part of advocacy campaigns and spike actions. We wanted to improve our knowledge of what works and what doesn't in mobilizing citizens to take action (mostly signing petitions or other online actions), and which channels were most effective at generating traffic and converting visits into signatures. So we started implementing a relatively simple Google Analytics tracking system that helped us determine the source of visits and signatures, establish conversion rates, etc. The only caveat was that it was time-consuming: the extraction of the information and its analysis was mostly manual.
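To make the conversion-rate idea concrete, here's a hedged sketch with made-up numbers shaped like a Google Analytics export:

```r
# Hedged sketch: conversion rate by traffic source, using invented numbers
# shaped like a Google Analytics export (source, visits, signatures).
traffic <- data.frame(
  source     = c("email", "facebook", "twitter", "organic"),
  visits     = c(4000, 2500, 1200, 800),
  signatures = c(600, 250, 90, 40)
)
traffic$conversion_rate <- traffic$signatures / traffic$visits
traffic[order(-traffic$conversion_rate), ]  # best-converting channels first
```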

Later on, we were asked to implement the methodology on a complex campaign that had 3 landing/petition pages and 3 exit pages, all in two different languages. Our preliminary analysis was that it would take us up to 8-10 hours of work, with a high risk of mistakes, as it needed cross-analysis of up to 12 pages and required distinguishing among more than 15 different sources for each page.

But then we met KNIME (the Konstanz Information Miner): a tool that helped us extract different sets of data from Google Analytics (through plugins), create the data flow in a visual way, and automatically execute part of the analysis. So far, we have automated the capture and analysis of web traffic statistics (Google Analytics), the community of users on Twitter, and the relevance of posts on that social network. We've been able to minimize the risk of errors, focus on the definition of new indicators and visualizations, and provide reports to draw conclusions and design new communication strategies (based on data) in a very short period of time.

KNIME helped us scale up our evaluation system, making it suitable for very complex campaigns while significantly reducing the time required and lowering the risk of mistakes. Most important of all, introducing KNIME has dropped our ignorance ratio significantly: nowadays we can identify the source of more than 95% of the signatures. This means we can shed light on how different strategies are working, which channels are bringing more visits to the different landing pages, and which have the highest conversion rate. All of this is relevant information for informing decisions, adapting strategies and improving the outputs of a campaign.

Watch Rodrigo’s MERL Tech Lightning Talk here!