Tag Archives: data

Exploring Causality in Complex Situations using EvalC3

By Hur Hassnain, Monitoring, Evaluation, Accountability and Learning Adviser, War Child UK

At the 2017 MERL Tech London conference, my team and I gave a presentation that addressed the possibilities for and limitations of evaluating complex situations using simple Excel-based tools. The question we explored was: can Excel help us manipulate data to create predictive models and suggest promising avenues to project success? Our basic answer was “not yet,” at least not fully. However, there are people working with accessible software like Excel to make analysis simpler for evaluators with less technical expertise.

In our presentation, Rick Davies, Mark Skipper and I showcased EvalC3, an Excel-based evaluation tool that enables users to easily identify sets of attributes in a project dataset and then compare and evaluate the relevance of these attributes to achieving the desired outcome. In other words, it helps answer the question ‘what combination of factors helped bring about the results we observed?’ In the presentation, after we explained what EvalC3 is and gave a live demonstration of how it works, we spoke about our experience using it to analyze real data from a UNICEF-funded War Child UK project in Afghanistan, a project that helps children who have been deported back to Afghanistan from Iran.

Our team first learned of EvalC3 when, upon returning from a trip to our Afghanistan country programme, we discussed how our M&E team in Afghanistan uses Excel for storing and analysing data but is not able to use the software to explore or evaluate complex causal configurations. We reached out to Rick with this issue, and he introduced us to EvalC3. It sounded like the solution to our problem, and our M&E officer in Afghanistan decided to test it by using it to dig deeper into an Excel database he’d created to store data on one thousand children who were registered when they were deported to Afghanistan.  

Rick, Hosain Hashmi (our M&E Officer in Afghanistan) and I formed a working group on Skype to test drive EvalC3. First, we needed to clean the data. To do this, we asked our social workers to contact the children and their caretakers to collect important missing data. Missing data is a common problem when collecting data in fragile and conflict-affected contexts like those where War Child works. Fortunately, we found that EvalC3’s algorithms can work with some missing data, with the tradeoff being slightly less accurate measures of model performance. Compare this to other algorithms (like Quine-McCluskey, used in QCA), which do not work at all if data is missing for some variables. We also had to reduce the number of dimensions we used; otherwise there would have been millions of possible attribute combinations to test as outcome predictors, more than an algorithm could search in a reasonable span of time. This exercise spoke to M. A. Munson’s observation that “model building only consumes 14% of the time spent on a typical [data mining] project; the remaining time is spent on the pre and post processing steps”.

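EvalC3 itself runs entirely in Excel, but for readers curious about the underlying idea, here is a simplified sketch in R (not EvalC3 itself) of the kind of search it automates: score every small combination of 0/1-coded case attributes as a predictor of a 0/1 outcome and rank the combinations by fit. The column names are hypothetical, and the sketch uses plain accuracy as its only measure of model performance.

    # Simplified sketch (not EvalC3): exhaustively score every small combination
    # of 0/1 attributes as a predictor of a 0/1 outcome; column names are hypothetical.
    score_combinations <- function(cases, attributes, outcome = "outcome", max_size = 3) {
      results <- data.frame()
      for (k in seq_len(min(max_size, length(attributes)))) {
        for (combo in combn(attributes, k, simplify = FALSE)) {
          # A case is predicted "positive" only when it has every attribute in the combo
          predicted <- as.integer(rowSums(cases[, combo, drop = FALSE]) == k)
          accuracy  <- mean(predicted == cases[[outcome]], na.rm = TRUE)  # tolerates some missing data
          results   <- rbind(results, data.frame(model    = paste(combo, collapse = " + "),
                                                 accuracy = accuracy))
        }
      }
      results[order(-results$accuracy), ]
    }

    # Hypothetical usage, one row per child, all columns coded 0/1:
    # top_models <- score_combinations(children,
    #                 attributes = c("lives_with_friends", "did_farm_work_in_iran",
    #                                "completed_vocational_training", "adult_headed_household"),
    #                 outcome = "returned_to_iran")
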
After a few weeks of work on the available dataset of children deported from Iran, we found that the children most likely to go back to Iran for economic purposes are mainly those who:

  • Are living with friends (instead of with relatives/caretakers)
  • Had not been doing farming work when they were in Iran
  • Had not completed three months of vocational training
  • Are from adult headed households (instead of from child headed households).

As the project is still ongoing, we will continue to investigate the cases covered by the model described here in order to better understand the causal mechanisms at work.

This experience of using EvalC3 encouraged War Child to refine the data it routinely collects, with a view to developing a better understanding of where War Child interventions help or don’t help. The in-depth data-mining process and analysis conducted by the national M&E Officer and programmes team resulted in an improved understanding of the results we can achieve by analyzing quality data. EvalC3 is a user-friendly evaluation tool that is useful not only for improving current programmes but also for designing new, evidence-based programmes.

Using R to produce innovative, quick and reproducible evidence

By Claire Benard, formerly of Crisis UK and now with National Council for Voluntary Organizations (NCVO). 

Most people who work with data in MERL will have heard of R. Some people will have been properly introduced to it, but only a few will invest the necessary time in learning how to use it. Being a relatively late convert, I wanted to share my experience of moving from a traditional data analysis software package to a language-based one, so I did a Lightning Talk at MERL Tech London. (You can watch the video below.)

First things first, what is R?

Aside from being the 18th letter of the alphabet, R is also a language and environment for statistical computing and graphics. 

But wait, you say… why should I use it?

This is what the five-minute video below is about, but in short, here are a few reasons:

  • There is nothing your current software package does that R doesn’t do.
  • R is free.
  • Using a programming language makes the analysis easy to reproduce, whether it’s because you need to produce similar analysis year on year or because you have a team of analysts who need to collaborate and understand each other’s work (see the short example after this list).
  • R is an open source technology. People from all backgrounds contribute to it and regularly make new tools available for free. This is your insurance that you’ll stay at the cutting edge of what is being developed.

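To make the reproducibility point above concrete, here is a deliberately small example. The file name and column names are invented; the point is simply that the whole analysis, from raw export to chart, lives in one script that a colleague can re-run next year on a new export.

    # Hypothetical example: the entire analysis, from CSV export to chart,
    # sits in one script that can be re-run on next year's data unchanged.
    responses <- read.csv("survey_2017.csv")             # invented file name
    by_region <- aggregate(satisfaction ~ region, data = responses, FUN = mean)
    print(by_region)                                      # reproducible summary table
    barplot(by_region$satisfaction, names.arg = by_region$region,
            main = "Average satisfaction by region")      # reproducible chart
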
Well, then, how do I get started? you wonder… 

If you’re more MERL than Tech, learning a new programming language can be daunting. There is a time and money cost to it and it’s hard to know where to start if you’re on your own.

In the video, I give a few tips. It’s also worth checking out free or cheap training online, looking out for a user group near you, and getting advice from blogs, forums and newsletters.

Check out Claire’s presentation too if you want more info!


Tips for solar charging your data collection

Post by Julia Connors of Voltaic Systems. Email Julia with questions: julia@voltaicsystems.com

What is solar for M&E?

Solar technology can be extremely useful for M&E projects in areas with minimal or inconsistent access to power. Portable solar chargers can eliminate power constraints and keep phones and tablets reliably charged up in the field.

In this post we’ll discuss:

  • How to decide if solar is right for your project
  • How to properly size a solar charging system to meet your needs

Do you really need solar?

In many cases solar is not necessary and will simply add complexity and cost to your project. If your team can return every day to a central location with access to power, then the tablet’s battery is sufficient in most scenarios. If not, we recommend implementing standard power-saving tips to reduce power consumption while out collecting data.



If you do have daily access to the grid but find that users need to recharge at least once while out or need to spend more than one day without power, then add an external battery pack. This cost-effective option allows your team to have extra power without carrying a full solar charging system. To size a battery for your needs, skip down to ‘Step 3’ below.

If you don’t have reliable access to grid power, the next section will help you determine which size solar charging system is best for you.

Sizing your solar charger system

The key to making solar successful in your project is finding the best system for your needs. If a system is underpowered then your team can still run out of power when they’re collecting data. On the other hand, if your system is too powerful it will be heavier and more expensive than needed. We recommend the following three steps for sizing your solar needs:

  1. Estimate your daily power consumption
  2. Determine your minimum solar panel size
  3. Determine your minimum battery size

Step 1: Estimate your daily power consumption

Once you have chosen the device you will be using in the field, it’s easy to determine your daily power consumption. First you’ll need to figure out the size of your device’s battery (in Watt hours). This can often be found by looking on the back of the battery itself or doing a quick Google search to find your device’s technical specifications.

Next, you’ll need to determine your battery usage per day. For example, if you use half of your device’s battery on a typical day of data collection, then your usage is 50%. If you need to recharge twice in one day, then your usage is 200%.

Once you have those numbers, use the formula below to find your daily power consumption:

Size of Device’s Battery (Wh) x Battery Usage (per day) = Daily Power Consumption (Wh/day)

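As a worked example in R, with made-up numbers for a hypothetical tablet:

    # Step 1 with invented numbers for a hypothetical tablet
    battery_wh    <- 20    # device battery size in Watt-hours, from the spec sheet
    usage_per_day <- 1.5   # battery drained one and a half times per day of data collection
    daily_consumption_wh <- battery_wh * usage_per_day   # 20 x 1.5 = 30 Wh/day
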
Step 2: Determine your minimum solar panel size

The larger your device, the bigger the solar panel (measured in Watts) you’ll need. This is because larger solar panels can generate more power from the sun than smaller panels. To determine the best solar panel size for your needs, use our formula below:

Daily Power Consumption (from Step 1) / Expected Hours of Good Sun* x 2 (Standard Power Loss Variable) = Solar Panel Minimum (Watts)

*We typically use 5 hours as a baseline for good sun and then adjust up or down depending on the conditions. High temperatures, clouds, or shading will reduce the power produced by the panel.

Since solar conditions change frequently throughout the day, we recommend choosing a panel that is 2-4 times the minimum size required.

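Continuing the made-up example from Step 1:

    # Step 2, continuing the hypothetical example (30 Wh/day from Step 1)
    good_sun_hours    <- 5    # baseline hours of good sun; adjust for local conditions
    power_loss_factor <- 2    # standard power loss variable from the formula above
    panel_minimum_w     <- daily_consumption_wh / good_sun_hours * power_loss_factor  # 30 / 5 x 2 = 12 W
    recommended_panel_w <- panel_minimum_w * c(2, 4)   # 2-4x the minimum, i.e. a 24-48 W panel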

Step 3: Determine minimum battery size

External batteries offer extra power storage so that your device will be charged when you need it. The battery acts as a perfect backup on cloudy and rainy days so it’s important to choose the right size for your device.

It can vary, but typically about 30% of power is lost in the transfer from the external battery to your device. Therefore, to determine the battery capacity needed for one day of use, we’ll use our power consumption data from Step 1 and divide by 0.7 (100% – 30% power loss).

Daily Power Consumption (Wh/day) / 0.7 = Battery capacity needed for 1 day of use (Wh)

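And finishing the same made-up example:

    # Step 3, continuing the hypothetical example
    transfer_efficiency <- 0.7                                           # roughly 30% lost in transfer
    battery_capacity_wh <- daily_consumption_wh / transfer_efficiency    # 30 / 0.7, about 43 Wh for one day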

Picking the right system for your project

Now that you’ve done the math, you’re one step closer to choosing a solar charging system for your project. Since solar chargers come in many different forms, the last step to determining your perfect system is to think about how your team will be using the solar chargers in their work. It’s important to factor in storage for device/cables and how the user will be carrying the system.

Most users aren’t that technical, so having a pack that stores the battery and the device can simplify their experience (rather than handing over a battery and a panel that they need to figure out how to organize during their day). By simply finding the right style and size, you’ll see higher usage rates and make your team’s solar-powered data collection go more smoothly.

We have a data problem

by Emily Tomkys, ICT in Programmes at Oxfam GB

Following my presentation at MERL Tech, I have realised that it’s not only Oxfam who have a data problem; many of us have a data problem. In the humanitarian and development space, we collect a lot of data – whether via mobile phone or a paper process, the amount of data each project generates is staggering. Some of this data goes into our MIS (Management Information Systems), but all too often data remains in Excel spreadsheets on computer hard drives, unconnected cloud storage systems or Access and bespoke databases.

(Watch Emily’s MERL Tech London Lightning Talk!)

This is an issue because the majority of our programme data is analysed in silos on a survey-to-survey basis and at best on a project-to-project basis. What about when we want to analyse data between projects, between countries, or even globally? It would currently take a lot of time and resources to bring data together in usable formats. Furthermore, issues of data security, limited support for country teams, data standards and the cost of systems or support mean there is a sustainability problem that is in many people’s interests to solve.

The demand from Oxfam’s country teams is high: one of the most common requests the ICT in Programmes Team receives centres on databases and data analytics. Teams want to be able to store and analyse their data easily and safely, and there is growing demand for cross-border analytics. Our humanitarian managers want to see statistics on the type of feedback we receive globally. Our livelihoods team wants to be able to monitor prices at markets on a national and regional scale. This motivated us to look for a data solution, but it’s something we know we can’t take on alone.

That’s why MERL Tech represented a great opportunity to check in with peers about potential solutions and areas for collaboration. For now, our proposal is to design a data hub where, no matter what type of data we have (unstructured, semi-structured or structured) and no matter how we collect it (mobile data collection tools or paper), our data can integrate into a database. This isn’t about creating new tools; rather, it’s about focusing on interoperability and smooth transitions between tools and storage options. We plan to set this up so data can be pulled through into a reporting layer, which may offer a mixture of options for quantitative analysis, qualitative analysis and GIS mapping. We also know we need to give our micro-programme data a home: put everything in one place regardless of its source or format, and make it easy to pull through for analysis.

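As a very rough sketch of the “one home for all the data” idea (the file and column names below are invented, and a real hub would need to cope with many more formats, including unstructured data), the first step is simply mapping each source onto a common set of fields:

    # Hypothetical sketch: map two differently structured survey exports onto a
    # common set of fields and stack them into one table for a reporting layer.
    standardise <- function(df, source_name) {
      data.frame(source     = source_name,
                 country    = df$country,
                 collected  = as.Date(df$date_collected),   # assumes ISO-formatted dates
                 respondent = df$respondent_id,
                 stringsAsFactors = FALSE)
    }
    mobile <- read.csv("mobile_survey_export.csv")   # invented file names
    paper  <- read.csv("paper_survey_entry.csv")
    hub    <- rbind(standardise(mobile, "mobile"), standardise(paper, "paper"))
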
In this way we can explore data holistically, spot trends on a wider scale and really know more about our programmes and act accordingly. Not only should this reduce our cost of analysis, we will be able to analyse our data more efficiently and effectively. Moreover, taking a holistic view of the data life cycle will enable us to do data protection by design and it will be easier to support because the process and the tools being used will be streamlined. We know that one tool does not and cannot do everything we require when we work in such vast contexts, so a challenge will be how to streamline at the same time as factoring in contextual nuances.

Sounds easy, right? We will be starting to explore our options and working on the datahub in the coming months. MERL Tech was a great start to make connections, but we are keen to hear from others about how you are approaching “the data problem” and eager to set something up which can also be used by other actors. So please add your thoughts in the comments or get in touch if you have ideas!

5 Insights from MERL Tech 2016

By Katherine Haugh, a visual note-taker who summarizes content in a visually simple manner while keeping the complexity of the subject matter. Originally published on Katherine’s blog October 20, 2015, and here on ICT Works January 18th, 2016.


Recently, I had the opportunity to participate in the 2015 MERL Tech conference that brought together over 260 people from 157 different organizations. I joined the conference as a “visual note-taker,” and I documented the lightning talks, luncheon discussions, and breakout sessions with a mix of infographics, symbols and text.

Experiencing several “a-ha” moments myself, I thought it would be helpful to go a step further than just documenting what was covered and add some insights of my own. Five clear themes stood out to me: 1) there is such a thing as “too much data”; 2) “lessons learned” is like a song on repeat; 3) humans > computers; 4) sharing is caring; and 5) social impact investment is crucial.

1) There is such a thing as “too much data.”

MERL Tech 2015 began with a presentation by Ben Ramalingham, who explained that “big data is like teenage sex. No one knows how to do it and everyone thinks that everyone else is doing it.” The line became the most widely tweeted quote of the conference, elicited a lot of laughter and nods of approval, and made a point that was well received by the audience. The fervor for collecting more and more data has, ironically, been limiting the ability of organizations to meaningfully understand their data and carry out data-driven decision-making.

Additionally, I attended the breakout session on “data minimalism” with Vanessa Corlazzoli, Monalisa Salib (from USAID LEARN), and Teresa Crawford, which further emphasized this point.

The session covered the ways that we can identify key learning questions and pinpoint need-to-have data (not nice-to-have data) to be able to answer those questions. [What this looks like in practice: a survey with only five questions. Yes, just five questions.] This approach to data collection enforces the need to think critically at each step about what is needed and absolutely necessary, as opposed to collecting as much as possible and then thinking about what is “usable” later.

2) “Lessons learned” is like a song on repeat.

Similar to a popular song, the term “lessons learned” has been on repeat for many M&E practitioners (including myself). How many reports have we seen that conclude with lessons learned that are never actually learned? Having concluded my own capstone project with a set of “lessons learned,” I am at fault for this as well. In her lightning talk on “Lessons Not Learned in MERL,” Susan Davis explained that “while it’s OK to re-invent the wheel, it’s not OK to re-invent a flat tire.”

It seems that we are learning the same “lessons” over and over again in the M&E-tech field and never implementing or adapting in accordance with those lessons. Susan suggested we retire the “MERL” acronym and update to “MERLA” (monitoring, evaluation, research, learning and adaptation).

How do we bridge the gap between M&E findings and organizational decision-making? Dave Algoso has some answers. (In fact, just to get a little meta here: Dave Algoso wrote about “lessons not learned” last year at M&E Tech, and now we’re learning about “lessons not learned” again at MERL Tech 2015. Just some food for thought.) A tip from Susan for not re-inventing a flat tire: find other practitioners who have done similar work and look over their “lessons learned” before writing your own. Stay tuned for more on this at FailFest 2015 in December!

3) Humans > computers.

Who would have thought that at a tech-related conference, a theme would be the need for more human control and insight? Not me, that’s for sure! A funny aside: I have (for a very long time) been fearful that the plot of the Will Smith movie “I, Robot” would become a reality. I now feel slightly more assured that this won’t happen, given that there was a consensus at this conference and others on the need for humans in the M&E process (and in the world). As Ben Ramalingham so eloquently explained, “you can’t use technology to substitute humans; use technology to understand humans.”

4) Sharing is caring.

Circling back to the “lessons learned on repeat” point, “sharing is caring” is definitely one we’ve heard before. Jacob Korenblum emphasized the need for more sharing in the M&E field and suggested three mechanisms for publicizing M&E results: 1) understanding the existing eco-system (i.e. the decision between using WhatsApp in Jordan or in Malawi); 2) building feedback loops directly into M&E design; and 3) creating and tracking indicators related to sharing. Dave Algoso also expands on this concept in TechChange’s TC111 course on Technology for Monitoring and Evaluation; Dave explains that bridging the gaps between the different levels of learning (individual, organizational, and sectoral) is necessary for building the overall knowledge of the field, which spans beyond the scope of a singular project.

5) Social impact investment is crucial.

I’ve heard this at other conferences I’ve attended, like the Millennial Action Project’s Congressional Summit on Next Generation Leadership and many others. As a panelist on “The Future of MERLTech: A Donor View,” Nancy McPherson got right down to business: she addressed the elephant in the room by asking questions about “who the data is really for” and “what projects are really about.” Nancy emphasized the need for role reversal if we as practitioners and researchers are genuine in our pursuit of “locally-led initiatives.” I couldn’t agree more. In addition to explaining that social impact investing is the new frontier for donors in this space, she also gave a brief synopsis of trends in the evaluation field (a topic that my brilliant colleague Deborah Grodzicki and I will be expanding on. Stay tuned!)