
Discrete choice experiment (DCE) to generate weights for a multidimensional index

In his MERL Tech Lightning Talk, Simone Lombardini, Global Impact Evaluation Adviser at Oxfam, discussed his experience with an innovative method for applying tech to help determine appropriate metrics for measuring concepts that escape easy definition. To frame his talk, he referenced Oxfam’s recent experience using discrete choice experiments (DCEs) to establish a strategy for measuring women’s empowerment.

Two methods already exist, Simone pointed out, for transforming soft concepts into hard metrics. First, the evaluator can assume full authority and responsibility for defining the metrics. Alternatively, the evaluator can design the evaluation so that relevant stakeholders are incorporated into the process, using their input to help define the metrics.

Though both methods are common, they are missing (for practical reasons) the level of mass input that could make them truly accurate reflections of the social perception of whatever concept is being considered. Tech has a role to play in scaling the quantity of input that can be collected. If used correctly, this could lead to better evaluation metrics.

Simone described this approach as “context-specific” and “multi-dimensional.” The process starts by defining the relevant characteristics (such as those found in empowered women) in their social context, then translating these characteristics into indicators, and finally combining indicators into one empowerment index for evaluating the project.

After the characteristics are defined, a discrete choice experiment can be used to determine each one’s “weight” in a particular social context. A discrete choice experiment (DCE) is a technique that has frequently been used in health economics and marketing, but not much in impact evaluation. To implement a DCE, researchers present different hypothetical scenarios to respondents and ask them to decide which one best reflects the concept in question (i.e. women’s empowerment). The responses are used to weight the indicators covered by the DCE, and the weighted indicators can then be combined into an empowerment index.
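
To make this concrete, the sketch below shows one way DCE responses can be turned into indicator weights: a pairwise-choice design fitted with a conditional logit model, where each normalized coefficient becomes an indicator’s weight in the index. The indicator names and data are hypothetical, and this is an illustration of the general technique rather than Oxfam’s actual instrument or analysis.

```python
# A minimal sketch of deriving index weights from DCE choices with a
# conditional (pairwise) logit model. Indicators and data are hypothetical.
import numpy as np
from scipy.optimize import minimize

indicators = ["earns_income", "owns_assets", "joint_decisions"]  # hypothetical

# Simulate 500 choice tasks: respondents see two profiles (A and B) described
# by binary indicators and pick the one they consider more empowered.
rng = np.random.default_rng(0)
X_a = rng.integers(0, 2, size=(500, 3))
X_b = rng.integers(0, 2, size=(500, 3))
true_beta = np.array([1.0, 0.5, 1.5])               # weights to recover
p_choose_a = 1 / (1 + np.exp(-(X_a - X_b) @ true_beta))
chose_a = rng.binomial(1, p_choose_a)

def neg_log_lik(beta):
    """Negative log-likelihood: P(choose A) = logistic of utility difference."""
    p = 1 / (1 + np.exp(-(X_a - X_b) @ beta))
    eps = 1e-9                                      # guard against log(0)
    return -np.sum(chose_a * np.log(p + eps) + (1 - chose_a) * np.log(1 - p + eps))

beta_hat = minimize(neg_log_lik, x0=np.zeros(3)).x
weights = beta_hat / beta_hat.sum()                 # normalise into index weights
for name, w in zip(indicators, weights):
    print(f"{name}: weight {w:.2f}")
# A woman's empowerment index is then the weighted sum of her indicator values.
```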

The DCE was integrated into the data collection process, adding 10 minutes to the end of a one-hour survey, and was made practicable by the ubiquity of smartphones. The results from Oxfam’s trial run of this method are still being analyzed. For more on this, watch Lombardini’s video below!

Community-led mobile research–What could it look like?

Adam Groves, Head of Programs at On Our Radar, gave a presentation at MERL Tech London in February where he elaborated on a new method for collecting qualitative ethnographic data remotely.

The problem On Our Radar sought to confront, Adam explained, is the cold and impenetrable bureaucratic machinery of complex organizations. To many people, the unresponsiveness and inhumanity of the bureaucracies that provide them with services is dispiriting, and this is a challenge to overcome for anyone who wants to provide a quality service.

On Our Radar’s solution is to enable people to share their real-time experiences of services by recording audio and SMS diaries with their basic mobile phones. Because of the intimacy they capture, these first-person accounts can grab the attention of the people behind services and make them listen to and experience the customer’s thoughts and feelings as they happened.

Responses obtained from audio and SMS diaries differ from those obtained through other qualitative data collection methods. Unlike solutions that crowdsource feedback, these diaries contain responses from a small group of trained citizen reporters who share their experiences over a sustained period of time. The product is a rich and textured insight into the reporters’ emotions and priorities, and one can track their journeys through services and across systems.

On Our Radar worked with British Telecom (BT) to implement this technique. The objective was to help BT understand how their customers with dementia experience their services. Over a few weeks, forty people living with dementia recorded audio diaries about their experiences dealing with big companies.

Adam explained how the audio diary method was effective for this project:

  • Because diaries and dialogues are in real time, they captured emotional highs and lows (such as the anxiety of picking up the phone and making a call) that would not be recalled in after-the-fact interviews.
  • Because diaries are focused on individuals and their journeys instead of on discrete interactions with specific services, they showed how encountering seemingly unrelated organizations or relationships impacted users’ experiences of BT. For example, cold calls became terrifying for people with dementia and made them reluctant to answer the phone for anyone.
  • Because this method follows people’s experiences over time, it allows researchers to place individual pain points and problems in the context of a broader experience.
  • Because the data is in first person and in the moment, it moved people emotionally. When it was shared with call center staff and managers, they found it compelling: an emotional human story told in the customer’s own words. It invited decision makers to walk in other people’s shoes.

On Our Radar’s future projects include working in Sierra Leone with local researchers to understand how households are changing their practices post-Ebola and a major piece of research with the London School of Hygiene and Tropical Medicine in Malaysia and the Philippines to gain insight on people’s understanding of their health systems.

For more, find a video of Adam’s original presentation below!

Cost-benefit comparisons of IVR, SMS, and phone survey methods

In his MERL Tech London Lightning Talk back in February, Jan Liebnitzky of Firetail provided a research-backed assessment of the costs and benefits of using interactive voice response surveys (IVR), SMS surveys, and phone surveys for MERL purposes.

First, he outlined the opportunities and challenges of using phones for survey research:

  • They are a good means of providing incentives. Research shows that incentives don’t have to be limited to airtime credits; the promise of useful information is sometimes the best motivator for respondents to participate in surveys.
  • They are less likely to reach subgroups. Though mobile phones are ubiquitous, groups like women, illiterate people, and people in low-connectivity areas do not always have access to them. Thus, phones may not be as effective as one would hope for reaching the people most often targeted by aid programs.
  • They are scalable and have expansive reach. Scripting and outsourcing phone-based surveys to call centers takes time and capacity. Fixed costs are high, while marginal costs for each new question or respondent are low. This means that they can be cost effective (compared to on-the-ground surveys) if implemented at a large scale or in remote and high-risk areas with problematic access.

Then, Jan shared some strategies for using phones for MERL purposes:

1. Interactive Voice Response Surveys

    • These are pre-recorded, automated surveys. Respondents can reply by voice or with the numerical keypad.
    • IVR has been used in interactive radio programs in Tanzania, where listening posts were established for the purpose of interacting with farmers. Listening posts are multi-channel, web-based platforms that gather and analyze feedback and questions from farmers who listen to particular radio shows. The radio station runs the IVR, and farmers can call in to the radio show to submit their questions or responses. These are effective because they are run through trusted radio shows. However, it is important that farmers receive answers to the questions they ask, as this incentivizes future participation.

2. SMS Surveys

    • These make use of mobile messaging capabilities to send questions and receive answers. Usually, the SMS survey respondent will either choose between fixed multiple choice answers or write a freeform response. Responses, however, are limited to 160 characters.
    • One example of this is U-Report, a free SMS social monitoring tool for community participation in Uganda. Polls are sent to U-Reporters, who answer back in real time, and the results are then shared back with the community.

3. Phone Surveys

    • Phone surveys are run through call centers by enumerators. They function like face-to-face interviews, but over the phone.
    • As an example, phone surveys were used as a monitoring tool by an agriculture extension services provider. Farmers in the area subscribed to receive texts from the provider with tips about when and how to plant crops. From the list of subscribers, prospective respondents were sampled and in-country call centers were contracted to call up to 1,000 service users to inquire about quality of service, behaviour changes and adoption of new farming technologies.
    • The challenges here were that the data were only as good as the call staff’s training, and that there was an 80% drop-off rate, partly due to the language limitations of call staff.

Finally, Jan provided a rough cost and effectiveness assessment for each method (a back-of-the-envelope illustration follows the list):

  • IVR survey: medium cost, high response
  • SMS survey: low cost, low response
  • Phone survey: high cost, medium response
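
To see how the scale economics behind this summary might play out, here is a back-of-the-envelope sketch comparing cost per completed response at small and large scale. All cost figures and response rates are invented for illustration; they are not from Jan’s research.

```python
# Back-of-the-envelope cost per completed response. All figures are invented.
modes = {
    #          (fixed cost $, cost per attempt $, response rate)
    "IVR":   (2000.0, 0.50, 0.30),
    "SMS":   ( 500.0, 0.10, 0.10),
    "Phone": (5000.0, 4.00, 0.20),
}

def cost_per_response(fixed, per_attempt, rate, attempts):
    """Total cost divided by the number of completed responses."""
    return (fixed + per_attempt * attempts) / (attempts * rate)

for attempts in (500, 10_000):
    print(f"--- {attempts:,} attempts ---")
    for mode, (fixed, per, rate) in modes.items():
        print(f"{mode}: ${cost_per_response(fixed, per, rate, attempts):.2f}")
# High fixed costs dominate at small scale; at large scale the marginal cost
# and the response rate drive the cost per completed response.
```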

Jan closed with a question: What is the value of these methods for MERL?

His answer: these surveys are quick and dirty and, to their merit, they produce timely data from remote areas at a reasonable cost. If the data are put to use, the methods can be effective for monitoring. However, they are not yet adequate for use in evaluation.

For more, watch Jan’s Lightning Talk below!

Focus on the right users to avoid an M&E apocalypse

In his MERL Tech London Lightning Talk, George Flatters from the Open Data Institute told us that M&E is extractive. “It takes data from poor communities, it refines it, and it sells it to rich communities,” he noted, and this process is unsustainable. The ease of deploying a survey means that more and more surveys are being administered. This leads to survey fatigue, and when people stop wanting to take surveys, data quality suffers, leading to an M&E apocalypse.

George outlined four ways to guard against doomsday:

1) Understand the problem–who is doing what, where?

At the moment, no one can be totally sure about which NGOs are doing what data collection and where. What is needed is the Development equivalent of the Humanitarian Data Exchange–a way to centralize and share all collected Development data. Besides the International Household Survey Catalog and NGO Aid Map (which serve a similar function, but to a limited degree), no such central location exists. With it, the industry could avoid duplication and maximize the use of its survey-administering resources.

2) Share more and use existing data

Additionally, with access to a large and comprehensive database such as this, the industry could greatly expand the scope of analysis done with the same set of data. This, of course, should be paired with the appropriate privacy considerations. For example, the data should be anonymized. Generally, a balance must be struck between accessibility and ethics. The Open Data Institute has a useful framework for thinking about how different data should be governed and shared.

3) Focus on the right users

One set of users is the data-collectors at the head office of an NGO. There are M&E solutions that will make their lives easier. However, attention must also be given to the people in communities providing the data. We need to think about how to make their lives easier as well.

4) Think like a multinational tech corporation (and/or get their data)

These corporations do not sit around thinking about how to extract the maximum amount of data; they consider how they can provide quality services that will attract customers. Most of their data is obtained through the provision of services. Similarly, the question here should be, “what M&E services can we provide that yield data as a byproduct?” Examples include: cash transfers, health visits, app download and usage, and remote watch sensing.

These principles can help minimize the effort spent on extracting data, alleviate the strain placed on those who provide the data, and stave off the end of days for a little longer.

Watch George’s Lightning Talk for some additional tips!

Exploring Causality in Complex Situations using EvalC3

By Hur Hassnain, Monitoring, Evaluation, Accountability and Learning Adviser, War Child UK

At the 2017 MERL Tech London conference, my team and I gave a presentation on the possibilities and limitations of evaluating complex situations using simple Excel-based tools. The question we explored was: can Excel help us manipulate data to create predictive models and suggest promising avenues to project success? Our basic answer was “not yet,” at least not to the full extent. However, there are people working with accessible software like Excel to make analysis simpler for evaluators with less technical expertise.

In our presentation, Rick Davies, Mark Skipper and I showcased EvalC3, an Excel-based evaluation tool that enables users to easily identify sets of attributes in a project dataset and then compare and evaluate the relevance of these attributes to achieving the desired outcome. In other words, it helps answer the question ‘what combination of factors helped bring about the results we observed?’ After explaining what EvalC3 is and giving a live demonstration of how it works, we spoke about our experience using it to analyze real data from a UNICEF-funded War Child UK project in Afghanistan–a project that helps children who have been deported back to Afghanistan from Iran.

Our team first learned of EvalC3 when, upon returning from a trip to our Afghanistan country programme, we discussed how our M&E team in Afghanistan uses Excel for storing and analysing data but is not able to use the software to explore or evaluate complex causal configurations. We reached out to Rick with this issue, and he introduced us to EvalC3. It sounded like the solution to our problem, and our M&E officer in Afghanistan decided to test it by using it to dig deeper into an Excel database he’d created to store data on one thousand children who were registered when they were deported to Afghanistan.  

Rick, Hosain Hashmi (our M&E Officer in Afghanistan) and I formed a working group on Skype to test drive EvalC3. First, we needed to clean the data, so we asked our social workers to contact the children and their caretakers to collect important missing data. Missing data is a common problem when collecting data in fragile and conflict-affected contexts like those where War Child works. Fortunately, we found that EvalC3’s algorithms can work with some missing data, the tradeoff being slightly less accurate measures of model performance. Compare this to other algorithms (like the Quine-McCluskey algorithm used in QCA), which do not work at all if data is missing for some variables. We also had to reduce the number of dimensions we used; otherwise there would have been millions of possible combinations of outcome predictors, and no algorithm could search all of them in a reasonable span of time. This exercise spoke to M. A. Munson’s observation that “model building only consumes 14% of the time spent on a typical [data mining] project; the remaining time is spent on the pre and post processing steps”.
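
For readers curious about what this kind of search involves, the sketch below illustrates the general idea: enumerate combinations of binary attributes, treat each combination as a candidate predictive model, and score it against the observed outcomes. This is a simplified illustration of the approach, not EvalC3’s actual algorithm, and the records are hypothetical.

```python
# Simplified illustration of searching attribute combinations as outcome
# predictors. Not EvalC3's actual algorithm; records are hypothetical.
from itertools import combinations

# Each case: binary attributes plus the observed outcome (True = returned to Iran).
cases = [
    ({"lives_with_friends": 1, "did_farm_work": 0, "vocational_training": 0}, True),
    ({"lives_with_friends": 0, "did_farm_work": 1, "vocational_training": 1}, False),
    ({"lives_with_friends": 1, "did_farm_work": 0, "vocational_training": 1}, True),
    ({"lives_with_friends": 0, "did_farm_work": 0, "vocational_training": 0}, False),
]

attributes = list(cases[0][0])
best = None
for size in range(1, len(attributes) + 1):
    for combo in combinations(attributes, size):
        # Candidate model: predict the outcome when every attribute in the
        # combination is present; score it by accuracy over all cases.
        hits = sum((all(attrs[a] == 1 for a in combo)) == outcome
                   for attrs, outcome in cases)
        accuracy = hits / len(cases)
        if best is None or accuracy > best[1]:
            best = (combo, accuracy)

print(f"Best predictor set: {best[0]} (accuracy {best[1]:.0%})")
# With n attributes there are 2**n - 1 candidate sets (more once negations are
# allowed), which is why reducing dimensions first was essential.
```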

After a few weeks of work on the available dataset of children deported from Iran, we found that the children most likely to go back to Iran for economic purposes are mainly those who:

  • Are living with friends (instead of with relatives or caretakers)
  • Had not been doing farm work when they were in Iran
  • Had not completed three months of vocational training
  • Are from adult-headed households (instead of from child-headed households).

As the project is still ongoing, we will continue to investigate the cases covered by the model described here in order to better understand the causal mechanisms at work.

This experience with EvalC3 encouraged War Child to refine the data it routinely collects, with a view to developing a better understanding of where War Child interventions do and don’t help. The in-depth data-mining process and analysis conducted by the national M&E Officer and programmes team improved our understanding of the results we can achieve by analyzing quality data. EvalC3 is a user-friendly evaluation tool that is useful not only for improving current programmes but also for designing new, evidence-based programmes.

Using an online job platform to understand gender dynamics in the Mozambican informal labour market

Oxford Policy Management consultant Paul Jasper’s self-professed professional passion is exploring new ways to use data to improve policy making in the Global South. At MERL Tech London in 2016, he presented on two tech-driven initiatives in Mozambique–Muva and Biscate. In his talk, he explained why these are two great examples of how programming and social policy can benefit from innovations and data coming from the tech sector.

Muva is a program that tests new ways of improving women’s access to economic opportunities. It works primarily with women and girls in urban areas of Mozambique and across all sectors of the economy. Its beneficiaries include employees, self-employed people, and micro-entrepreneurs. Through its work, the program recognized that one key characteristic of the Mozambican economy is its pervasive informal sector: over 90% of new jobs in sub-Saharan Africa were produced in the informal sector. The challenge this poses for organizations like Muva is that, because the sector is informal, there is very little quantitative data about it, and analysts are not quite sure how it works, what dynamics are in operation, or what role gender plays.

This is where UX, a startup based in Maputo, was able to step in. The startup noted that the majority of jobs in the informal sector were assigned in an old-fashioned way: people put up signs with a telephone number advertising their service. UX came up with a USSD-based solution called Biscate, a service that allows workers to register on the platform using basic mobile phones (few people have access to smartphones) and set up advertising profiles listing their educational status and skills. Clients can then check the platform to find people offering a service they need, and can leave reviews and ratings about the service they received.
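
For a sense of how such a registration flow might work, below is a minimal sketch of a USSD session handler. The menu text, steps, and the "CON"/"END" screen prefixes (a convention used by some USSD gateways) are assumptions for illustration, not Biscate’s actual implementation.

```python
# Hypothetical sketch of a USSD registration flow. Menu text, steps, and the
# "CON"/"END" screen prefixes are assumptions, not Biscate's implementation.
def ussd_handler(session_inputs):
    """Return the next USSD screen given the inputs entered so far."""
    step = len(session_inputs)
    if step == 0:
        return "CON Welcome\n1. Register as a worker\n2. Find a worker"
    if step == 1 and session_inputs[0] == "1":
        return "CON Enter your trade (e.g. plumber, carpenter):"
    if step == 2:
        return "CON Enter your neighbourhood:"
    if step == 3:
        trade, area = session_inputs[1], session_inputs[2]
        # A real system would persist the profile keyed to the caller's number.
        return f"END Registered: {trade} in {area}. Clients can now find and rate you."
    return "END Invalid choice."

# Simulated session: the user registers as a plumber.
inputs = []
for reply in (None, "1", "plumber", "Maputo Central"):
    if reply is not None:
        inputs.append(reply)
    print(ussd_handler(inputs))
```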

UX has soft-launched the platform and registered 30,000 workers across the country. Since its launch, Biscate has provided unique data and insights into the labor market, helping to close the data and information gap that has plagued the informal sector. Muva can use this information to improve women’s access to opportunities. The organizations have partnered and started a pilot with three objectives:

  1. To understand dynamics in the informal labor market.
  2. To test whether Muva’s approaches can be used to help women become more successful in the informal sector.
  3. To influence policy makers that want to develop similar programs by producing valuable and up to date lessons.

See Paul’s presentation below if you’d like to learn more!

Using analytics as a validation tool: rethinking quality and reliability of data collection

Rebecca Rumbul, the Head of Research at My Society, gave a Lightning Talk at MERL Tech London in which she described the potential for using Google Analytics as a tool for informing and validating research.

First, she explained her organization’s work. Broadly speaking, My Society is a non-profit social enterprise with a mission to invent and popularize digital tools that enable citizens to exert power over institutions and decision makers. She noted that her organization exists solely online, and that as a result it gathers a significant amount of data from its software’s users in the 44 countries where it operates.

My Society is currently using this data to examine whether it is worth continuing to pursue civic technology. To do this, the team is taking rational and measured approaches designed to help them evaluate and compare their products and see to what extent they have valuable real-world effects.

One tool that Rebecca’s organization makes extensive use of is Google Analytics. Google Analytics allows My Society’s research team to see who is using their software, where they are from, if they are returning users or new ones, and the number of sessions happening at one time. Beyond this, it also provides basic demographic information. Basically, Google Analytics alone gives them ample data to work with.

One application of this data is to take trends that emerge and use them to frame new research questions. For example, if more women than men are searching for a particular topic on a given day, this phenomenon could merit further exploration.

Additionally, it can act as a validation tool. For example, if the team wants to conduct a new survey, Google Analytics provides a set of data that can complement the survey’s results. It enables one to cross-check the survey results against Google’s data to determine the extent to which they may have suffered from errors like self-selection bias. With it, one can develop a better sense of whether there are issues with the research or whether the data can be relied upon.
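
As a concrete illustration of such a cross-check, the sketch below compares a survey sample’s age mix against the age mix of the full user base (the kind of breakdown an analytics tool reports) using a chi-square goodness-of-fit test. All counts are invented, and this is one possible approach rather than My Society’s actual method.

```python
# Cross-checking a survey sample against the full user base with a chi-square
# goodness-of-fit test. All counts are invented for illustration.
from scipy.stats import chisquare

survey_counts = [120, 60, 20]          # respondents aged 18-34, 35-54, 55+
analytics_share = [0.45, 0.40, 0.15]   # share of all users in each age bracket

n = sum(survey_counts)
expected = [share * n for share in analytics_share]
stat, p = chisquare(survey_counts, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.4f}")
# A small p-value suggests the respondents' age mix differs from the user
# base's, a possible sign of self-selection bias worth investigating.
```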

Google Analytics, despite its flaws, enables researchers to think more deeply about their data, have frank discussions, and frame research questions. All of this can be very valuable to evaluation efforts in the development sector.

For more, see Rebecca’s Lightning Talk below!

An Agile MERL Manifesto

By Calum Handforth, Consultant at Agriculture, Learning and Impacts Network (ALINe)

Too often in MERL, we lead with a solution instead of focusing on the problem itself. These solutions are often detailed and comprehensive, but not always aligned with the interests of beneficiaries and projects – or the realities of their contexts.

At the Agriculture, Learning and Impacts Network (ALINe), we’re exploring an agile approach to MERL: an approach centred on MERL users that’s able to generate rapid and actionable insights. It’s an iterative approach that responds to the challenges and realities of implementing MERL. It’s about learning and responding fast.

The ‘Agile’ approach has its roots in project management, where it’s usually linked to the development of digital tools. Agile was a response to what were seen to be bloated and inefficient ways of delivering software – and improvements – to users. It focuses on the importance of being user-centred. It’s about piloting and iterating to deliver products that customers need, and responding to change instead of trying to specify everything at the outset. These concepts were defined in the Agile Manifesto that launched this movement. The Agile approach is now used to design, develop and deliver a huge amount of technology, software, and digital tools.

So, should we be thinking about an Agile MERL Manifesto? And what should it contain? We’ve got three main ideas that drive much of our work:

First, put the user at the heart of MERL. We need to know our audience, and their contexts and realities. We need to build MERL tools and approaches that align with these insights, and aim for co-design wherever possible. We need to properly understand the needs of our users and the problem(s) our MERL tools need to solve. This is also the case with the results that we’re generating: are they easy to understand and presented in a helpful format; are they actionable; can they be easily validated; are they integrated into ongoing project management processes; and are they tailored to the specific roles of different users in a system? And who needs to use the data to make what decisions?

With these foundations, our MERL tools need to be working to identify the right impact – whether it’s about the ‘big numbers’ of people reached or incomes increased, or at the level of outcomes and intermediate changes. The latter are particularly useful as these insights are often more actionable from a day-to-day management or decision-making perspective. We also need to be measuring over the right timeframe to capture these impacts or changes.

Second, collect the data that matters. We’ve all seen cases where surveys or other tools have been used that ask all the questions – except the right one. So we need to strip everything back and make sure that we can get the right data for the right decisions to be made. This is where feedback systems, which we have focused on extensively, can be important. These tend to focus on asking a smaller number of questions more frequently to understand the views and perspectives of users so as to inform decision-making.

Recently, we’ve worked on the monitoring of mobile phone-delivered agricultural and nutritional information across six countries. As part of this, we ran regular ‘Rapid Feedback Surveys’ that gave the User Experience team at each of the Mobile Network Operators a platform to ask users questions about their experience with the service. This enabled actionable improvements to the service – for example, tweaking content or service design to better meet the needs of users. We’ve also been using the Progress out of Poverty Index (PPI) – a 10-question poverty measurement tool customised for more than 50 countries – to gain valuable demographic insights and ensure that the project is reaching the intended beneficiaries.
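
For those unfamiliar with how PPI-style scoring works, the sketch below shows the basic mechanics: each of the ten answers carries a points value, the points are summed, and the total is mapped to a poverty likelihood through a lookup table. The points and likelihoods here are invented; real scorecards are country-specific and published by the PPI initiative.

```python
# Basic mechanics of PPI-style scoring. Points and likelihoods are invented;
# real scorecards are country-specific.
import bisect

answer_points = [0, 6, 4, 0, 9, 3, 5, 0, 7, 2]   # points for the 10 answers
score = sum(answer_points)                        # 0-100 on a real scorecard

# Hypothetical lookup: inclusive upper score bounds and matching likelihoods.
upper_bounds = [20, 35, 50, 70, 101]
likelihoods = [0.82, 0.61, 0.38, 0.17, 0.05]
likelihood = likelihoods[bisect.bisect_left(upper_bounds, score)]
print(f"PPI score {score}: ~{likelihood:.0%} likelihood below the poverty line")
```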

More widely, in order to understand how different agricultural technologies promoted by the public extension system in Ethiopia are working out for farmers, we developed a lightweight tool called the ‘technology tracker’ to gather perceptual feedback from smallholder farmers about these technologies. The tool asks a short set of questions on key dimensions of technology performance (ease of understanding, cost of materials, labour requirements, production quantity and quality, and profitability), along with the major challenges faced. This allows government workers to easily compare different technologies and diagnose the key problems to be addressed to make the technologies more successful.
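
The kind of comparison the technology tracker enables can be sketched as a simple aggregation of ratings per technology per dimension, as below. The technology names, dimensions, and ratings are invented for illustration.

```python
# Aggregating tracker-style ratings (1-5) per technology per dimension.
# Technology names, dimensions, and ratings are invented.
from collections import defaultdict

responses = [
    ("row planting",  {"ease": 4, "cost": 2, "labour": 3, "yield": 5}),
    ("row planting",  {"ease": 5, "cost": 3, "labour": 2, "yield": 4}),
    ("improved seed", {"ease": 3, "cost": 1, "labour": 4, "yield": 4}),
]

scores = defaultdict(lambda: defaultdict(list))
for tech, ratings in responses:
    for dimension, rating in ratings.items():
        scores[tech][dimension].append(rating)

# Mean rating per dimension lets extension workers compare technologies and
# spot the weakest dimension (e.g. cost of materials) to address.
for tech, dims in scores.items():
    means = {d: sum(r) / len(r) for d, r in dims.items()}
    print(tech, means)
```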

These ideas are gaining increased traction in international development, as in the case of the Lean Data approach being explored by the impact investment fund, Acumen.

Third, be responsive to change. Methodologies don’t always work out. So adapt to the challenges thrown at you, and understand that methodologies shouldn’t be static – we need continual refinement to ensure that we’re always measuring the things that matter as problems and realities shift. We need to be thinking of iteration as central to the process of developing MERL tools and systems as opposed to continuing to focus on the big-up-front-design approach. However, since the process of figuring out what the right data is can be complex, starting with something simple but useful and iterating to refine it can help. In Agile, there’s the concept of the Minimum Viable Product – what’s your basic offering in order to generate suitable value for your customers? In Agile MERL, what should be the Minimum Viable Tool to get the insights that we need? It’s about starting with lightweight practical tools that solve immediate problems and generate value, rather than embarking on a giant and unwieldy system before we’ve managed to gain any traction and demonstrate value.

Agile MERL is about both the design of MERL systems and the wider piece of learning from the data that is generated. This is also about learning when things don’t work out. To borrow a tech phrase from the environment that Agile initially grew out of: fail fast, and fail often. But learn from failure, and document it. The social enterprise One Acre Fund use a Failure Template to record how and why interventions and approaches didn’t work, and they publish failure reports on their website. Transparency is important here, too: the more these insights are shared, the more effective all of our work can be. There will be less duplication, and interventions will be based on stronger evidence of what works and what doesn’t. There’s also an important point here about organisational culture being responsive to this approach – and we need to be pushing donors to understand the realities of MERL: it’s rarely ever perfect.

Agile MERL, as with any other MERL approach, is not a panacea. It’s part of the wider MERL toolkit, and there are limitations to this approach. In particular, we need to ensure that, in the quest for lean data collection, we are still getting valid insights that are robust enough to be used by decision-makers on critical issues. Moreover, while change and iteration should be embraced, there is still a need to create continuous and comparable datasets. In some cases, the depth or comprehensiveness of the research required may rule out a more lightweight approach. However, even in these situations the core tenets of Agile MERL remain relevant: MERL should continue to be user-driven, useful and iterative. We need to be continuously testing, learning and adapting.

These are our initial thoughts, which have guided some of our recent projects. We’re increasingly working on projects that use Agile-inspired tools and approaches: whether tech, software or data-driven development. We feel that MERL can learn from the Agile project management environments that these digital tools were designed in, which have used the Agile Manifesto to put users at the centre. 

Agile MERL, and tools like mobile phones and tablets for data collection, democratise MERL by making it more accessible and useful. Not every organisation can afford to conduct a $1m household survey, but most organisations can use approaches like the 10-question PPI survey, the Rapid Feedback Survey or the technology tracker in some capacity. Agile MERL stops MERL from being a tick-box exercise. Instead, it can help users recognise the importance of MERL, and encourage them to put data and evidence-based learning at the heart of their work.

Watch Calum’s MERL Tech video below!