Tag Archives: evaluation

An Agile MERL Manifesto

By Calum Handforth, Consultant at Agriculture, Learning and Impacts Network (ALINe)

Too often in MERL, we lead with a solution instead of focusing on the problem itself. These solutions are often detailed and comprehensive, but not always aligned with the interests of beneficiaries and projects, or with the realities of their contexts.

At the Agriculture, Learning and Impacts Network (ALINe), we're exploring an agile approach to MERL: an approach centred on MERL users that generates rapid, actionable insights, and an iterative approach that responds to the challenges and realities of implementing MERL. It's about learning and responding fast.

The ‘Agile’ approach has its roots in project management, where it’s usually linked to the development of digital tools. Agile was a response to what were seen to be bloated and inefficient ways of delivering software – and improvements – to users. It focuses on the importance of being user-centred. It’s about piloting and iterating to deliver products that customers need, and responding to change instead of trying to specify everything at the outset. These concepts were defined in the Agile Manifesto that launched this movement. The Agile approach is now used to design, develop and deliver a huge amount of technology, software, and digital tools.

So, should we be thinking about an Agile MERL Manifesto? And what should this contain?  We’ve got three main ideas that drive much of our work:

First, put the user at the heart of MERL. We need to know our audience, and their contexts and realities. We need to build MERL tools and approaches that align with these insights, and aim for co-design wherever possible. We need to properly understand the needs of our users and the problem(s) our MERL tools need to solve. This is also the case with the results that we’re generating: are they easy to understand and presented in a helpful format; are they actionable; can they be easily validated; are they integrated into ongoing project management processes; and are they tailored to the specific roles of different users in a system? And who needs to use the data to make what decisions?

With these foundations, our MERL tools need to be working to identify the right impact – whether it’s about the ‘big numbers’ of people reached or incomes increased, or at the level of outcomes and intermediate changes. The latter are particularly useful as these insights are often more actionable from a day-to-day management or decision-making perspective. We also need to be measuring over the right timeframe to capture these impacts or changes.

Second, collect the data that matters. We’ve all seen cases where surveys or other tools have been used that ask all the questions – except the right one. So we need to strip everything back and make sure that we can get the right data for the right decisions to be made. This is where feedback systems, which we have focused on extensively, can be important. These tend to focus on asking a smaller number of questions more frequently to understand the views and perspectives of users so as to inform decision-making.

Recently, we've worked on monitoring mobile phone-delivered agricultural and nutritional information across six countries. As part of this, we ran regular 'Rapid Feedback Surveys' that gave the User Experience team at each of the Mobile Network Operators a platform to ask users about their experience with the service. This enabled actionable improvements to the service, for example tweaking content or service design to better meet users' needs. We've also been using the Progress out of Poverty Index (PPI), a 10-question poverty measurement tool customised for more than 50 countries, to gain valuable demographic insights and ensure that the project is reaching its intended beneficiaries.
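
The PPI's mechanics are simple enough to sketch in a few lines: each answer to a scorecard question carries a point value, and the total score maps to an estimated poverty likelihood via a country-specific lookup table. Below is a minimal illustration in Python; the questions, point values, and likelihood bands are hypothetical placeholders, not the actual published scorecard for any country.

```python
# Minimal sketch of PPI-style scoring. The questions, point values, and
# poverty-likelihood bands below are hypothetical placeholders; real PPI
# scorecards use published, country-specific tables.

# Each answer to a scorecard question maps to a point value.
SCORECARD = {
    "roof_material": {"thatch": 0, "iron_sheet": 6, "tile": 11},
    "household_size": {"7_or_more": 0, "4_to_6": 5, "1_to_3": 12},
    # ...a real scorecard has 10 questions in total
}

# A total score maps to an estimated likelihood of living below a
# poverty line via a lookup table of (minimum score, likelihood).
LIKELIHOOD_BANDS = [
    (0, 0.80),
    (10, 0.55),
    (20, 0.30),
    (30, 0.10),
]

def ppi_score(answers: dict) -> int:
    """Sum the point values for a respondent's answers."""
    return sum(SCORECARD[q][a] for q, a in answers.items())

def poverty_likelihood(score: int) -> float:
    """Return the likelihood for the highest band the score reaches."""
    likelihood = LIKELIHOOD_BANDS[0][1]
    for min_score, band_likelihood in LIKELIHOOD_BANDS:
        if score >= min_score:
            likelihood = band_likelihood
    return likelihood

respondent = {"roof_material": "iron_sheet", "household_size": "4_to_6"}
score = ppi_score(respondent)
print(score, poverty_likelihood(score))  # 11 0.55
```

Because scoring is just a sum and a lookup, a field worker with a phone or tablet can produce the estimate on the spot, which is what makes the tool so lightweight.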

More widely, to understand how different agricultural technologies promoted by the public extension system in Ethiopia are working out for farmers, we developed a lightweight tool called the 'technology tracker' to gather perceptual feedback from smallholder farmers about these technologies. The tool asks a short set of questions on key dimensions of technology performance (ease of understanding, cost of materials, labour requirements, production quantity and quality, and profitability), along with the major challenges faced. This allows government workers to easily compare different technologies and diagnose the key problems to be addressed to make the technologies more successful.
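
To make the comparison concrete, here is a rough sketch of the kind of aggregation such a tracker enables. The dimension names follow the article, but the records, the 1-5 rating scale, and the field names are illustrative assumptions rather than the actual tool.

```python
# Illustrative aggregation for a technology-tracker-style survey. The
# dimension names follow the article; the records and 1-5 rating scale
# are made up for the example.
from collections import defaultdict
from statistics import mean

DIMENSIONS = ["ease_of_understanding", "cost_of_materials",
              "labour_requirements", "production_quantity_quality",
              "profitability"]

# Each record holds one farmer's ratings for one technology.
responses = [
    {"technology": "row_planting", "ease_of_understanding": 4,
     "cost_of_materials": 3, "labour_requirements": 2,
     "production_quantity_quality": 5, "profitability": 4},
    {"technology": "improved_seed", "ease_of_understanding": 5,
     "cost_of_materials": 2, "labour_requirements": 4,
     "production_quantity_quality": 4, "profitability": 3},
    # ...more farmers, more technologies
]

# Average each dimension per technology so an extension worker can
# compare technologies and spot the weakest dimension at a glance.
by_technology = defaultdict(list)
for record in responses:
    by_technology[record["technology"]].append(record)

for technology, records in by_technology.items():
    averages = {d: mean(r[d] for r in records) for d in DIMENSIONS}
    weakest = min(averages, key=averages.get)
    print(technology, averages, "-> weakest dimension:", weakest)
```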

These ideas are gaining increased traction in international development, as in the case of the Lean Data approach being explored by the impact investment fund, Acumen.

Third, be responsive to change. Methodologies don't always work out. So adapt to the challenges thrown at you, and understand that methodologies shouldn't be static: we need continual refinement to ensure that we're always measuring the things that matter as problems and realities shift. We need to treat iteration as central to the process of developing MERL tools and systems, instead of continuing to focus on big up-front design. However, since figuring out what the right data is can be complex, starting with something simple but useful and iterating to refine it can help. In Agile, there's the concept of the Minimum Viable Product: what's your basic offering that generates suitable value for your customers? In Agile MERL, what should be the Minimum Viable Tool to get the insights that we need? It's about starting with lightweight, practical tools that solve immediate problems and generate value, rather than embarking on a giant and unwieldy system before we've managed to gain any traction and demonstrate value.

Agile MERL is about both the design of MERL systems and the wider piece of learning from the data that is generated. This is also about learning when things don't work out. To borrow a tech phrase from the environment that Agile grew out of: fail fast, and fail often. But learn from failure. This includes documenting it. The social enterprise One Acre Fund use a Failure Template to record how and why interventions and approaches didn't work, and they publish failure reports on their website. Transparency is important here, too: the more these insights are shared, the more effective all of our work can be. There will be less duplication, and interventions will be based on stronger evidence of what works and what doesn't. There's an important point here, too, about organisational culture being responsive to this approach, and we need to be pushing donors to understand the realities of MERL: it's rarely ever perfect.

Agile MERL, as with any other MERL approach, is not a panacea. It's part of the wider MERL toolkit, and there are limitations to the approach. In particular, we need to ensure that, in the quest for lean data collection, we are still getting valid insights that are robust enough for decision-makers to rely on for critical issues. Moreover, while change and iteration need to be embraced, there is still a need to create continuous and comparable datasets. In some cases, the depth or comprehensiveness of research required may rule out a more lightweight approach. Even in these situations, though, the core tenets of Agile MERL remain relevant: MERL should continue to be user-driven, useful and iterative. We need to be continuously testing, learning and adapting.

These are our initial thoughts, which have guided some of our recent projects. We’re increasingly working on projects that use Agile-inspired tools and approaches: whether tech, software or data-driven development. We feel that MERL can learn from the Agile project management environments that these digital tools were designed in, which have used the Agile Manifesto to put users at the centre. 

Agile MERL, together with tools like mobile phones and tablets for data collection, democratises MERL by making it more accessible and useful. Not every organisation can afford to conduct a $1m household survey, but most organisations can use approaches like the 10-question PPI survey, the Rapid Feedback Survey or the technology tracker in some capacity. Agile MERL stops MERL from being a tick-box exercise. Instead, it can help users recognise the importance of MERL, and encourage them to put data and evidence-based learning at the heart of their work.

Watch Calum’s MERL Tech video below!

Six priorities for the MERL Tech community

by Linda Raftree, MERL Tech Co-organizer

Participants at the London MERL Tech conference in February 2017 crowdsourced a MERL Tech History timeline (which I've shared in this post). Building on that, we projected out our hopes for a bright MERL Tech future. Then we prioritized our top goals as a group (see below). We'll aim to continue building on these as a sector going forward and would love more thoughts on them.

  1. Figure out how to be responsible with digital data and not put people, communities, or vulnerable groups at risk. Subtopics included: share data with others responsibly without harming anyone; agree a minimum ethical standard for MERL and data collection; agree principles for minimizing the data we collect so that only essential data is captured; develop duty of care principles for MERL Tech and digital data; develop ethical data practices and policies at the organization level; shift the power balance so that the convenience costs of digital data are paid by organizations, not affected populations; and develop a set of quality standards for evaluation using tech. (A minimal sketch of what minimization and pseudonymization can look like in practice follows this list.)
  2. Increase data literacy across the sector, at individual level and within the various communities where we are working.
  3. Overcome the extraction challenge and move towards true downward accountability. Do good user/human-centered design and planning together; be 'leaner' and more user-focused at all stages of planning and MERL. Subtopics included: development of more participatory MERL methods; bringing consensus decision-making to participatory MERL; realizing the potential of tech to shift power and knowledge hierarchies; greater use of appreciative inquiry in participatory MERL; and more relevant use of tech in MERL (less data, more empowering, less extractive, more used).
  4. Integrate MERL into our daily operations to avoid the thinking that it is something 'separate'; move it to the core of operations management and make sure we have the necessary funds to do so; demystify it and make it normal! Subtopics envisioned a future where we've stopped calling "MERL" a "thing" and the norm is to talk about monitoring as part of operations; where data use enables real-time coordination; and where there are no more paper-based surveys.
  5. Improve coordination and interoperability as related to data and tools, both between organizations and within organizations. Subtopics included: more interoperability; more data-sharing platforms; all suitably anonymized data made open; universal exchange of machine-readable M&E data (e.g., standards? IATI? a platform?); sector-wide IATI compliance; tech solutions that enable sharing of qualitative and quantitative data; shared systems used across agencies (e.g., to refer feedback); better coordination; organizations sharing more data; and interoperability of tools. It was emphasized that donors should incentivize this and ensure that there are resources to manage it.
  6. Enhance user-driven and accessible tech that supports impact and increases efficiency, that is open source and can be built on, and that allows for interoperability and consistent systems of measurement and evaluation approaches.
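
As promised above, here is one concrete flavour of what the minimization and pseudonymization ideas in priorities 1 and 5 can look like in code. This is a minimal sketch under assumptions of my own (the field names, the salted-hash approach); it is not an agreed sector standard, and salted hashes of low-entropy identifiers such as phone numbers are pseudonymous rather than truly anonymous.

```python
# Sketch of minimizing and pseudonymizing a feedback record before
# sharing. Field names and the salted-hash approach are illustrative,
# not an agreed sector standard.
import hashlib

# Only the fields deemed essential for the shared analysis survive.
ESSENTIAL_FIELDS = {"district", "feedback_category", "rating"}

SALT = b"store-and-rotate-this-secret-separately"  # hypothetical secret

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a salted one-way hash. Note:
    hashed phone numbers are pseudonymous, not truly anonymous."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:12]

def prepare_for_sharing(record: dict) -> dict:
    """Drop non-essential fields; keep a pseudonymous respondent key
    so repeat feedback can still be linked."""
    shared = {k: v for k, v in record.items() if k in ESSENTIAL_FIELDS}
    shared["respondent_id"] = pseudonymize(record["phone_number"])
    return shared

raw = {"phone_number": "+254700000000", "name": "A. Farmer",
       "gps": "-1.29,36.82", "district": "Kisumu",
       "feedback_category": "pricing", "rating": 4}
print(prepare_for_sharing(raw))
# -> {'district': 'Kisumu', 'feedback_category': 'pricing',
#     'rating': 4, 'respondent_id': '<12 hex chars>'}
```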

In order to move on these priorities, participants felt we needed better coordination and sharing of tools and lessons among the NGO community. This could be through a platform where different innovations and tools are appropriately documented, so that donors and organizations can more easily find good practice and useful tools, and get a sense of 'what's out there' and what it's being used for. This might help us focus on implementing what works where, when, why and how in M&E, based on context, rather than re-inventing the wheel and endlessly pushing for new tools.

Participants also wanted to see MERL Tech as a community that is collaborating to shape the field and to ensure that we are a sector that listens, learns, and adopts good practices. They suggested hosting MERL Tech events and conferences in 'the South', and building out the MERL Tech community to include greater representation of users and developers in order to achieve optimal tools and management processes.

What do you think – have we covered it all? What’s missing?

We have a data problem

by Emily Tomkys, ICT in Programmes at Oxfam GB

Following my presentation at MERL Tech, I have realised that it’s not only Oxfam who have a data problem; many of us have a data problem. In the humanitarian and development space, we collect a lot of data – whether via mobile phone or a paper process, the amount of data each project generates is staggering. Some of this data goes into our MIS (Management Information Systems), but all too often data remains in Excel spreadsheets on computer hard drives, unconnected cloud storage systems or Access and bespoke databases.

(Watch Emily’s MERL Tech London Lightning Talk!)

This is an issue because the majority of our programme data is analysed in silos on a survey-to-survey basis and at best on a project-to-project basis. What about when we want to analyse data between projects, between countries, or even globally? It would currently take a lot of time and resources to bring data together in usable formats. Furthermore, issues of data security, limited support for country teams, data standards and the cost of systems or support mean there is a sustainability problem that is in many people’s interests to solve.

The demand from Oxfam's country teams is high: one of the most common requests the ICT in Programme team receives centres on databases and data analytics. Teams want to be able to store and analyse their data easily and safely, and there is growing demand for cross-border analytics. Our humanitarian managers want to see statistics on the type of feedback we receive globally. Our livelihoods team wants to be able to monitor prices at markets on a national and regional scale. This motivated us to look for a data solution, but it's something we know we can't take on alone.

That's why MERL Tech represented a great opportunity to check in with other peers about potential solutions and areas for collaboration. For now, our proposal is to design a data hub where, no matter what the type of data (unstructured, semi-structured or structured) and no matter how we collect it (mobile data collection tools or paper), our data can integrate into a database. This isn't about creating new tools; rather, it's about focusing on interoperability and smooth transitions between tools and storage options. We plan to set this up so data can be pulled through into a reporting layer, which may offer a mixture of options for quantitative analysis, qualitative analysis and GIS mapping. We also need to give our micro-programme data a home: put everything in one place, regardless of source or format, and make it easy to pull through for analysis.
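
As a toy illustration of that ingestion idea, the sketch below lands records from different source formats in a single queryable store with common metadata. The schema, field names, and the choice of SQLite are my own assumptions for the example, not Oxfam's actual design.

```python
# Toy version of the data hub idea: records from different tools and
# formats land in one queryable store with common metadata. SQLite and
# this three-column schema are illustrative choices, not Oxfam's design.
import csv, io, json, sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE records (
    source  TEXT,   -- which tool or process produced the data
    project TEXT,   -- which programme it belongs to
    payload TEXT    -- the record itself, stored as JSON
)""")

def ingest(source, project, rows):
    """Normalize any iterable of dict records into the hub."""
    con.executemany("INSERT INTO records VALUES (?, ?, ?)",
                    [(source, project, json.dumps(r)) for r in rows])

# Structured data exported from a mobile data collection tool...
csv_export = io.StringIO("market,commodity,price\nKisumu,maize,42\n")
ingest("mobile_survey", "livelihoods", csv.DictReader(csv_export))

# ...and semi-structured feedback typed up from paper forms.
ingest("paper_feedback", "humanitarian",
       [{"text": "Distribution point too far", "category": "access"}])

# A reporting layer can now query across projects in one place.
for source, project, payload in con.execute("SELECT * FROM records"):
    print(source, project, json.loads(payload))
```

The design choice worth noting is that the payload stays schema-free while the metadata columns stay uniform, which is what lets very different tools feed one reporting layer.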

In this way we can explore data holistically, spot trends on a wider scale and really know more about our programmes and act accordingly. Not only should this reduce our cost of analysis, but we will also be able to analyse our data more efficiently and effectively. Moreover, taking a holistic view of the data life cycle will enable us to do data protection by design, and the result will be easier to support because the process and the tools being used will be streamlined. We know that one tool does not and cannot do everything we require when we work in such vast contexts, so a challenge will be how to streamline while factoring in contextual nuances.

Sounds easy, right? We will be starting to explore our options and working on the datahub in the coming months. MERL Tech was a great start to make connections, but we are keen to hear from others about how you are approaching “the data problem” and eager to set something up which can also be used by other actors. So please add your thoughts in the comments or get in touch if you have ideas!

Dropping down your ignorance ratio: Campaigns meet KNIME

by Rodrigo Barahona (Oxfam Intermon, @rbarahona77) and Enrique Rodriguez (Consultant, @datanauta)

A few years ago, we ran a campaign targeting the Guatemalan Government, which generated a good deal of global public support (100,000 signatures, online activism, etc.). This, combined with other advocacy strategies, finally pushed change to happen. We did an evaluation in order to learn from this success, and found a key area where there was little to learn because we were unable to get and analyze the information: we knew almost nothing about which online channels drove traffic to the online petition and which had better conversion rates. We didn't know the source of more than 80% of our signatures, so we couldn't establish recommendations for future similar actions.

Building on the thinking behind the critique of 'vanity metrics', we started developing a system to evaluate public engagement as part of advocacy campaigns and spike actions. We wanted to improve our knowledge of what works and what doesn't when mobilizing citizens to take action (mostly signing petitions or taking other online actions), and to learn which channels were most effective at generating traffic and converting petitions. So we implemented a relatively simple Google Analytics tracking system that helped us determine the source of visits and signatures, establish conversion rates, and so on. The only caveat was that it was time-consuming: extracting and analyzing the information was mostly manual.
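
The mechanics are worth a small sketch: once inbound links are tagged so that Google Analytics can attribute visits to a source, the per-channel analysis reduces to a conversion rate and a share-of-signatures calculation. The channel names and numbers below are made up for illustration; this is not the team's actual workflow.

```python
# Minimal sketch of the per-channel analysis described above. In
# practice the source comes from campaign (UTM-style) parameters
# reported by Google Analytics; all numbers here are made up.

# Visits and completed signatures recorded per traffic source.
channels = {
    "email":     {"visits": 4200, "signatures": 910},
    "facebook":  {"visits": 5100, "signatures": 640},
    "twitter":   {"visits": 2600, "signatures": 230},
    "(unknown)": {"visits": 800,  "signatures": 120},
}

total_signatures = sum(c["signatures"] for c in channels.values())

for name, c in sorted(channels.items(),
                      key=lambda item: item[1]["signatures"],
                      reverse=True):
    conversion = c["signatures"] / c["visits"]  # signatures per visit
    share = c["signatures"] / total_signatures  # share of all signatures
    print(f"{name:10s} conversion {conversion:6.1%} share {share:6.1%}")

# The 'ignorance ratio' from the article: signatures of unknown source.
unknown_share = channels["(unknown)"]["signatures"] / total_signatures
print(f"unknown-source share: {unknown_share:.1%}")
```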

Later on, we were asked to implement the methodology on a complex campaign that had 3 landing/petition pages and 3 exit pages, all in two different languages. Our preliminary analysis was that it would take us up to 8-10 hours of work, with a high risk of mistakes, since it required cross-analysis of up to 12 pages and distinguishing among more than 15 different sources for each page.

But then we met KNIME (the Konstanz Information Miner): a data analytics tool that helped us extract different sets of data from Google Analytics (through plugins), build the data flow visually, and automatically execute part of the analysis. So far, we have automated the capture and analysis of web traffic statistics (Google Analytics), the community of users on Twitter, and the relevance of posts on that social network. We've been able to minimize the risk of errors, focus on defining new indicators and visualizations, and provide reports to draw conclusions and design new, data-based communication strategies in a very short period of time.

KNIME helped us scale up our evaluation system, making it suitable for very complex campaigns, with a significant reduction in time spent and a lower risk of mistakes. Most important of all, introducing KNIME into our system has dropped our ignorance ratio significantly: nowadays we can identify the source of more than 95% of the signatures. This means we can shed light on how different strategies are working, which channels are bringing more visits to the different landing pages, and which have the higher conversion rates. All of this is relevant information for making decisions, adapting strategies and improving the outputs of a campaign.

Watch Rodrigo’s MERL Tech Lightning Talk here!

 

5 Insights from MERL Tech 2016

By Katherine Haugh, a visual note-taker who summarizes content in a visually simple manner while keeping the complexity of the subject matter. Originally published on Katherine's blog on October 20, 2015, and on ICT Works on January 18, 2016.


Recently, I had the opportunity to participate in the 2015 MERL Tech conference that brought together over 260 people from 157 different organizations. I joined the conference as a “visual note-taker,” and I documented the lightning talks, luncheon discussions, and breakout sessions with a mix of infographics, symbols and text.

Experiencing several "a-ha" moments myself, I thought it would be helpful to go a step further than just documenting what was covered and add some insights of my own. Five clear themes stood out to me: 1) there is such a thing as "too much data"; 2) "lessons learned" is like a song on repeat; 3) humans > computers; 4) sharing is caring; 5) social impact investment is crucial.

1) There is such a thing as “too much data.”

MERL Tech 2015 began with a presentation by Ben Ramalingam, who explained that "big data is like teenage sex. No one knows how to do it and everyone thinks that everyone else is doing it." In addition to being the most widely tweeted quote at the conference, Ben's point drew laughter and nods of approval, and was well received by the audience. The fervor for collecting more and more data has, ironically, been limiting the ability of organizations to meaningfully understand their data and carry out data-driven decision-making.

Additionally, I attended the breakout session on "data minimalism" with Vanessa Corlazzoli, Monalisa Salib (from USAID LEARN), and Teresa Crawford, which further emphasized this point.

The session covered the ways that we can identify key learning questions and pinpoint need-to-have data (not nice-to-have data) to be able to answer those questions. [What this looks like in practice: a survey with only five questions. Yes, just five questions.] This approach to data collection enforces the need to think critically at each step about what is needed and absolutely necessary, as opposed to collecting as much as possible and then thinking about what is "usable" later.

2) “Lessons learned” is like a song on repeat.

Like a popular song, the term "lessons learned" has been on repeat for many M&E practitioners (myself included). How many reports have we seen that conclude with lessons learned that are never actually learned? Having concluded my own capstone project with a set of "lessons learned," I am at fault for this as well. In her lightning talk on "Lessons Not Learned in MERL," Susan Davis explained that "while it's OK to re-invent the wheel, it's not OK to re-invent a flat tire."

It seems that we are learning the same “lessons” over and over again in the M&E-tech field and never implementing or adapting in accordance with those lessons. Susan suggested we retire the “MERL” acronym and update to “MERLA” (monitoring, evaluation, research, learning and adaptation).

How do we bridge the gap between M&E findings and organizational decision-making? Dave Algoso has some answers. (In fact, just to get a little meta here: Dave Algoso wrote about "lessons not learned" last year at M&E Tech, and now we're learning about "lessons not learned" again at MERL Tech 2015. Just some food for thought.) A tip from Susan for not re-inventing a flat tire: find other practitioners who have done similar work and look over their "lessons learned" before writing your own. Stay tuned for more on this at FailFest 2015 in December!

3) Humans > computers.

Who would have thought that at a tech-related conference, a theme would be the need for more human control and insight? Not me, that's for sure! A funny aside: I have long been fearful that the plot of the Will Smith movie "I, Robot" would become a reality. I now feel slightly more assured that this won't happen, given the consensus at this conference and others on the need for humans in the M&E process (and in the world). As Ben Ramalingam so eloquently explained, "you can't use technology to substitute humans; use technology to understand humans."

4) Sharing is caring.

Circling back to the lessons-learned-on-repeat point, "sharing is caring" is definitely one we've heard before. Jacob Korenblum emphasized the need for more sharing in the M&E field and suggested three mechanisms for publicizing M&E results: 1) understanding the existing ecosystem (e.g., the decision between using WhatsApp in Jordan or in Malawi); 2) building feedback loops directly into M&E design; and 3) creating and tracking indicators related to sharing. Dave Algoso also expands on this concept in TechChange's TC111 course on Technology for Monitoring and Evaluation; Dave explains that bridging the gaps between the different levels of learning (individual, organizational, and sectoral) is necessary for building the overall knowledge of the field, which spans beyond the scope of a singular project.

5) Social impact investment is crucial.

I've heard this at other conferences I've attended, like the Millennial Action Project's Congressional Summit on Next Generation Leadership and many others. As a panelist on "The Future of MERLTech: A Donor View," Nancy McPherson got right down to business: she addressed the elephant in the room by asking questions about "who the data is really for" and "what projects are really about." Nancy emphasized the need for role reversal if we as practitioners and researchers are genuine in our pursuit of "locally-led initiatives." I couldn't agree more. In addition to explaining that social impact investing is the new frontier for donors in this space, she also gave a brief synopsis of trends in the evaluation field (a topic that my brilliant colleague Deborah Grodzicki and I will be expanding on; stay tuned!).