Maturity Models: Visualizing Progress Towards Next-Generation Transparency and Accountability

By Alison Miranda (TAI) and Megan Colnar (Open Society Foundation). This is a cross-post of a piece published on September 17th on the Transparency and Accountability Initiative’s blog.

How can we assess progress on a second-generation way of working in the transparency, accountability and participation (TAP) field? Monitoring, evaluation, research, and learning (MERL) maturity models can provide some inspiration. The 2017 MERL Tech conference in Washington, DC was a two-day bonanza of lightning talks, plenary sessions, and hands-on workshops among participants who use technology for MERL.

Here are key conference takeaways from two MEL practitioners in the TAP field.

1. Making open data useful

Several MERL Tech sessions resonated deeply with the TAP field’s efforts to transition from fighting for transparent and open data towards linking open data to accountability and governance outcomes. Development Gateway and InterAction drew on findings from “Avoiding Data Graveyards” as we explored progress and challenges for International Aid Transparency Initiative (IATI) data use cases. While transparency is an important value, what is gained (or lost) in data use for collaboration when there are many different potential data consumers?

A partnership between Freedom House and DataKind is moving the Freedom in the World study towards a more transparent display of index sub-indicators, and building a more robust – and usable! – data set by reformatting and integrating their data and other secondary big data sets. What could such an initiative yield for the Extractive Industry Transparency Initiative (EITI), for example, if equivalent data sets were available?

And finally, as TAP practitioners are keenly aware, power and politics can overshadow evidence in decision making. Another Development Gateway presentation reminded us that it is important to work with data producers and users to identify decisions that are (or can be) data-driven, and to recognize when other factors are driving decisions. (The incentives to supply open data are a whole other can of worms!)

Drawing on our second-generation TAP approach, we see that more work is needed for the TAP and MERL fields to move from “open data everywhere, all of the time” to planning for, and encouraging, more effective data use.

2. Tech for MERL for improved policy, practice, and outcomes

Among our favorite moments at MERL Tech was when Dobility Founder and CEO Christopher Robert remarked that “the most interesting innovations at MERL Tech aren’t the new, cutting-edge technology developments, but generic technology applied in innovative ways.” Unsurprising for a tech company focused on using affordable technology to enable quality data collection for social impact, but a refreshing reminder amidst the talk of ‘AI’, ‘chatbots’, and ‘blockchains’ for development coursing through the conference.

The TAP field is certainly no stranger to employing technology, from apps to curb trade corruption in Nigeria, to Citizen Helpdesks in Nepal, Liberia, and Mali, to crowdsourced political campaign expenditure monitoring in Bolivia, but our second-generation TAP insights remind us that technology tools are not an end in themselves. MERL and technology are our means for collecting effective data, generating important insights and learning, building larger movements, and gathering context-specific evidence on transparency and accountability.

We are undoubtedly on the precipice of revolutionary technological advancements that can be readily (and maybe even affordably) deployed[1] to solve complex global challenges, but they will still be tools and not solutions.

3. Balancing inclusion and participation with efficiency and accuracy

We explored a constant conundrum for MERL: how to balance inclusion and participation with efficiency and accuracy. Girl Effect and Praekelt Foundation took “mixed methods” research to another level, combining online and offline efforts to understand user needs of adolescent girls and to support user-developed digital media content. Their iterative process showcased an effective way to integrate tech into the balancing act of inclusive – and holistic – design, paired with real-time data use.

This session on technology in citizen-generated data brought to light two case studies of how tech can both help and hinder this balancing act. The World Café discussions underscored the importance of planning for – and recognizing the constraints on – feedback loops, and provided a helpful reminder that MERL and tech professionals are often considering different “end users” in their design work!

So, which is it – balancing act or zero-sum game between inclusion and efficiency? The MERL community has long applied participatory methods. And tech solutions abound that can help with efficiency, accuracy, and inclusion. Indeed, the second-generation TAP focus on learning and collaboration is grounded in effective data use – but there are many potential “end users” to consider. These principles and practices can force uncomfortable compromises – particularly in the face of finite resources and limited data availability – but they are not at odds with each other. Perhaps the MERL and TAP communities can draw lessons from each other in striking the right balance.

4. Tech sees no development sector silos

One of the things that makes MERL Tech such an exciting conference is the deliberate mixing of tech nerds with MERL nerds. It’s pretty unique in its dual targeting of both types of professionals who share a common purpose of social impact (whereas conferences like ICT4D cast a wider net, looking at the application of technology to broader development issues). And, though we MERL professionals like to think of design and iteration as squarely within our wheelhouse, being in a room full of tech experts can quickly remind you that our adaptation game has a lot of room to grow. We talk about user-centered design in TAP, but when the tech crowd was asked in plenary “would you even think of designing software or an app without knowing who was going to use it?” they responded with a loud and exuberant laugh.

Tech has long employed systematic approaches to user-centered design, prototyping, iteration, and adaptation, all of which can offer compelling lessons to guide MERL practices and methods. Though we know Context is King, it is exhilarating to know that the tech specialists assembled at the conference work across traditional silos of development work (from health to corruption, and everything in between). End users are, of course, crucial to the final product, but the life cycle process and steps follow a regular pattern, regardless of the topic area or users.

The second-generation wave in TAP similarly moved away from project-specific, fragmented, or siloed planning and learning towards a focus on collective approaches and long-term, more organic engagement.

American Evaluation Association President Kathy Newcomer quipped that maybe an ‘Academy Awards for Adaptation’ could inspire better-informed and more adept evolutions of strategy as circumstances and context shift around us. Adding to this, and borrowing from the tech community, we wonder where we can build more room to beta test, follow real demand, and fail fast. Are we looking towards other sectors and industries enough, or continuing to reinvent the wheel?

Alison left thinking:

  • Concepts and practices are colliding across the overlapping MERL, tech, and TAP worlds! In leading the Transparency and Accountability Initiative’s learning strategy, and supporting our work on data use for accountability, I often find myself toggling between different meanings of ‘data’, ‘data users’, and tech applications that can enable both of these themes in our strategy. These worlds don’t have to be compatible all the time, and concepts don’t have to compute immediately (I am personally still working out hypothetical blockchain applications for my MERL work!). But this collision of worlds is a helpful reminder that there are many perspectives to draw from in tackling accountable governance outcomes.
  • Maturity models come in all shapes and sizes, as we saw in the creative depictions created at MERL Tech, which included steps, arrows, paths, circles, cycles, and carrots! And the transparency and accountability field is collectively pursuing a next generation of more effective practice that will take unique turns for different accountability actors and outcomes. Regardless of what our organizational or programmatic models look like, MERL Tech reminded me that champions of continuous improvement are needed at all stages of the model – in MERL, in tech for development, and in the TAP field.

Megan left thinking:

  • I am beginning to feel like I’m in a Dr. Seuss book. We talked ‘big data’, ‘small data’, ‘lean data’, and ‘thick data’. Such jargon-filled conversations can be useful for communicating complex concepts simply with others. Ah, but this is also the problem. This shorthand glosses over the nuances that explain what we actually mean. Jargon is also exclusive—it clearly defines the limits of your community and makes it difficult for newcomers. In TAP, I can’t help but see missed opportunities for connecting our work to other development sectors. How can health outcomes improve without holding governments and service providers accountable for delivering quality healthcare? How can smallholder farmers expect better prices without governments budgeting for and building better roads? Jargon is helpful until it divides us up. We have collective, global problems and we need to figure out how to talk to each other if we’re going to solve them.
  • In general, I’m observing a trend towards organic, participatory, and inclusive processes—in MERL, in TAP, and across the board in development and governance work. This is, almost universally speaking, a good thing. In MERL, a lot of this movement is a backlash to randomistas and the imposition of The RCT Gold Standard on social impact work. And, while I confess to being overjoyed that the “RCT-or-bust” mindset is fading out, I can’t help but think we’re on a slippery slope. We need scientific rigor, validation, and objective evidence. There has to be a line between ‘asking some good questions’ and ‘conducting an evaluation’. Collectively, we are working to eradicate unjust systems and eliminate poverty, and these issues require not just our best efforts and intentions, but workable solutions. Listen to Freakonomics’ recent podcast When Helping Hurts and commit with me to find ways to keep participatory and inclusive evaluation techniques rigorous and scientific, too.

[1] https://channels.theinnovationenterprise.com/articles/ai-in-developing-countries

Building bridges between evaluators and big data analysts

By Michael Bamberger, Independent Evaluation Consultant. Michael has been involved in development evaluation for 50 years and recently wrote the report: “Integrating Big Data into the Monitoring and Evaluation of Development Programs” for UN Global Pulse.

In Part 1 of this series we argued that, while applications of big data and data analytics are expanding rapidly in many areas of development programs, evaluators have been slow to adopt these applications. We predicted that one possible future scenario could be that evaluation may no longer be considered a separate function, and that it may be treated as one of the outputs of the integrated information systems that will gradually be adopted by many development agencies. Furthermore, many evaluations will use data analytics approaches, rather than conventional evaluation designs. (Image: Big Data session notes from USAIDLearning’s Katherine Haugh [@katherine_haugh]. MERL Tech DC 2016).

Here, in Part 2 we identify some of the reasons why development evaluators have been slow to adopt big data analytics and we propose some promising approaches for building bridges between evaluators and data analysts.

Why have evaluators been slow to adopt big data analytics?

Caroline Heider at the World Bank Independent Evaluation Group identifies four sets of data collection-related challenges affecting the adoption of new technologies by evaluators: ethics, governance, biases (potentially amplified through the use of ICT), and capacity.

We also see:

1. Weak institutional linkages. Over the past few years some development agencies have created data centers to explore ways to exploit new information technologies. These centers are mainly staffed by people with a background in data science or statistics, and the institutional links to the agency’s evaluation office are often weak.

2. Many evaluators have limited familiarity with big data/analytics. Evaluation training programs tend to only present conventional experimental, quasi-experimental and mixed-methods/qualitative designs. They usually do not cover smart data analytics (see Part 1 of this blog). Similarly, many data scientists do not have a background in conventional evaluation methodology (though there are of course exceptions).

3. Methodological differences. Many big data approaches do not conform to the basic principles that underpin conventional program evaluation, for example:

  • Data quality: real-time big data provides one of the potentially most powerful sources of data for development programs. Among other things, real-time data can provide early warning signals of potential diseases (e.g. Google Flu), ethnic tension, drought and poverty (Meier 2015). However, when an evaluator asks if the data is biased or of poor quality, the data analyst may respond “Sure the data is biased (e.g. only captured from mobile phone users or twitter feeds) and it may be of poor quality. All data is biased and usually of poor quality, but it does not matter because tomorrow we will have new data.” This reflects the very different kinds of data that evaluators and data analysts typically work with, and the difference can be explained, but a statement such as the above can create the impression that data analysts do not take issues of bias and data quality very seriously.
  • Data mining: Many data analytics methods are based on the mining of large data sets to identify patterns of correlation, which are then built into predictive models, normally using Bayesian statistics. Many evaluators frown on data mining because it can identify spurious associations.
  • The role of theory: Most (but not all) evaluators believe that an evaluation design should be based on a theoretical framework (theory of change or program theory) that hypothesizes the processes through which the intended outcomes will be achieved. In contrast, there is plenty of debate among data analysts concerning the role of theory, and whether it is necessary at all. Some even go as far as to claim that data analytics means “the end of theory” (Anderson 2008). This, combined with data mining, creates the impression among some evaluators that data analytics uses whatever data is easily accessible, with no theoretical framework to guide the selection of evaluation questions or to assess the adequacy of available data.
  • Experimental designs versus predictive analytics: Most quantitative evaluation designs are based on an experimental or quasi-experimental approach using a pretest/posttest comparison group design. Given the high cost of data collection, statistical power calculations are frequently used to estimate the minimum sample size required to ensure a certain level of statistical significance (a minimal sketch of such a calculation follows below). Usually this means that analysis can only be conducted on the total sample, as the sample size does not permit statistical significance testing for sub-samples. In contrast, predictive analytics usually employs Bayesian probability models. Due to the low cost of data collection and analysis, it is usually possible to conduct the analysis on the total population (rather than a sample), so that disaggregated analysis can be conducted to compare sub-populations, and often (particularly when also using machine learning) to compute outcome probabilities for individual subjects. There continue to be heated debates concerning the merits of each approach, and there has been much less discussion of how experimental and predictive analytics approaches could complement each other.
As Pete York at CommunityScience.com observes: “Herein lies the opportunity – we evaluators can’t battle the wave of big data and data science that will transform the way we do research. However, we can force it to have to succumb to the rules of objective rigor via the scientific method. Evaluators/researchers train people how to do it, they can train machines. We are already doing so.”  (Personal communication 8/7/17)
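
As a concrete illustration of the power-calculation step mentioned above, here is a minimal sketch (not from the original post) of sizing a two-arm experimental sample using the standard normal approximation; the effect size, significance level, and power values are purely illustrative assumptions.

```python
import math
from scipy.stats import norm

def sample_size_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate n per arm for a two-sample comparison of means
    (normal approximation), given a standardized effect size."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # power requirement
    return math.ceil(2 * ((z_alpha + z_beta) ** 2) / effect_size ** 2)

# Illustrative numbers: detecting a 0.25 standard-deviation effect at the
# conventional 5% significance level with 80% power.
print(sample_size_per_arm(0.25))        # about 252 respondents per arm
```

A predictive-analytics workflow, working on the full population rather than a sample, would typically skip this sampling step altogether.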

4. Ethical and political concerns: Many evaluators also have concerns about who designs and markets big data apps and who benefits financially. Many commercial agencies collect data on low-income populations (for example, their consumption patterns), which may then be sold to consumer products companies with little or no benefit going to the populations from which the information was collected. Some of the algorithms may also include a bias against poor and vulnerable groups (O’Neil 2016) that is difficult to detect given the proprietary nature of the algorithms.

Another set of issues concerns whether the ways in which big data are collected and used (for making decisions affecting poor and vulnerable groups) tend to be exclusive (governments and donors use big data to make decisions about programs affecting the poor without consulting them), or whether big data is used to promote inclusion (giving voice to vulnerable groups). These issues are discussed in a recent Rockefeller Foundation blog. There are also many issues around privacy and data security. There is of course no simple answer to these questions, but many of these concerns are often lurking in the background when evaluators are considering the possibility of incorporating big data into their evaluations.

Table 1. Reasons evaluators have been slow to adopt big data, and opportunities for bridge building between evaluators and data analysts (each reason is followed by the corresponding bridge-building opportunities)

1. Weak institutional linkages
  • Strengthening formal and informal links between data centers and evaluators
2. Evaluators have limited knowledge about big data and data analytics
  • Capacity development programs covering both big data and conventional evaluation
  • Collaborative pilot evaluation projects
3. Methodological differences
  • Creating opportunities for dialogue to explore differences and to determine how they can be reconciled
  • Viewing data analytics and evaluation as being complementary rather than competing
4. Ethical and political concerns about big data
  • Greater focus on ethical codes of conduct, privacy and data security
  • Focusing on making approaches to big data and evaluation inclusive and avoiding exclusive/extractive approaches

Building bridges between evaluators and big data/analytics 

There are a number of possible steps that could be taken to build bridges between evaluators and big data analysts, and thus to promote the integration of big data into development evaluation. Catherine Cheney (2016) presents interviews with a number of data scientists and development practitioners stressing that data-driven development needs both social and computer scientists. No single approach is likely to be successful, and the best approach(es) will depend on each specific context, but we could consider:

  • Strengthening the formal and informal linkages between data centers and evaluation offices. It may be possible to achieve this within the existing organizational structure, but it will often require some formal organizational changes in terms of lines of communication. Linda Raftree provides a useful framework for understanding how different “buckets” of data (including among others, traditional data and big data) can be brought together, which suggests one pathway to collaboration between data centers and evaluation offices.
  • Identifying opportunities for collaborative pilot projects. A useful starting point may be to identify opportunities for collaboration on pilot projects in order to test/demonstrate the value-added of cooperation between the data analysts and evaluators. The pilots should be carefully selected to ensure that both groups are involved equally in the design of the initiative. Time should be budgeted to promote team-building so that each team can understand the other’s approach.
  • Promoting dialogue to explore ways to reconcile differences of approach and methodology between big data and evaluation. While many of these differences may at first appear to be based on fundamental differences of approach, at least some result in part from questions of terminology, and in other cases different approaches can be applied at different stages of the evaluation process. For example:
    • Many evaluators are suspicious of real-time data from sources such as Twitter, or of the analysis of phone records, due to selection bias and issues of data quality. However, evaluators are familiar with exploratory data (collected, for example, during project visits, or as feedback from staff), which is then checked more systematically in a follow-up study. When presented in this way, the two teams would be able to discuss, in a non-confrontational way, how many kinds of real-time data could be built into evaluation designs.
    • When using Bayesian probability analysis it is necessary to begin with a prior distribution. The probabilities are then updated as more data become available. The results of a conventional experimental design can often be used as an input to the definition of the prior distribution. Consequently, it may be possible to consider experimental designs and Bayesian probability analysis as sequential stages of an evaluation rather than as competing approaches (a minimal sketch of this kind of updating follows this list).
  • Integrated capacity development programs for data analysts and evaluators. These activities would both help develop a broader common methodological framework and serve as an opportunity for team building.
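
To make the “sequential stages” idea above concrete, here is a minimal sketch (not from the original post) of a conjugate normal-normal Bayesian update in which an earlier experimental impact estimate serves as the prior and new monitoring data update it; all numbers are invented for illustration.

```python
import numpy as np

# Prior: effect estimate from an earlier experimental evaluation
prior_mean, prior_sd = 0.20, 0.08            # e.g. +0.20 SD effect, standard error 0.08

# New data: outcome changes observed in a later (e.g. real-time) monitoring round
rng = np.random.default_rng(1)
new_data = rng.normal(0.15, 0.30, size=200)  # invented observations
data_mean = new_data.mean()
data_se = new_data.std(ddof=1) / np.sqrt(len(new_data))

# Conjugate normal-normal update: precision-weighted average of prior and data
prior_prec, data_prec = 1 / prior_sd**2, 1 / data_se**2
post_prec = prior_prec + data_prec
post_mean = (prior_mean * prior_prec + data_mean * data_prec) / post_prec
post_sd = np.sqrt(1 / post_prec)

print(f"posterior effect: {post_mean:.3f} +/- {post_sd:.3f}")
```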

Conclusion

There are a number of factors that together explain the slow take-up of big data and data analytics by development evaluators. A number of promising approaches are proposed for building bridges to overcome these barriers and to promote the integration of big data into development evaluation.

See Part 1 for a list of useful references!

Buckets of data for MERL

by Linda Raftree, Independent Consultant and MERL Tech Organizer

It can be overwhelming to get your head around all the different kinds of data and the various approaches to collecting or finding data for development and humanitarian monitoring, evaluation, research and learning (MERL).

Though there are many ways of categorizing data, lately I find myself conceptually organizing data streams into four general buckets when thinking about MERL in the aid and development space:

  1. ‘Traditional’ data. How we’ve been doing things for (pretty much) ever. Researchers, evaluators and/or enumerators are in relative control of the process. They design a specific questionnaire or a data gathering process and go out and collect qualitative or quantitative data; they send out a survey and request feedback; they do focus group discussions or interviews; or they collect data on paper and eventually digitize the data for analysis and decision-making. Increasingly, we’re using digital tools for all of these processes, but they are still quite traditional approaches (and there is nothing wrong with traditional!).
  2. ‘Found’ data. The Internet, digital data and open data have made it lots easier to find, share, and re-use datasets collected by others, whether this is internally in our own organizations, with partners or just in general. These tend to be datasets collected in traditional ways, such as government or agency data sets. In cases where the datasets are digitized and have proper descriptions, clear provenance, consent has been obtained for use/re-use, and care has been taken to de-identify them, they can eliminate the need to collect the same data over again. Data hubs are springing up that aim to collect and organize these data sets to make them easier to find and use.
  3. ‘Seamless’ data. Development and humanitarian agencies are increasingly using digital applications and platforms in their work — whether bespoke or commercially available ones. Data generated by users of these platforms can provide insights that help answer specific questions about their behaviors, and the data is not limited to quantitative data. This data is normally used to improve applications and platform experiences, interfaces, content, etc., but it can also provide clues into a host of other online and offline behaviors, including knowledge, attitudes, and practices. One cautionary note is that because this data is collected seamlessly, users of these tools and platforms may not realize that they are generating data or understand the degree to which their behaviors are being tracked and used for MERL purposes (even if they’ve checked “I agree” to the terms and conditions). This has big implications for privacy that organizations should think about, especially as new regulations are being developed, such as the EU’s General Data Protection Regulation (GDPR). The commercial sector is great at this type of data analysis, but the development set are only just starting to get more sophisticated at it.
  4. ‘Big’ data. In addition to data generated ‘seamlessly’ by platforms and applications, there are also ‘big data’ and data that exist on the Internet that can be ‘harvested’ if one only knows how. The term ‘big data’ describes the application of analytical techniques to search, aggregate, and cross-reference large data sets in order to develop intelligence and insights. (See this post for a good overview of big data and some of the associated challenges and concerns). Data harvesting is a term used for the process of finding and turning ‘unstructured’ content (message boards, a webpage, a PDF file, Tweets, videos, comments) into ‘semi-structured’ data so that it can then be analyzed (a small sketch of this kind of harvesting follows this list). (Estimates are that 90 percent of the data on the Internet exists as unstructured content). Currently, big data seems to be more apt for predictive modeling than for looking backward at how well a program performed or what impact it had. Development and humanitarian organizations (self included) are only just starting to better understand concepts around big data and how it might be used for MERL. (This is a useful primer).
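
To illustrate what turning unstructured content into semi-structured data can look like in practice, here is a small sketch that pulls hashtags, mentions, and a simple length measure out of free-text posts into tabular records; the sample posts, field names, and output file are invented for illustration only.

```python
import csv
import re

# Invented example posts standing in for unstructured content
# (tweets, comments, message-board text).
posts = [
    "Clinic in #Kitui had no nurses again today @CountyHealth",
    "Thanks @WaterTrust - new borehole working! #wash #Kitui",
    "Road to the market flooded for the third week #infrastructure",
]

def to_record(post_id, text):
    """Turn one free-text post into a semi-structured record."""
    return {
        "id": post_id,
        "text": text,
        "hashtags": ";".join(re.findall(r"#(\w+)", text)),
        "mentions": ";".join(re.findall(r"@(\w+)", text)),
        "n_words": len(text.split()),
    }

records = [to_record(i, t) for i, t in enumerate(posts)]

# Write the semi-structured output so it can be analyzed alongside other data.
with open("harvested_posts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=records[0].keys())
    writer.writeheader()
    writer.writerows(records)
```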

Thinking about these four buckets of data can help MERL practitioners to identify data sources and how they might complement one another in a MERL plan. Categorizing them as such can also help to map out how the different kinds of data will be responsibly collected/found/harvested, stored, shared, used, and maintained/retained/destroyed. Each type of data also has certain implications in terms of privacy, consent and use/re-use and how it is stored and protected. Planning for the use of different data sources and types can also help organizations choose the data management systems needed and identify the resources, capacities and skill sets required (or needing to be acquired) for modern MERL.

Organizations and evaluators are increasingly comfortable using mobile and/or tablets to do traditional data gathering, but they often are not using ‘found’ datasets. This may be because these datasets are not very ‘find-able,’ because organizations are not creating them, re-using data is not a common practice for them, the data are of questionable quality/integrity, there are no descriptors, or a variety of other reasons.

The use of ‘seamless’ data is something that development and humanitarian agencies might want to get better at. Even though large swaths of the populations that we work with are not yet online, this is changing. And if we are using digital tools and applications in our work, we shouldn’t let that data go to waste if it can help us improve our services or better understand the impact and value of the programs we are implementing. (At the very least, we had better understand what seamless data the tools, applications and platforms we’re using are collecting so that we can manage data privacy and security of our users and ensure they are not being violated by third parties!)

Big data is also new to the development sector, and there may be good reason it is not yet widely used. Many of the populations we are working with are not producing much data — though this is also changing as digital financial services and mobile phone use have become almost universal and the use of smart phones is on the rise. Normally organizations require new knowledge, skills, partnerships and tools to access and use existing big data sets or to do any data harvesting. Some say that big data along with ‘seamless’ data will one day replace our current form of MERL. As artificial intelligence and machine learning advance, who knows… (and it’s not only MERL practitioners who will be out of a job – but that’s a conversation for another time!)

Not every organization needs to be using all four of these kinds of data, but we should at least be aware that they are out there and consider whether they are of use to our MERL efforts, depending on what our programs look like, who we are working with, and what kind of MERL we are tasked with.

I’m curious how other people conceptualize their buckets of data, and where I’ve missed something or defined these buckets erroneously…. Thoughts?

Better or different or both?

by Linda Raftree, Independent Consultant and MERL Tech Organizer

As we delve into why, when, where, if, and how to incorporate various types of technology and digital data tools and approaches into monitoring, evaluation, research and learning (MERL), it can be helpful to think about MERL technologies from two angles:

  1. Doing our work better:  How can new technologies and approaches help us do what we’ve always done — the things that we know are working and having an impact — but do them better? (E.g., faster, with higher quality, more efficiently, less expensively, with greater reach or more inclusion of different voices)
  2. Doing our work differently:  What brand new, previously unthinkable things can be done because of new technologies and approaches? How might these totally new ideas contribute positively to our work or push us to work in an entirely different way?

Sometimes these two things happen simultaneously and sometimes they do not. Some organizations are better at Thing 1, and others are set up well to explore Thing 2. Not all organizations need to feel pressured into doing Thing 2, however, and sometimes it can be a distraction from Thing 1. Some organizations may be better off letting early adopters focus on Thing 2 and investing their own budgets and energy in Thing 1 until innovations have been tried and tested by the early adopters. Organizations may also have staff members or teams working on both Thing 1 and Thing 2 separately. Others may conceptualize this as a process or pathway moving from Thing 2 to Thing 1, where Thing 2 (once tested and evaluated) is a pipeline into Thing 1.

Here are some potentially useful past discussions on the topic of innovations within development organizations that flesh out some of these thoughts:

Many of the new tools and approaches that were considered experimental 10 years ago have moved from being “brand new and innovative” to simply “helping us do what we’ve always done.” Some of these earlier “innovations” are related to digital data and data collection and processing, and they help us do better monitoring, evaluation and research.

On the flip side, monitoring, evaluation and research have played a key role in helping organizations and the sector overall learn more about how, where, when, why and in what contexts these different tools and approaches (including digital data for MERL) can be adopted. MERL on ICT4D and Digital Development approaches can help calibrate the “hype cycle” and weed out the shiny new tools and approaches that are actually not very effective or useful to the sector and highlight those that cause harm or put people at risk.

There are always going to be new tools and approaches that emerge. Humanitarian and development organizations, then, need to think strategically about what kind of organization they are (or want to be) and where they fit on the MERL Tech continuum between Thing 1 and Thing 2.

What capacities does an organization have for working on Thing 2 (brand new and different)? When and for how long should an organization focus on Thing 1, building on what it knows is working or could work, while keeping an eye on the early adopters who are working on Thing 2? When does an organization have enough “proof” to start adopting new tools and approaches that seem to add value? How are these new tools and approaches being monitored, evaluated and researched to improve our use of them?

It’s difficult for widespread adoption to happen in the development space, where there is normally limited time and capacity for failure or for experimentation, without solid MERL. And even with “solid MERL” it can be difficult for organizations to adapt and change due to a multitude of factors, both internal and external.

I’m looking forward to September’s MERL Tech Conference in DC where we have some sessions that explore “the MERL on ICT4MERL?” and others that examine aspects of organizational change related to adopting newer MERL Tech tools and approaches.

(Register here if you haven’t already!)

Discrete choice experiment (DCE) to generate weights for a multidimensional index

In his MERL Tech Lightning Talk, Simone Lombardini, Global Impact Evaluation Adviser, Oxfam, discussed his experience with an innovative method for applying tech to help determine appropriate metrics for measuring concepts that escape easy definition. To frame his talk, he referenced Oxfam’s recent experience with using discrete choice experiments (DCE) to establish a strategy for measuring women’s empowerment.

Two methods already exist, Simone points out, for transforming soft concepts into hard metrics. First, the evaluator could assume full authority and responsibility over defining the metrics. Alternatively, the evaluator could design the evaluation so that relevant stakeholders are incorporated into the process and use their input to help define the metrics.

Though both methods are common, they are missing (for practical reasons) the level of mass input that could make them truly accurate reflections of the social perception of whatever concept is being considered. Tech has a role to play in scaling the quantity of input that can be collected. If used correctly, this could lead to better evaluation metrics.

Simone described this approach as “context-specific” and “multi-dimensional.” The process starts by defining the relevant characteristics (such as those found in empowered women) in their social context, then translating these characteristics into indicators, and finally combining indicators into one empowerment index for evaluating the project.

After the characteristics are defined, a discrete choice experiment can be used to determine each characteristic’s “weight” in a particular social context. A discrete choice experiment (DCE) is a technique that has frequently been used in health economics and marketing, but not much in impact evaluation. To implement a DCE, researchers present different hypothetical scenarios to respondents and ask them to decide which one they consider to best reflect the concept in question (i.e. women’s empowerment). The responses are used to assess the indicators covered by the DCE, and these can then be used to develop an empowerment index.
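
To give a sense of what the analysis step can look like, here is a minimal sketch (not Oxfam’s actual implementation) of estimating indicator weights from paired-choice DCE responses using a simple conditional logit fitted by gradient ascent; the data are simulated and the indicator names are placeholders.

```python
import numpy as np

# Simulated DCE data: each task shows a respondent two hypothetical scenarios,
# each described by binary indicators (placeholder names below). The respondent
# picks the scenario they consider more "empowered".
rng = np.random.default_rng(0)
n_tasks, n_attrs = 500, 3
true_w = np.array([1.2, 0.4, 0.8])              # weights the simulation hides

X_a = rng.integers(0, 2, (n_tasks, n_attrs))    # indicators in scenario A
X_b = rng.integers(0, 2, (n_tasks, n_attrs))    # indicators in scenario B
diff = X_a - X_b
p_choose_a = 1 / (1 + np.exp(-diff @ true_w))
y = (rng.random(n_tasks) < p_choose_a).astype(float)   # 1 = chose scenario A

# A conditional logit with two alternatives reduces to a logistic regression on
# the difference in attributes; fit it by gradient ascent on the log-likelihood.
w = np.zeros(n_attrs)
for _ in range(3000):
    p = 1 / (1 + np.exp(-diff @ w))
    w += 0.5 * diff.T @ (y - p) / n_tasks       # average gradient step

weights = w / w.sum()                           # normalize into index weights
print(dict(zip(["income", "assets", "decisions"], weights.round(2))))
```

Normalizing the estimated coefficients gives the kind of weights that can then be combined into a single empowerment index.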

In practice, the DCE was integrated into the data collection process, adding about 10 minutes at the end of a one-hour survey, and was made practicable by the ubiquity of smartphones. The results from Oxfam’s trial run using this method are still being analyzed. For more on this, watch Lombardini’s video below!

Community-led mobile research–What could it look like?

Adam Groves, Head of Programs at On Our Radar, gave a presentation at MERL Tech London in February where he elaborated on a new method for collecting qualitative ethnographic data remotely.

The problem On Our Radar sought to confront, Adam declared, is the cold and impenetrable bureaucratic machinery of complex organizations. To many people, the unresponsiveness and inhumanity of the bureaucracies that provide them with services is dispiriting, and this is a challenge to overcome for anyone who wants to provide a quality service.

On Our Radar’s solution is to enable people to share their real-time experiences of services by recording audio and SMS diaries with their basic mobile phones. Because of the intimacy they capture, these first-person accounts have the capacity to grab the people behind services and make them listen to and experience the customer’s thoughts and feelings as they happened.

Responses obtained from audio and SMS diaries are different from those obtained through other qualitative data collection methods because, unlike solutions that crowdsource feedback, these diaries contain responses from a small group of trained citizen reporters who share their experiences over a sustained period of time. The product is a rich and textured insight into the reporters’ emotions and priorities. One can track their journeys through services and across systems.

On Our Radar worked with British Telecom (BT) to implement this technique. The objective was to help BT understand how their customers with dementia experience their services. Over a few weeks, forty people living with dementia recorded audio diaries about their experiences dealing with big companies.

Adam explained how the audio diary method was effective for this project:

  • Because diaries and dialogues are in real time, they captured emotional highs and lows (such as the anxiety of picking up the phone and making a call) that would not be recalled in after-the-fact interviews.
  • Because diaries are focused on individuals and their journeys instead of on discrete interactions with specific services, they showed how encountering seemingly unrelated organizations or relationships impacted users’ experiences of BT. For example, cold calls became terrifying for people with dementia and made them reluctant to answer the phone for anyone.
  • Because this method follows people’s experiences over time, it allows researchers to place individual pain points and problems in the context of a broader experience.
  • Because the data is in first person and in the moment, it moved people emotionally. Data was shared with call center staff and managers, and they found it compelling. It was an emotional human story told in one’s own words. It invited decision makers to walk in other people’s shoes.

On Our Radar’s future projects include working in Sierra Leone with local researchers to understand how households are changing their practices post-Ebola and a major piece of research with the London School of Hygiene and Tropical Medicine in Malaysia and the Philippines to gain insight on people’s understanding of their health systems.

For more, find a video of Adam’s original presentation below!

Cost-benefit comparisons of IVR, SMS, and phone survey methods

In his MERL Tech London Lightning Talk back in February, Jan Liebnitzky of Firetail provided a research-backed assessment of the costs and benefits of using interactive voice response (IVR) surveys, SMS surveys, and phone surveys for MERL purposes.

First, he outlined the opportunities and challenges of using phones for survey research:

  • They are a good means for providing incentives. And research shows that incentives don’t have to be limited to airtime credits. The promise of useful information is sometimes the best motivator for respondents to participate in surveys.
  • They are less likely to reach subgroups. Though mobile phones are ubiquitous, one challenge is that groups like women, illiterate people and people in low-connectivity areas do not always have access to them. Thus, phones may not be as effective as one would hope for reaching the people most often targeted by aid programs.
  • They are scalable and have expansive reach. Scripting and outsourcing phone-based surveys to call centers takes time and capacity. Fixed costs are high, while marginal costs for each new question or respondent are low. This means that they can be cost effective (compared to on-the-ground surveys) if implemented at a large scale or in remote and high-risk areas with problematic access.

Then, Jan shared some strategies for using phones for MERL purposes:

1. Interactive Voice Response Surveys

    • These are pre-recorded and automated surveys. Respondents can reply to them by voice or with the numerical keypad.
    • IVR has been used in interactive radio programs in Tanzania, where listening posts were established for the purpose of interacting with farmers. Listening posts are multi-channel, web-based platforms that gather and analyze feedback and questions from farmers who listen to particular radio shows. The radio station runs the IVR, and farmers can call in to the radio show to submit their questions or responses. These are effective because they are run through trusted radio shows. However, it is important that farmers receive answers to the questions they ask, as this incentivizes future participation.

2. SMS Surveys

    • These make use of mobile messaging capabilities to send questions and receive answers. Usually, the SMS survey respondent will either choose between fixed multiple choice answers or write a freeform response. Responses, however, are limited to 160 characters.
    • One example of this is U-Reporter, a free SMS social monitoring tool for community participation in Uganda. Polls are sent to U-Reporters who answer back in real time, and the results are then shared back with the community.

3. Phone Surveys

    • Phone surveys are run through call centers by enumerators. They function like a face-to-face interview, but over the phone.
    • As an example, phone surveys were used as a monitoring tool by an agriculture extension services provider. Farmers in the area subscribed to receive texts from the provider with tips about when and how to plant crops. From the list of subscribers, prospective respondents were sampled and in-country call centers were contracted to call up to 1,000 service users to inquire about quality of service, behaviour changes and adoption of new farming technologies.
    • The challenges here were that the data were only as good as the call staff’s training. Also, there was an 80% drop-off rate, partly due to the language limitations of the call staff.

Finally, Jan provided a rough cost and effectiveness assessment for each method:

  • IVR survey: medium cost, high response
  • SMS survey: low cost, low response
  • Phone survey: high cost, medium response

Jan closed with a question: What is the value of these methods for MERL?

His answer: The surveys are quick and dirty and, to their merit, they produce timely data from remote areas at a reasonable cost. If the data is made use of, it can be effective for monitoring. However, these methods are not yet adequate for use in evaluation.

For more, watch Jan’s Lightning Talk below!

Focus on the right users to avoid an M&E apocalypse

In his MERL Tech London Lightning Talk, George Flatters from the Open Data Institute told us that M&E is extractive. “It takes data from poor communities, it refines it, and it sells it to rich communities,” he noted, and this process is unsustainable. The ease of deploying a survey means that there are more and more surveys being administered. This leads to survey fatigue, and when people stop wanting to take surveys, the data quality suffers, leading to an M&E apocalypse.

George outlined four ways to guard against doomsday:

1) Understand the problem–who is doing what, where?

At the moment, no one can be totally sure about which NGOs are doing what data collection and where. What is needed is the Development equivalent of the Humanitarian Data Exchange–a way to centralize and share all collected Development data. Besides the International Household Survey Catalog and NGO Aid Map (which serve a similar function, but to a limited degree), no such central location exists. With it, the industry could avoid duplication and maximize the use of its survey-administering resources.

2) Share more and use existing data

Additionally, with access to a large and comprehensive database such as this, the industry could greatly expand the scope of analysis done with the same set of data. This, of course, should be paired with the appropriate privacy considerations. For example, the data should be anonymized. Generally, a balance must be struck between accessibility and ethics. The Open Data Institute has a useful framework for thinking about how different data should be governed and shared.

3) Focus on the right users

One set of users is the data-collectors at the head office of an NGO. There are M&E solutions that will make their lives easier. However, attention must also be given to the people in communities providing the data. We need to think about how to make their lives easier as well.

4) Think like a multinational tech corporation (and/or get their data)

These corporations do not sit there and think about how to extract the maximum amount of data; they consider how they can provide quality services that will attract customers. Most of their data is obtained through the provision of services. Similarly, the question here should be, “what M&E services can we provide that generate data as a byproduct?” Examples include: cash transfers, health visits, app download and usage, and remote watch sensing.

These principles can help minimize the amount of effort spent on extracting data, alleviate the strain placed on those who provide the data, and stave off the end of days for a little longer.

Watch George’s Lightning Talk for some additional tips!

Exploring Causality in Complex Situations using EvalC3

By Hur Hassnain, Monitoring, Evaluation, Accountability and Learning Adviser, War Child UK

At the 2017 MERL Tech London conference, my team and I gave a presentation that addressed the possibilities for and limitations of evaluating complex situations using simple Excel-based tools. The question we explored was: can Excel help us manipulate data to create predictive models and suggest promising avenues to project success? Our basic answer was “not yet,” at least not to its full extent. However, there are people working with accessible software like Excel to make analysis simpler for evaluators with less technical expertise.

In our presentation, Rick Davies, Mark Skipper and I showcased EvalC3, an Excel-based evaluation tool that enables users to easily identify sets of attributes in a project dataset and to then compare and evaluate the relevance of these attributes to achieving the desired outcome. In other words, it helps answer the question ‘what combination of factors helped bring about the results we observed?’ In the presentation, after we explained what EvalC3 is and gave a live demonstration of how it works, we spoke about our experience using it to analyze real data from a UNICEF-funded War Child UK project in Afghanistan – a project that helps children who have been deported back to Afghanistan from Iran.

Our team first learned of EvalC3 when, upon returning from a trip to our Afghanistan country programme, we discussed how our M&E team in Afghanistan uses Excel for storing and analysing data but is not able to use the software to explore or evaluate complex causal configurations. We reached out to Rick with this issue, and he introduced us to EvalC3. It sounded like the solution to our problem, and our M&E officer in Afghanistan decided to test it by using it to dig deeper into an Excel database he’d created to store data on one thousand children who were registered when they were deported to Afghanistan.  

Rick, Hosain Hashmi (our M&E Officer in Afghanistan) and I formed a working group on Skype to test drive EvalC3. First, we needed to clean the data. To do this, we asked our social workers to contact the children and their caretakers to collect important missing data. Missing data is a common problem when collecting data in fragile and conflict affected contexts like those where War Child works. Fortunately, we found that EvalC3 algorithms can work with some missing data, with the tradeoff being slightly less accurate measures of model performance. Compare this to other algorithms (like Quine-McCluskey used in QCA) which do not work at all if the data is missing for some variables. We also had to reduce the number of dimensions we used. If we did not, there could be millions of combinations that could be possible outcome predictors, and an algorithm could not search all of these possibilities in a reasonable span of time. This exercise spoke to M. A. Munson’s theory that “model building only consumes 14% of the time spent on a typical [data mining] project; the remaining time is spent on the pre and post processing steps”.
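
To give a flavor of the kind of attribute-combination search EvalC3 automates inside Excel, here is a rough Python sketch (not the tool itself) that enumerates combinations of binary attributes and scores each combination as a predictor of an outcome; the file name and column names are invented stand-ins for the project dataset.

```python
from itertools import combinations
import csv

# Assumed input: a CSV of cases with 0/1 attribute and outcome columns.
with open("deportee_followup.csv") as f:
    rows = list(csv.DictReader(f))

attributes = ["lives_with_friends", "did_farm_work_in_iran", "completed_vocational_training"]
outcome = "returned_to_iran"

results = []
for k in range(1, len(attributes) + 1):
    for combo in combinations(attributes, k):
        # A case is 'predicted' if it has every attribute in the combination.
        predicted = [all(row[a] == "1" for a in combo) for row in rows]
        actual = [row[outcome] == "1" for row in rows]
        tp = sum(1 for p, a in zip(predicted, actual) if p and a)
        fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
        fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
        precision = tp / (tp + fp) if tp + fp else 0.0   # accuracy of the prediction
        coverage = tp / (tp + fn) if tp + fn else 0.0    # share of outcome cases covered
        results.append((precision, coverage, combo))

# List the attribute combinations that best predict the outcome.
for precision, coverage, combo in sorted(results, reverse=True)[:5]:
    print(f"{' AND '.join(combo)}: precision={precision:.2f}, coverage={coverage:.2f}")
```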

With a few weeks of work on the available dataset of children deported from Iran, we found that the children who are most likely to go back to Iran for economic purposes are mainly the children who:

  • Are living with friends (instead of with relatives/caretakers)
  • Had not been doing farming work when they were in Iran
  • Had not completed three months of vocational training
  • Are from adult headed households (instead of from child headed households).

As the project is still ongoing, we will continue to  investigate the cases covered by the model described here in order to better understand the causal mechanisms at work.

This experience of using EvalC3 encouraged War Child to refine the data it routinely collects with a view to developing a better understanding of where War Child interventions help or don’t help. The in-depth data-mining process and analysis conducted by the national M&E Officer and programmes team resulted in improved understanding of the results we can achieve by analyzing quality data. EvalC3 is a user-friendly evaluation tool that is not only useful for improving current programmes but also for designing new, evidence-based programmes.

Using an online job platform to understand gender dynamics in the Mozambican informal labour market

Oxford Policy Management Consultant Paul Jasper’s self-professed professional passion is exploring new ways to use data to improve policy making in the Global South. At MERL Tech London in 2016, he presented on two tech-driven initiatives in Mozambique – Muva and Biscate. In his talk, he explained why these are two great examples of how programming and social policy can benefit from innovations and data coming from the tech sector.

Muva is a program that aims to test new ways of improving women’s access to economic opportunities. It works primarily with women and girls in urban areas of Mozambique and across all sectors of the economy. Its beneficiaries include employees, self-employed people, and micro-entrepreneurs. While conducting its work, the program recognized that one key characteristic of the Mozambican economy is a pervasive informal sector: over 90% of new jobs in sub-Saharan Africa were produced in the informal sector. The challenge for organizations like Muva is that, because the sector is informal, there is very little quantitative data about it, and analysts are not quite sure how it works, what dynamics are in operation, or what role gender plays.

This is where UX, a startup based in Maputo, was able to step in. The startup noted that the majority of jobs in the informal sector were assigned in an old-fashioned way: people put up signs with a telephone number advertising their service. UX came up with a USSD-based solution called Biscate. Biscate is a service that allows workers to register on the platform using basic mobile phones (few people have access to smartphones) and set up advertising profiles with their educational status and skills. Clients can then check the platform to find people offering a service they need and can leave reviews and ratings about the service they received.

UX has soft-launched the platform and registered 30,000 workers across the country. Since its launch, Biscate has provided unique data and insights into the labor market and helped close the data and information gap that plagued the informal sector. Muva can use this info to improve women’s access to opportunities. The organizations have partnered and started a pilot with three objectives:

  1. To understand dynamics in the informal labor market.
  2. To test whether Muva’s approaches can be used to help women become more successful in the informal sector.
  3. To influence policy makers that want to develop similar programs by producing valuable and up to date lessons.

See Paul’s presentation below if you’d like to learn more!