
MERL Tech London 2018 Agenda is out!

We’ve been working hard over the past several weeks to finish up the agenda for MERL Tech London 2018, and it’s now ready!

We’ve got workshops, panels, discussions, case studies, lightning talks, demos, community building, socializing, and an evening reception with a Fail Fest!

Topics range from mobile data collection, to organizational capacity, to learning and good practice for information systems, to data science approaches, to qualitative methods using mobile ethnography and video, to biometrics and blockchain, to data ethics and privacy and more.

You can search the agenda to find the topics, themes and tools that are most interesting, identify the sessions that are most relevant to your organization’s size and approach, pick the session methodologies that you prefer (some of us like participatory and some of us like listening), and learn more about the different speakers and facilitators and their work.

Tickets are going fast, so be sure to snap yours up before it’s too late! (Register here!)

View the MERL Tech London schedule & directory.

 

DataDay TV: MERL Tech Edition

What data superpower would you ask for? How would you describe data to your grandparents? What’s the worst use of data you’ve come across? 

These are a few of the questions that TechChange’s DataDay TV Show tackles in its latest episode.

The DataDay Team (Nick Martin, Samhir Vasdev, and Priyanka Pathak) traveled to MERL Tech DC last September to ask attendees some tough data-related questions. They came away with insightful, unusual, and occasionally funny answers.

If you’re a fan of discussing data, technology and MERL, join us at MERL Tech London on March 19th and 20th. 

Tickets are going fast, so be sure to register soon if you’d like to attend!

If you want to take your learning to the next level with a full-blown course, TechChange has a great 2018 schedule, including topics like blockchain, AI, digital health, data visualization, e-learning, and more. Check out their course catalog here.

What about you, what data superpower would you ask for?

 

M&E software – 8 Tips on How to Talk to IT Folks


You want to take your M&E system one step further and introduce proper M&E software? That’s great, because software has the potential to make the monitoring process more efficient and transparent, reduce errors and produce more accurate data. But how do you go about it? You have three options:

  1. You build your own system, for example in Microsoft Excel;
  2. You purchase an M&E software package off-the-shelf;
  3. You hire an IT consultant to set up a customized M&E system according to your organization’s specific requirements.

If options one and two do not work out for you, you can hire consultants to develop a solution for you. You will probably start a public tender to find the most suitable IT company to entrust with this task. While there are a lot of things to pay attention to when formulating the Terms of Reference (TOR), I would like to give you some tips specifically about communication with the hired IT consultants. These insights come from years of experience on both sides: being the party who wants a tool and needs to describe it to the implementing programmers, and being the IT guy (or rather lady) who implements Excel and web-based database tools for M&E.

To be on the safe side, I recommend you work with this assumption: IT consultants have no clue about M&E. Few IT companies come from the development sector, as energypedia consult does, and are familiar with M&E concepts such as indicators, logframes and impact chains. To still get what you need, pay attention to the following communication tips:

  1. Take your time explaining what you need: Writing TOR takes time – but it takes even longer and becomes more costly when you hire somebody for something that is not thought through. If you don’t know all the details right from the start, get some expert assistance in formulating terms – it’s worthwhile.
  2. Use graphs: Instead of using words to describe your monitoring logic and the system you need, it is much easier to make graphs to depict the structure, user groups, linking of information, flow of monitoring data etc.
  3. Give examples: When unsure about how to put a feature into words, send a link or a screenshot of the function that you might have come across elsewhere and wish to have in your tool.
  4. Explain concepts and terminology: Many results frameworks work with the terms “input” and “output”. Most IT guys, however, will not have equipment and finished schools in mind, but rather data flows that consist of inputs and outputs. Make sure you clarify this. The term web-based monitoring, or web monitoring, is another source of misunderstanding: in the IT world, web monitoring refers to monitoring activity on the internet, for example website visits, or monitoring a server. That is probably not what you want when building an M&E system for, say, a good governance programme.
  5. Meet in person: In your budget calculation, allow for at least one workshop where you meet in person, for example a kick-off workshop in which you clarify your requirements. This is not only a possibility to ask each other questions, but also to get a feeling of the other party’s language and way of thinking.
  6. Maintain a dialogue: During the implementation phase, make sure to stay in regular touch with the programmers. Ask them to show you updates every once in a while so you can give feedback. If the programmers are heading in the wrong direction, you want to find out sooner rather than later.
  7. Document communication: When we implement web-based systems, we typically create a page within the web platform itself that outlines all the agreed steps. This list serves as a to-do list and an implementation protocol at the same time. It facilitates communication, particularly when multiple people are involved on both sides who are not always present in every meeting or phone call.
  8. Be prepared for misunderstandings: They happen. It’s normal. Plan for some buffer days before launching the final tool.

In general, the implementation phase should allow for some flexibility. As both parties learn from each other during the process, you should not be afraid to adjust initial plans, because the final tool will benefit greatly from it (if the contract has some flexibility). Big customized IT projects take some time.

If you need more advice on this matter and some more insights on setting up IT-based M&E systems, please feel free to contact me any time! In the past we have supported clients by setting up a prototype for their web-based M&E system with our flexible WebMo approach. During the prototype process the client learnt a lot, and afterwards it was quite easy for other developers to copy the prototype and migrate it to, for example, their Microsoft SharePoint environment (in case your IT guys don’t believe in Open Source or don’t want to host third-party software on their server).

Please leave a comment if you think I have missed an important communication rule.

Good luck!

Qualitative Coding: From Low Tech to High Tech Options

by Daniel Ramirez-Raftree, MERL Tech volunteer

In their MERL Tech DC session on qualitative coding, Charles Guedenet and Anne Laesecke from IREX together with Danielle de Garcia of Social Impact offered an introduction to the qualitative coding process followed by a hands-on demonstration on using Excel and Dedoose for coding and analyzing text.

They began by defining content analysis as any effort to make sense of qualitative data that takes a volume of qualitative material and attempts to identify core consistencies and meanings. More concretely, it is a research method that uses a set of procedures to make valid inferences from text. They also shared their thoughts on what makes for a good qualitative coding method.

In their view, it should:

  • consider what is already known about the topic being explored
  • be logically grounded in this existing knowledge
  • use existing knowledge as a basis for looking for evidence in the text being analyzed

With this definition laid out, they moved to a discussion about the coding process where they elaborated on four general steps:

  1. develop codes and a codebook
  2. decide on a sampling plan
  3. code your data (then go back and do it again!)
  4. test for reliability

Developing codes and a codebook is important for establishing consistency in the coding process, especially if there will be multiple coders working on the data. A good way to start developing these codes is to consider what is already known. For example, you can think about literature that exists on the subject you’re studying. Alternatively, you can simply turn to the research questions the project seeks to answer and use them as a guide for creating your codes. Beyond this, it is also useful to go through the content and think about what you notice as you read. Once a codebook is created, it will lend stability and some measure of objectivity to the project.

The next important issue is the question of sampling. When determining sample size, though a larger sample will yield more robust results, one must of course consider the practical constraints of time, cost and effort. Does the benefit of higher quality results justify the additional investment? Fortunately, the type of data will often inform sampling. For example, if there is a huge volume of data, it may be impossible to analyze it all, but it would be prudent to sample at least 30% of it. On the other hand, usually interview and focus group data will all be analyzed, because otherwise the effort of obtaining the data would have gone to waste.
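
For a concrete (if simplified) picture of what such a sampling plan might look like in practice, here is a minimal Python sketch that draws a 30% simple random sample from a large set of documents. The file names, the corpus size and the fixed random seed are assumptions for illustration, not part of the session.

```python
import random

# Hypothetical list of transcript files awaiting qualitative coding.
documents = [f"transcript_{i:03d}.txt" for i in range(1, 201)]

SAMPLE_FRACTION = 0.30  # "at least 30%" rule of thumb mentioned in the session
sample_size = max(1, round(len(documents) * SAMPLE_FRACTION))

random.seed(42)  # fixed seed so the sampling plan is reproducible
sampled_docs = random.sample(documents, sample_size)

print(f"Coding {sample_size} of {len(documents)} documents:")
for doc in sorted(sampled_docs):
    print(" -", doc)
```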

Regarding sampling method, session leads highlighted two strategies that produce sound results. One is systematic random sampling and the other is quota sampling–a method employed to ensure that the proportions of demographic group data are fairly represented.

Once these key decisions have been made, the actual coding can begin. Here, all coders should work from the same codebook and apply the codes to the same unit of analysis. Typical units of analysis are: single words, themes, sentences, paragraphs, and items (such as articles, images, books, or programs). Consistency is essential. A way to test the level of consistency is to have a 10% overlap in the content each coder analyzes and aim for 80% agreement between their coding of that content. If the coders are not applying the same codes to the same units, this could mean either that they are not trained properly or that the codebook needs to be altered.
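
Here is a minimal sketch, using invented code assignments, of how the overlap check might be computed for two coders against the 80% agreement target described above; a production workflow would more likely also report a chance-corrected statistic such as Cohen's kappa.

```python
# Hypothetical codes applied by two coders to the same 10 overlapping units.
coder_a = ["barrier", "barrier", "enabler", "enabler", "neutral",
           "barrier", "enabler", "neutral", "barrier", "enabler"]
coder_b = ["barrier", "enabler", "enabler", "enabler", "neutral",
           "barrier", "enabler", "barrier", "barrier", "enabler"]

matches = sum(a == b for a, b in zip(coder_a, coder_b))
agreement = matches / len(coder_a)

print(f"Intercoder agreement: {agreement:.0%}")
if agreement < 0.80:
    print("Below the 80% target: revisit coder training or the codebook.")
else:
    print("Meets the 80% target.")
```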

In a similar vein, the fourth step in the coding process is to test for reliability. Challenges in producing stable and consistent results in coding could include: using a unit of analysis that is too large for a simple code to be reliably applied, coding themes or concepts that are ambiguous, and coding nonverbal items. For each of these, the central problem is that the units of analysis leave too much room for subjective interpretation, which can introduce bias. Having a detailed codebook can help guard against this.

After giving an overview of the coding process, the session leads suggested a few possible strategies for data visualization. One is to use a word tree, which helps one look at the context in which a word appears. Another is a bubble chart, which is useful if one has descriptive data and demographic information. Thirdly, correlation maps are good for showing what sorts of relationships exist among the data. The leads suggested visiting the website stephanieevergreen.com/blog for more ideas about data visualization.

Finally, the leads covered low-tech and high-tech options for coding. On the low-tech end of the spectrum, paper and pen get the job done. They are useful when there are few data sources to analyze, when the coding is simple, and when there is limited tech literacy among the coders. Next up the scale is Excel, which works when there are few data sources and when the coders are familiar with Excel. Then the session leads closed their presentation with a demonstration of Dedoose, which is a qualitative coding tool with advanced capabilities like the capacity to code audio and video files and specialized visualization tools. In addition to Dedoose, the presenters mentioned NVivo and Atlas as other available qualitative coding software.

Despite the range of qualitative content available for analysis, a few core principles can help ensure that it is analyzed well; these include consistency and disciplined methodology. And if qualitative coding will be an ongoing part of your organization’s operations, there are several options for specialized software available for you to explore. [Click here for links and additional resources from the session.]

Data quality in the age of lean data

by Daniel Ramirez-Raftree, MERL Tech support team.

Evolving data collection methods call for evolving quality assurance methods. In their session titled Data Quality in the Age of Lean Data, Sam Schueth of Intermedia, Woubedle Alemayehu of Oxford Policy Management, Julie Peachey of the Progress out of Poverty Index, and Christina Villella of MEASURE Evaluation discussed problems, solutions, and ethics related to digital data collection methods. [Bios and background materials here]

Sam opened the conversation by comparing the quality assurance and control challenges in paper assisted personal interviewing (PAPI) to those in digital assisted personal interviewing (DAPI). Across both methods, the fundamental problem is that the data that is delivered is a black box. It comes in, it’s turned into numbers and it’s disseminated, but in this process alone there is no easily apparent information about what actually happened on the ground.

During the age of PAPI, this was dealt with by sending independent quality control teams to the field to review the paper questionnaire that was administered and perform spot checks by visiting random homes to validate data accuracy. Under DAPI, the quality control process becomes remote. Survey administrators can now schedule survey sessions to be recorded automatically and without the interviewer’s knowledge, thus effectively gathering a random sample of interviews that can give them a sense of how well the sessions were conducted. Additionally, it is now possible to use GPS to track the interviewers’ movements and verify the range of households visited. The key point here is that with some creativity, new technological capacities can be used to ensure higher data quality.
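
As a rough sketch of what those remote checks could look like, the snippet below randomly flags a share of interviews for audio auditing and compares recorded GPS points against the assigned household locations. The 10% audit rate, the 250 m distance threshold and the data structure are all assumptions made for this example.

```python
import math
import random

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in metres (haversine formula)."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical interview records: where the household is assigned vs. where
# the device actually recorded the interview taking place.
interviews = [
    {"id": "HH-001", "assigned": (-1.2921, 36.8219), "recorded": (-1.2923, 36.8221)},
    {"id": "HH-002", "assigned": (-1.3032, 36.7073), "recorded": (-1.2500, 36.8000)},
]

AUDIT_RATE = 0.10        # record roughly 10% of sessions for review (assumed)
MAX_DISTANCE_M = 250     # flag interviews taken far from the household (assumed)

random.seed(7)
for iv in interviews:
    iv["record_audio"] = random.random() < AUDIT_RATE
    iv["gps_flag"] = distance_m(*iv["assigned"], *iv["recorded"]) > MAX_DISTANCE_M
    print(iv["id"], "| audio audit:", iv["record_audio"], "| GPS flag:", iv["gps_flag"])
```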

Woubedle presented next and elaborated on the theme of quality control for DAPI. She brought up the point that data quality checks can be automated, but that this requires pre-survey-implementation decisions about what indicators to monitor and how to manage the data. The amount of work put into programming this upfront design has a direct effect on the ultimate data quality.

One useful tool is a progress indicator. Here, one collects information on trends such as the number of surveys attempted compared to those completed. Processing this data could lead to further questions about whether there is a pattern in the populations that did or did not complete the survey, thus alerting researchers to potential bias. Additionally, one can calculate the average time taken to complete a survey and use it to identify outliers that took too little or too much time to finish. Another good practice is to embed consistency checks in the survey itself; for example, making certain questions required or including two questions that, if answered in a particular way, would be logically contradictory, thus signaling a problem in either the question design or the survey responses. One more practice could be to apply constraints to the survey, depending on the households one is working with.
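
A minimal sketch of how a few of these automated checks might be expressed, using invented survey records; the field names and duration thresholds are assumptions and would need to be set from your own survey design and pilot data.

```python
# Hypothetical survey records exported from a digital data collection tool.
surveys = [
    {"id": 1, "completed": True,  "duration_min": 22, "age": 34, "years_in_school": 9},
    {"id": 2, "completed": True,  "duration_min": 4,  "age": 12, "years_in_school": 16},
    {"id": 3, "completed": False, "duration_min": 2,  "age": 45, "years_in_school": 11},
    {"id": 4, "completed": True,  "duration_min": 25, "age": 29, "years_in_school": 12},
    {"id": 5, "completed": True,  "duration_min": 71, "age": 51, "years_in_school": 8},
]

# Progress indicator: attempted vs. completed.
completed = [s for s in surveys if s["completed"]]
print(f"Completion rate: {len(completed) / len(surveys):.0%}")

# Duration check: flag interviews that were implausibly quick or slow
# (thresholds are assumptions; set them from your own pilot data).
MIN_MINUTES, MAX_MINUTES = 10, 60
for s in completed:
    if not MIN_MINUTES <= s["duration_min"] <= MAX_MINUTES:
        print(f"Survey {s['id']}: unusual duration ({s['duration_min']} min)")

# Embedded consistency check: years of schooling should not exceed age - 4.
for s in completed:
    if s["years_in_school"] > s["age"] - 4:
        print(f"Survey {s['id']}: schooling inconsistent with reported age")
```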

After this discussion, Julie spoke about research that was done to assess the quality of different methods for measuring the Progress out of Poverty Index (PPI). She began by explaining that the PPI is a household level poverty measurement tool unique to each country. To create it, the answers to 10 questions about a household’s characteristics and asset ownership are scored to compute the likelihood that the household is living below the poverty line. It is a simple, yet effective method to evaluate household level poverty. The research project Julie described set out to determine if the process of collecting data to create the PPI could be made less expensive by using SMS, IVR or phone calls.
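
To make the scoring mechanics concrete, here is a toy sketch of how ten scored answers might be summed and mapped to a poverty likelihood. The questions, point values and lookup table are invented; real PPI scorecards are country-specific and published separately.

```python
# Toy scorecard: each answer to the 10 questions carries a point value.
# Questions, points and the lookup table below are invented for illustration.
answers = {
    "roof_material": 6,
    "rooms_in_dwelling": 4,
    "owns_radio": 3,
    "owns_bicycle": 0,
    "cooking_fuel": 5,
    "children_in_school": 7,
    "household_size": 2,
    "owns_mobile_phone": 8,
    "water_source": 4,
    "floor_material": 3,
}

score = sum(answers.values())  # scores fall on a 0-100 scale

# Lookup table mapping score bands to the likelihood of living below
# the poverty line (illustrative numbers only).
likelihood_table = [(0, 0.85), (20, 0.60), (40, 0.35), (60, 0.15), (80, 0.05)]
likelihood = next(p for lo, p in reversed(likelihood_table) if score >= lo)

print(f"PPI score: {score}, estimated poverty likelihood: {likelihood:.0%}")
```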

Grameen Foundation conducted the study and tested four survey methods for gathering data: 1) in-person and at home, 2) in-person and away from home, 3) in-person and over the phone, and 4) automated and over the phone. Further, it randomized key aspects of the study, including the interview method and the enumerator.

Ultimately, Grameen Foundation determined that the interview method does affect completion rates, responses to questions, and the resulting estimated poverty rates. However, the differences in estimated poverty rates were likely not due to the method itself, but rather to completion rates (which were affected by the method). Thus, as long as completion rates don’t differ significantly, neither will the results. Given that the in-person at home and in-person away from home surveys had similar completion rates (84% and 91% respectively), either could be feasibly used with little deviation in output. On the other hand, in-person over the phone surveys had a 60% completion rate and automated over the phone surveys had a 12% completion rate, making both methods fairly problematic. And with this understanding, developers of the PPI have an evidence-based sense of the quality of their data.

This case study illustrates the possibility of testing data quality before any changes are made to collection methods, which is a powerful strategy for minimizing the use of low quality data.

Christina closed the session with a presentation on ethics in data collection. She spoke about digital health data ethics in particular, which is the intersection of public health ethics, clinical ethics, and information systems security. She grounded her discussion in MEASURE Evaluation’s experience thinking through ethical problems, which include: the vulnerability of devices where data is collected and stored, the privacy and confidentiality of the data on these devices, the effect of interoperability on privacy, data loss if the device is damaged, and the possibility of wastefully collecting unnecessary data.

To explore these issues, MEASURE conducted a landscape assessment in Kenya and Tanzania and analyzed peer reviewed research to identify key themes for ethics. Five themes emerged: 1) legal frameworks and the need for laws, 2) institutional structures to oversee implementation and enforcement, 3) information systems security knowledge (especially for countries that may not have the expertise), 4) knowledge of the context and users (are clients comfortable with their data being used?), and 5) incorporating tools and standard operating procedures.

Based on this framework, MEASURE has made progress towards rolling out tools that can help institute a stronger ethics infrastructure. They’ve been developing guidelines that countries can use to develop policies, building health informatics capacity through a university course, and working with countries to strengthen their health information systems governance structures.

Finally, Christina explained her take on how ethics are related to data quality. In her view, it comes down to trust. If a device is lost, this may lead to incomplete data. If the clients are mistrustful, this could lead to inaccurate data. If a health worker is unable to check or clean data, this could create a lack of confidence. Each of these risks can lead to the erosion of data integrity.

Register for MERL Tech London, March 19-20th 2018! Session ideas due November 10th.

MERL Tech and the World of ICT Social Entrepreneurs (WISE)

by Dale Hill, an economist/evaluator with over 35 years of experience in development and humanitarian work. Dale led the session on “The growing world of ICT Social Entrepreneurs (WISE): Is social Impact significant?” at MERL Tech DC 2018.

Roger Nathanial Ashby of OpenWise and Christopher Robert of Dobility share experiences at MERL Tech.

What happens when evaluators trying to build bridges with new private sector actors meet real social entrepreneurs? A new appreciation for the dynamic “World of ICT Social Entrepreneurs (WISE)” and the challenges they face in marketing, pricing, and financing (not to mention measurement of social impact).

During this MERL Tech session on WISE, Dale Hill, evaluation consultant, presented grant funded research on measurement of social impact of social entrepreneurship ventures (SEVs) from three perspectives. She then invited five ICT company CEOs to comment.

The three perspectives are:

  • the public: How to hold companies accountable, particularly if they have chosen to be legal or certified “benefit corporations”?
  • the social entrepreneurs, who are plenty occupied trying to reach financial sustainability or profit goals, while also serving the public good; and
  • evaluators, who see the important influence of these new actors, but know their professional tools need adaptation to capture their impact.

Dale’s introduction covered overlapping definitions of various categories of SEVs, including legally defined “benefit corporations”, and “B Corps”, which are intertwined with the options of certification available to social entrepreneurs. The “new middle” of SEVs are on a spectrum between for-profit companies on one end and not-for profit organizations on the other. Various types of funders, including social impact investors, new certification agencies, and monitoring and evaluation (M&E) professionals, are now interested in measuring the growing social impact of these enterprises. A show of hands revealed that representatives of most of these types of actors were present at the session.

The five social entrepreneur panelists all had ICT businesses with global reach, but they varied in legal and certification status and the number of years operating (1 to 11). All aimed to deploy new technologies to non-profit organizations or social sector agencies on high value, low price terms. Some had worked in non-profits in the past and hoped that venture capital rather than grant funding would prove easier to obtain. Others had worked for Government and observed the need for customized solutions, which required market incentives to fully develop.

The evaluator and CEO panelists’ identification of challenges converged in some cases:

  • maintaining affordability and quality when using market pricing
  • obtaining venture capital or other financing
  • worry over “mission drift” – if financial sustainability imperatives or shareholder profit maximization preferences prevail over founders’ social impact goals; and
  • the still-present digital divide when serving global customers (insufficient bandwidth, affordability issues, limited small business capital in some client countries).

New issues raised by the CEOs (and some social entrepreneurs in the audience) included:

  • the need to provide incentives to customers to use quality assurance or security features of software, to avoid falling short of achieving the SEV’s “public good” goals;
  • the possibility of hostile takeover, given high value of technological innovations;
  • the fact that mention of a “social impact goal” was a red flag to some funders who then went elsewhere to seek profit maximization.

There was also a rich discussion on the benefits and costs of obtaining certification: it was a useful “branding and market signal” to some consumers, but a negative one to some funders; also, it posed an added burden on managers to document and report social impact, sometimes according to guidelines not in line with their preferences.

Surprises?

a) Despite the “hype”, social impact investment funding proved elusive to the panelists. Options for them included: sliding scale pricing; establishment of a complementary for-profit arm; or debt financing;

b) Many firms were not yet implementing planned monitoring and evaluation (M&E) programs, despite M&E being one of their service offerings; and

c) The legislation on reporting social impact of benefit corporations among the 31 states varies considerably, and the degree of enforcement is not clear.

A conclusion for evaluators: Social entrepreneurs’ use of market solutions indeed provides an evolving, dynamic environment which poses more complex challenges for measuring social impact, and requires new criteria and tools, ideally timed with an understanding of market ups and downs, and developed with full participation of the business managers.

Discrete choice experiment (DCE) to generate weights for a multidimensional index

In his MERL Tech Lightning Talk, Simone Lombardini, Global Impact Evaluation Adviser, Oxfam, discussed his experience with an innovative method for applying tech to help determine appropriate metrics for measuring concepts that escape easy definition. To frame his talk, he referenced Oxfam’s recent experience with using discrete choice experiments (DCE) to establish a strategy for measuring women’s empowerment.

Two methods already exist, Simone points out, for transforming soft concepts into hard metrics. First, the evaluator could assume full authority and responsibility over defining the metrics. Alternatively, the evaluator could design the evaluation so that relevant stakeholders are incorporated into the process and use their input to help define the metrics.

Though both methods are common, they are missing (for practical reasons) the level of mass input that could make them truly accurate reflections of the social perception of whatever concept is being considered. Tech has a role to play in scaling the quantity of input that can be collected. If used correctly, this could lead to better evaluation metrics.

Simone described this approach as “context-specific” and “multi-dimensional.” The process starts by defining the relevant characteristics (such as those found in empowered women) in their social context, then translating these characteristics into indicators, and finally combining indicators into one empowerment index for evaluating the project.

After the characteristics are defined, a discrete choice experiment can be used to determine its “weight” in a particular social context. A discrete choice experiment (DCE) is a technique that’s frequently been used in health economics and marketing, but not much in impact evaluation. To implement a DCE, researchers present different hypothetical scenarios to respondents and ask them to decide which one they consider to best reflect the concept in question (i.e. women’s empowerment). The responses are used to assess the indicators covered by the DCE, and these can then be used to develop an empowerment index.
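
As a greatly simplified sketch of the idea, the snippet below derives crude indicator weights from how often each characteristic appears in the profiles respondents chose, and then combines indicator scores into an index. The characteristics, choice data and weighting rule are invented for illustration; a real DCE analysis would typically estimate weights with a conditional logit or similar choice model.

```python
from collections import defaultdict

# Each hypothetical choice task shows two profiles of an "empowered woman",
# described by the characteristics they include; the respondent picks one.
choice_tasks = [
    {"options": [{"earns_income", "controls_assets"},
                 {"participates_in_community", "earns_income"}], "chosen": 0},
    {"options": [{"controls_assets", "participates_in_community"},
                 {"earns_income", "freedom_of_movement"}], "chosen": 1},
    {"options": [{"freedom_of_movement", "controls_assets"},
                 {"participates_in_community", "earns_income"}], "chosen": 0},
]

shown = defaultdict(int)
picked = defaultdict(int)
for task in choice_tasks:
    for i, option in enumerate(task["options"]):
        for characteristic in option:
            shown[characteristic] += 1
            if i == task["chosen"]:
                picked[characteristic] += 1

# Crude weight: share of the time a characteristic is in the chosen profile,
# normalised so the weights sum to 1.
raw = {c: picked[c] / shown[c] for c in shown}
total = sum(raw.values())
weights = {c: w / total for c, w in raw.items()}

# The weights then combine binary indicator scores into an empowerment index.
indicators = {"earns_income": 1, "controls_assets": 0,
              "participates_in_community": 1, "freedom_of_movement": 1}
index = sum(weights[c] * indicators[c] for c in indicators)
print({c: round(w, 2) for c, w in weights.items()}, "index:", round(index, 2))
```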

This process was integrated into the data collection process, adding about 10 minutes to the end of a one-hour survey, and was made practicable by the ubiquity of smartphones. The results from Oxfam’s trial run using this method are still being analyzed. For more on this, watch Lombardini’s video below!

Community-led mobile research–What could it look like?

Adam Groves, Head of Programs at On Our Radar, gave a presentation at MERL Tech London in February where he elaborated on a new method for collecting qualitative ethnographic data remotely.

The problem On Our Radar sought to confront, Adam declares, is the cold and impenetrable bureaucratic machinery of complex organizations. To many people, the unresponsiveness and inhumanity of the bureaucracies that provide them with services is dispiriting, and this is a challenge to overcome for anyone that wants to provide a quality service.

On Our Radar’s solution is to enable people to share their real-time experiences of services by recording audio and SMS diaries with their basic mobile phones. Because of the intimacy they capture, these first-person accounts have the capacity to grab the attention of the people behind services and make them listen to and experience the customer’s thoughts and feelings as they happened.

Responses obtained from audio and SMS diaries are different from those obtained from other qualitative data collection methods because, unlike solutions that crowdsource feedback, these diaries contain responses from a small group of trained citizen reporters that share their experiences in these diaries over a sustained period of time. The product is a rich and textured insight into the reporters’ emotions and priorities. One can track their journeys through services and across systems.

On Our Radar worked with British Telecom (BT) to implement this technique. The objective was to help BT understand how their customers with dementia experience their services. Over a few weeks, forty people living with dementia recorded audio diaries about their experiences dealing with big companies.

Adam explained how the audio diary method was effective for this project:

  • Because diaries and dialogues are in real time, they captured emotional highs and lows (such as the anxiety of picking up the phone and making a call) that would not be recalled in after-the-fact interviews.
  • Because diaries are focused on individuals and their journeys instead of on discrete interactions with specific services, they showed how encountering seemingly unrelated organizations or relationships impacted users’ experiences of BT. For example, cold calls became terrifying for people with dementia and made them reluctant to answer the phone for anyone.
  • Because this method follows people’s experiences over time, it allows researchers to place individual pain points and problems in the context of a broader experience.
  • Because the data is in first person and in the moment, it moved people emotionally. Data was shared with call center staff and managers, and they found it compelling. It was an emotional human story told in one’s own words. It invited decision makers to walk in other people’s shoes.

On Our Radar’s future projects include working in Sierra Leone with local researchers to understand how households are changing their practices post-Ebola and a major piece of research with the London School of Hygiene and Tropical Medicine in Malaysia and the Philippines to gain insight on people’s understanding of their health systems.

For more, find a video of Adam’s original presentation below!

Cost-benefit comparisons of IVR, SMS, and phone survey methods

In his MERL Tech London Lightning Talk back in February, Jan Liebnitzky of Firetail provided a research-backed assessment of the costs and benefits of using interactive voice response surveys (IVR), SMS surveys, and phone surveys for MERL purposes.

First, he outlined the opportunities and challenges of using phones for survey research:

  • They are a good means for providing incentives. And research shows that incentives don’t have to be limited to airtime credits. The promise of useful information is sometimes the best motivator for respondents to participate in surveys.
  • They are less likely to reach subgroups. Though mobile phones are ubiquitous, one challenge is that groups like women, illiterate people and people in low-connectivity areas do not always have access to them. Thus, phones may not be as effective as one would hope for reaching the people most often targeted by aid programs.
  • They are scalable and have expansive reach. Scripting and outsourcing phone-based surveys to call centers takes time and capacity. Fixed costs are high, while marginal costs for each new question or respondent are low. This means that they can be cost effective (compared to on-the-ground surveys) if implemented at a large scale or in remote and high-risk areas with problematic access.

Then, Jan shared some strategies for using phones for MERL purposes:

1. Interactive Voice Response Surveys

    • These are pre-recorded and automated surveys. Respondents can reply to them by voice or with the numerical keypad.
    • IVR has been used in interactive radio programs in Tanzania, where listening posts were established for the purpose of interacting with farmers. Listening posts are multi-channel, web-based platforms that gather and analyze feedback and questions from farmers who listen to particular radio shows. The radio station runs the IVR, and farmers can call in to the radio show to submit their questions or responses. These are effective because they are run through trusted radio shows. However, it is important that farmers receive answers to the questions they ask, as this incentivizes future participation.

2. SMS Surveys

    • These make use of mobile messaging capabilities to send questions and receive answers. Usually, the SMS survey respondent will either choose between fixed multiple choice answers or write a freeform response. Responses, however, are limited to 160 characters.
    • One example of this is U-Reporter, a free SMS social monitoring tool for community participation in Uganda. Polls are sent to U-Reporters who answer back in real time, and the results are then shared back with the community.

3. Phone Surveys

    • Phone surveys are run through call centers by enumerators. They function like a face-to-face interview, but over the phone.
    • As an example, phone surveys were used as a monitoring tool by an agriculture extension services provider. Farmers in the area subscribed to receive texts from the provider with tips about when and how to plant crops. From the list of subscribers, prospective respondents were sampled and in-country call centers were contracted to call up to 1,000 service users to inquire about quality of service, behaviour changes and adoption of new farming technologies.
    • The challenges here were that the data were only as good as the call staff’s training. Also, there was an 80% drop-off rate, partly due to the language limitations of call staff.

Finally, Jan provided a rough cost and effectiveness assessment for each method:

  • IVR survey: medium cost, high response
  • SMS survey: low cost, low response
  • Phone survey: high cost, medium response

Jan closed with a question: What is the value of these methods for MERL?

His answer: The surveys are quick and dirty and, to their merit, they produce timely data from remote areas at a reasonable cost. If the data is made use of, it can be effective for monitoring. However, these methods are not yet adequate for use in evaluation.

For more, watch Jan’s Lightning Talk below!

Focus on the right users to avoid an M&E apocalypse

In his MERL Tech London Lightning Talk, George Flatters from the Open Data Institute told us that M&E is extractive. “It takes data from poor communities, it refines it, and it sells it to rich communities,” he noted, and this process is unsustainable. The ease of deploying a survey means that more and more surveys are being administered. This leads to survey fatigue, and when people stop wanting to take surveys, data quality suffers, leading to an M&E apocalypse.

George outlined four ways to stave off doomsday:

1) Understand the problem–who is doing what, where?

At the moment, no one can be totally sure about which NGOs are doing what data collection and where. What is needed is the Development equivalent of the Humanitarian Data Exchange–a way to centralize and share all collected Development data. Besides the International Household Survey Catalog and NGO Aid Map (which serve a similar function, but to a limited degree), no such central location exists. With it, the industry could avoid duplication and maximize the use of its survey-administering resources.

2) Share more and use existing data

Additionally, with access to a large and comprehensive database such as this, the industry could greatly expand the scope of analysis done with the same set of data. This, of course, should be paired with the appropriate privacy considerations. For example, the data should be anonymized. Generally, a balance must be struck between accessibility and ethics. The Open Data Institute has a useful framework for thinking about how different data should be governed and shared.

3) Focus on the right users

One set of users is the data-collectors at the head office of an NGO. There are M&E solutions that will make their lives easier. However, attention must also be given to the people in communities providing the data. We need to think about how to make their lives easier as well.

4) Think like a multinational tech corporation (and/or get their data)

These corporations do not sit there and think about how to extract the maximum amount of data; they consider how they can provide quality services that will attract customers. Most of their data is obtained through the provision of services. Similarly, the question here should be, “What M&E services can we provide that yield data as a byproduct?” Examples include: cash transfers, health visits, app download & usage, and remote watch sensing.

These principles can help minimize the effort spent on extracting data, alleviate the strain placed on those who provide it, and stave off the end of days for a little longer.

Watch George’s Lightning Talk for some additional tips!