
Five lessons learned from applying design thinking to data use

by Amanda Makulec, Data Visualization Lead, Excella Consulting and Barb Knittel, Research, Monitoring & Evaluation Advisor, John Snow Inc. Amanda and Barb led “How the Simpsons Make Data Use Happen” at MERL Tech DC.


Workshopping ways to make data use happen.

Human-centered design isn’t a new concept. We’ve heard engineers, from aerospace to software, quietly snicker as they’ve seen the enthusiasm for design thinking explode within the social good space in recent years. “To start with the end user in mind? Of course! How else would you create a product someone wants to use?”

However, in our work designing complex health information systems, dashboards, and other tools and strategies to improve data use, the idea of starting with the end user does feel relatively new.

Thinking back to graduate school nearly ten years ago, dashboard design classes focused on the functional skills, like how to use a pivot table in Excel, not on the complex processes of gathering user requirements to design something that could not only delight the end user, but be co-designed with them.

As part of designing for data use and data visualization design workshops, we’ve collaborated with design firms to find new ways to crack the nut of developing products and processes that help decision-makers use information. Using design thinking tools like ranking exercises, journey maps, and personas has helped users identify critical barriers to data use and find innovative ways to address them.

If you’re thinking about integrating design thinking approaches into data-centered projects, here are our five key considerations to take into account before you begin:

  1. Design thinking is a mindset, not a workshop agenda. When you’re setting out to incorporate design thinking into your work, consider what that means throughout the project lifecycle, from continuous engagement and touchpoints with your data users to iterating on solutions long after the initial workshops end.
  2. Engage the right people – you need a diverse range of perspectives and experiences to uncover problems and co-create solutions. This means thinking of the usual stakeholders using the data at hand, but also engaging those adjacent to the data. In health information systems, this could be the clinicians reporting on the registers, the mid-level managers at the district health office, and even the printer responsible for distributing paper registers.
  3. Plan for the long haul. Don’t limit your planning and projections of time, resources, and end user engagement to initial workshops. Coming out of your initial design workshops, you’ll likely have prototypes that require continued attention to fully build and implement.
  4. Focus on identifying and understanding the problem you’ll be solving. You’ll never be able to solve every problem and overcome every data use barrier in one workshop (or even in one project). Work with your users to develop a specific focus and thoroughly understand the barriers and challenges from their perspectives so you can tackle the most pressing issues (or choose deliberately to work on longer term solutions to the largest impediments).
  5. The journey matters as much as the destination. One of the greatest aha moments coming out of these workshops has been from participants who see opportunities to change how they facilitate meetings or manage teams by adopting some of the activities and facilitation approaches in their own work. Adoption of the prototypes shouldn’t be your only metric of success.

The Designing for Data Use workshops were funded by (1) USAID and implemented by the MEASURE Evaluation project and (2) the Global Fund through the Data Use Innovations Fund. Matchboxology was the design partner for both sets of workshops, and John Snow Inc. was the technical partner for the Data Use Innovations sessions. Learn more about the process and learning from the MEASURE Evaluation workshops in Applying User Centered Design to Data Use Challenges: What We Learned, and see our slides from our MERL Tech session “The Simpsons, Design, and Data Use”.

MERL Tech Round Up | October 2, 2017

We’ll be experimenting with a monthly round-up of MERL Tech related content (bi-weekly if there’s enough to fill a post). Let us know if it’s useful! We aim to keep it manageable and varied, rather than a laundry list of every possible thing. The format, categories, and topics will evolve as we see how it goes and what the appetite is.

If you have anything you’d like to share or see featured, feel free to send it on over or post on Twitter using the #MERLTech hashtag.

On the MERL Tech Blog:

Big Data in Evaluation – Michael Bamberger discusses the future of development evaluation in the age of Big Data and ways to build bridges between evaluators and Big Data analysis. Rick Davies (Monitoring and Evaluation News) raises some great points in the comments (and Michael replies).

Experiences with Mobile case management for multi-dimensional accountability from Oxfam and SurveyCTO.

Thoughts on MERL Tech Maturity Models & Next Generation Transparency & Accountability from Megan Colnar (Open Society Foundations) and Alison Miranda (Transparency and Accountability Initiative).

The best learning at MERL Tech DC came from sharing failures from Ambika Samarthya-Howard (Praekelt.org).

We’ll be posting more MERL Tech DC summaries and wrap-up posts over the next month or two. We’re also gearing up for MERL Tech London coming up in March 2018. Stay tuned for more information on that.

Stuff we’re reading / watching:

New research (Making All Voices Count research team) on ICT-mediated citizen engagement. What makes it transformative? 

Opportunities and risks in emerging technologies, including white papers on Artificial Intelligence, Algorithmic Accountability, and Control of Personal Data (The Web Foundation).

Research on Privacy, Security, and Digital Inequality: How Technology Experiences and Resources Vary by Socioeconomic Status, Race, and Ethnicity in the United States from Mary Madden (Data & Society).

Tools, frameworks and guidance we’re bookmarking:

A framework for evaluating inclusive technology, technology for social impact and ICT4D programming (SIMLab) and an example of its application. The framework is open source, so you can use and adapt it!

A survey tool and guidance for assessing women’s ICT access and use (FHI 360’s mSTAR project). (Webinar coming up on Oct 10th.)

Series on data management (DAI) covering 1) planning and collecting data; 2) managing and storing data; and 3) getting value and use out of the data that’s collected through analysis and visualization.

Events and training:

Webinar on using ICT in monitoring and evaluation of education programming for refugee populations (USAID and INEE). Recording and presentations from the Sep 28th event here.

Webinar on assessing women’s ICT access and use, Oct 10th (NetHope, USAID, and mSTAR/FHI 360).

Let us know of upcoming events we should feature.

Jobs:

Send us vacancies for MERL Tech-related jobs, consultants, RFPs and we’ll help spread the word.

Failures are the way forward

By Ambika Samarthya-Howard, Head of Communications at Praekelt.org. This post also appears on the Praekelt.org blog.

Marc Mitchell, President of D-Tree International, gives his Lightning Talk: “When the Control Group Wins.”

Attending conferences often reminds me of dating: you put your best foot forward and do yourself up, and hide the rest for a later time. I always found it refreshing when people openly put their cards on the table.

I think that’s why I especially respected and enjoyed my experience at MERL Tech in DC last week. One of the first sessions I went to was a World Café-style breakout exploring how to be inclusive in M&E tech in the field. The organisations, like Global Giving and Keystone, posed hard questions about community involvement in data collection at scale, and how to get people less familiar with technology, or with less access to it, involved in the process. They didn’t have any of the answers. They wanted to learn from us.

This was followed after lunch by a round of lightning talks. One organisation spoke very openly about how much money and time they were wasting on their data collection technologies. Another confessed their lack of structure and organisation, and talked about collaborating with organisations like DataKind to make sense of their data. Anahi Ayala Iacucci from Internews did a presentation on the pitfalls and realities of M&E: “we all skew the results in order to get the next round of funding.” She fondly asked us to work together so we could “stop farting in the wind”. D-Tree International spoke about a trial of transport vouchers for pregnant women in Zanzibar, and how the control group that did not receive any funding actually did better. They had to stop funding the vouchers.

The second day I attended an entire session where we looked at existing M&E reports available online to critique their deficiencies and identify where the field was lacking in knowledge dissemination. As a Communications person, I found that looking at the write-ups of the data ironically gave me instant insight into ways forward and where gaps could be filled — which I believe is exactly what the speakers of the session intended. When you can so clearly see why and how things aren’t working, it actually inspires a different approach and way of working.

I was thoroughly impressed with the way people shared at MERL Tech. When you see an organisation able to talk so boldly about their learning curves or gaps, you respect their work, growth, and learnings. And that is essentially the entire purpose of a conference.

Back to dating… and partnerships. Sooner or later, if the relationship works out, your partner is going to see you in the a.m. for who you really are. Why not cut to the chase, and go in with your natural look?  Then you can take the time to really do any of the hard work together, on the same footing.

Mobile Case Management for Multi-Dimensional Accountability

This is a cross-post from Christopher Robert of Dobility. It was originally published September 13 on the SurveyCTO blog.

At MERL Tech DC 2017, Oxfam’s Emily Tomkys Valteri and I teamed up to lead a session on Mobile case management for multi-dimensional accountability. This blog post shares some highlights from that session. [Note: session slides are available here]

Background

In their Your Word Counts project, Oxfam is collaborating with local and global partners to capture, analyze, and respond to community feedback data using a mobile case management tool. The goal is to inform Oxfam’s Middle East humanitarian response and give those affected by crisis a voice for improved support and services. This project is a scale-up of an earlier pilot project, and both the pilot and the scale-up have been supported by the Humanitarian Innovation Fund.

Oxfam’s use of SurveyCTO’s case-management features has been innovative, and they have been helping to support improvements in the core technology. In this session, we discussed both the core technology and the broader organizational and logistical challenges that Oxfam has encountered in the field.

Mobile case management: an introduction 

In standard applications of mobile data collection, enumerators, inspectors, program officers, or others use a mobile phone or tablet to collect data. Whether they quietly observe things, interview people, or inspect facilities, they ultimately enter some kind of data into a mobile device. In systems like SurveyCTO, data collection officially begins when they click a Fill Blank Form button and choose a digital form to fill out.

Mobile data collection

Mobile case management is much the same, but the process begins with cases and then proceeds to forms. As far as the core technology is concerned, a case might be a clinic, a school, a water point, a household – pretty much any unit that’s meaningful in the given context. Instead of choosing Fill Blank Form and choosing a form, users in the field choose Manage Cases and then choose a particular case from a list that’s filtered specifically for that user (e.g., to include only schools in their area); once they select a case, they then select one of the forms that is outstanding for that case.

Mobile case management

Behind the scenes, the case list is really just a spreadsheet. It includes columns for the unique case ID, the label that should be used to identify the case to users, the list of forms that should be filled for the case, and the users and/or user roles that should see the case listed in their case list. Importantly, the case list is not static: any form can update or add a case, and thus as users fill forms the case list can be dynamically revised and extended. (In SurveyCTO, the case list is simply a server dataset: it can be manually uploaded as a .csv, attached to forms, and updated just like any other dataset.)

Mobile case management: case list
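
To make that structure concrete, here is a minimal sketch in Python of a hypothetical case list and the per-user filtering described above; the field names and values are invented for illustration, and SurveyCTO’s actual dataset schema may differ.

    # A hypothetical case list: one row per case. In SurveyCTO this would live
    # in a server dataset (uploadable as a .csv); field names here are invented.
    case_list = [
        {"id": "C-001", "label": "Al-Noor Clinic", "forms": ["feedback_update"], "users": ["amina"]},
        {"id": "C-002", "label": "Water point 7", "forms": ["feedback_intake", "feedback_update"], "users": ["joseph"]},
        {"id": "C-003", "label": "Hodan School", "forms": ["feedback_update"], "users": ["amina", "joseph"]},
    ]

    def cases_for_user(username):
        """Return only the cases a given field user's device should list."""
        return [case for case in case_list if username in case["users"]]

    # A user tapping Manage Cases sees their filtered list, then picks one of
    # the forms outstanding for the selected case.
    for case in cases_for_user("amina"):
        print(case["id"], case["label"], "outstanding forms:", case["forms"])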

Oxfam’s innovative use case: Your Word Counts 


Oxfam accountability feedback loop. Diagram credit: Oxfam GB.

In Oxfam’s Your Word Counts project, cases represent any kind of feedback from the community. Volunteers and program staff carry mobile phones and log feedback as new cases whenever they interact with community members; technical teams then work to resolve feedback within a week, filling out new forms to update cases as their status changes; and program staff then close the loop with the original community members when possible, before closing the case. Because the data is all available in a single electronic system, in-country, regional, and even global teams can then report on and analyze both the community feedback and the responses over time.

There have been some definite successes in piloting and early scale-up:

  • By listening to community members, recording their feedback, and following up, the community feedback system has helped to build trust.
  • The digital process of recording referrals, updates, and eventually responses has been rapid, speeding responsiveness to feedback overall.
  • Since all digital forms can be updated easily, the system is dynamic and flexible enough to adapt as programs or needs change.
  • The solution appears to be low-cost, scalable, and sustainable.

There have been both organizational and logistical challenges, however. For example:

  • For a system like this to truly be effective, fundamental responsibility for accountability must be shared organization-wide. While MEAL officers (monitoring, evaluation, accountability, and learning officers) can help to set up and manage accountability systems, technical teams, program teams, and senior leadership ultimately have to share ownership and responsibility in order for the system to function and be sustained.
  • Globally-predefined feedback categories turned out not to fit well with early deployment contexts, and so the program team needed to re-think how to most effectively categorize feedback. (See Oxfam’s blog post on the subject.)
  • In dynamic in-country settings, staff turnover can be high, posing major logistical and sustainability challenges for systems of all kinds.
  • While cases can be added and updated offline, ultimately an Internet connection is required to synchronize case lists with a central server. In some settings, access to office Internet has been a challenge.
  • Ideally, cases would be easily referred across agencies working in a particular setting, but some agencies have been reluctant to buy into shared digital systems.

Oxfam’s MEAL team is exploring ways to facilitate a broader accountability culture throughout the organization. In country programs, for example, MEAL coordinators are looking to use office whiteboards to track key indicators of feedback performance and engage staff in discussions of what those indicators mean for them. More broadly, Oxfam is looking to highlight best practices in responding to and acting on feedback and seeking other ways to incentivize teams in this area.

Oxfam’s work is ongoing, and you can follow their progress on their project blog.

Mobile case management: Where it’s going 

While Oxfam works to build and support both systems and culture for accountability in their humanitarian response programs, we at Dobility are working to improve the core technology. With Oxfam’s feedback and support, we are currently working to improve the user interface used to filter and browse case lists, both on devices (in the field) and on the web (in the office). We are also working to improve the user interface for those setting up and managing these kinds of case-management systems. If you have specific ideas, please share them by commenting below!

Maturity Models: Visualizing Progress Towards Next-Generation Transparency and Accountability

By Alison Miranda (TAI) and Megan Colnar (Open Society Foundations). This is a cross-post of a piece published on September 17th on the Transparency and Accountability Initiative’s blog.

How can we assess progress on a second-generation way of working in the transparency, accountability and participation (TAP) field? Monitoring, evaluation, research, and learning (MERL) maturity models can provide some inspiration. The 2017 MERL Tech conference in Washington, DC was a two-day bonanza of lightning talks, plenary sessions, and hands-on workshops among participants who use technology for MERL.

Here are key conference takeaways from two MEL practitioners in the TAP field.

1. Making open data useful

Several MERL Tech sessions resonated deeply with the TAP field’s efforts to transition from fighting for transparent and open data towards linking open data to accountability and governance outcomes. Development Gateway and InterAction drew on findings from “Avoiding Data Graveyards” as we explored progress and challenges for International Aid Transparency Initiative (IATI) data use cases. While transparency is an important value, what is gained (or lost) in data use for collaboration when there are many different potential data consumers?

A partnership between Freedom House and DataKind is moving the Freedom in the World study towards a more transparent display of index sub-indicators, and building a more robust – and usable! – data set by reformatting and integrating their data and other secondary big data sets. What could such an initiative yield for the Extractive Industry Transparency Initiative (EITI), for example, if equivalent data sets were available?

And finally, as TAP practitioners are keenly aware, power and politics can overshadow evidence in decision making. Another Development Gateway presentation reminded us that it is important to work with data producers and users to identify decisions that are (or can be) data-driven, and to recognize when other factors are driving decisions. (The incentives to supply open data are a whole other can of worms!)

Drawing on our second-generation TAP approach, more work is needed for the TAP and MERL fields to move from “open data everywhere, all of the time” to planning for, and encouraging, more effective data use.

2. Tech for MERL for improved policy, practice, and outcomes

Among our favorite moments at MERL Tech was when Dobility Founder and CEO Christopher Robert remarked that “the most interesting innovations at MERL Tech aren’t the new, cutting-edge technology developments, but generic technology applied in innovative ways.” Unsurprising for a tech company focused on using affordable technology to enable quality data collection for social impact, but a refreshing reminder amidst the talk of ‘AI’, ‘chatbots’, and ‘blockchains’ for development coursing through the conference.

The TAP field is certainly not a stranger to employing technology from apps to curb trade corruption in Nigeria to Citizen Helpdesks in Nepal, Liberia, and Mali to crowdsourced political campaign expenditure monitoring in Bolivia, but our second-generation TAP insights remind us technology tools are not an end in themselves. MERL and technology are our means for collecting effective data, generating important insights and learning, building larger movements, and gathering context-specific evidence on transparency and accountability.

We are undoubtedly on the precipice of revolutionary technological advancements that can be readily (and maybe even affordably) deployed[1] to solve complex global challenges, but they will still be tools and not solutions.

3. Balancing inclusion and participation with efficiency and accuracy

We explored a constant conundrum for MERL: how to balance inclusion and participation with efficiency and accuracy. Girl Effect and Praekelt Foundation took “mixed methods” research to another level, combining online and offline efforts to understand user needs of adolescent girls and to support user-developed digital media content. Their iterative process showcased an effective way to integrate tech into the balancing act of inclusive – and holistic – design, paired with real-time data use.

This session on technology in citizen-generated data brought to light two case studies of how tech can both help and hinder this balancing act. The World Café discussions underscored the importance of planning for – and recognizing the constraints on – feedback loops, and provided a helpful reminder that MERL and tech professionals are often considering different “end users” in their design work!

So, which is it – balancing act or zero-sum game between inclusion and efficiency? The MERL community has long applied participatory methods. And tech solutions abound that can help with efficiency, accuracy, and inclusion. Indeed, the second-generation TAP focus on learning and collaboration is grounded in effective data use – but there are many potential “end users” to consider. These principles and practices can force uncomfortable compromises – particularly in the face of finite resources and limited data availability – but they are not at odds with each other. Perhaps the MERL and TAP communities can draw lessons from each other in striking the right balance.

4. Tech sees no development sector silos

One of the things that makes MERL Tech such an exciting conference is the deliberate mixing of tech nerds with MERL nerds. It’s pretty unique in its dual targeting of both types of professionals, who share a common purpose of social impact (whereas conferences like ICT4D cast a wider net, looking at the application of technology to broader development issues). And, though we MERL professionals like to think of design and iteration as squarely within our wheelhouse, being in a room full of tech experts can quickly remind you that our adaptation game has a lot of room to grow. We talk about user-centered design in TAP, but when the tech crowd was asked in plenary “would you even think of designing software or an app without knowing who was going to use it?” they responded with a loud and exuberant laugh.

Tech has long employed systematic approaches to user-centered design, prototyping, iteration, and adaptation, all of which can offer compelling lessons to guide MERL practices and methods. Though we know Context is King, it is exhilarating to know that the tech specialists assembled at the conference work across traditional silos of development work (from health to corruption, and everything in between). End users are, of course, crucial to the final product but the life cycle process and steps follow a regular pattern, regardless of the topic area or users.

The second-generation wave in TAP similarly moved away from project-specific, fragmented, or siloed planning and learning towards a focus on collective approaches and long-term, more organic engagement.

American Evaluation Association President, Kathy Newcomer, quipped that maybe an ‘Academy Awards for Adaptation’ could inspire better informed and more adept evolutions to strategy as circumstances and context shift around us. Adding to this, and borrowing from the tech community, we wonder where we can build more room to beta test, follow real demand, and fail fast. Are we looking towards other sectors and industries enough or continuing to reinvent the wheel?

Alison left thinking:

  • Concepts and practices are colliding across the overlapping MERL, tech, and TAP worlds! In leading the Transparency and Accountability Initiative’s learning strategy, and supporting our work on data use for accountability, I often find myself toggling between different meanings of ‘data’, ‘data users’, and tech applications that can enable both of these themes in our strategy. These worlds don’t have to be compatible all the time, and concepts don’t have to compute immediately (I am personally still working out hypothetical blockchain applications for my MERL work!). But this collision of worlds is a helpful reminder that there are many perspectives to draw from in tackling accountable governance outcomes.
  • Maturity models come in all shapes and sizes, as we saw in the creative depictions created at MERL Tech, which included steps, arrows, paths, circles, cycles, and carrots! And the transparency and accountability field is collectively pursuing a next generation of more effective practice that will take unique turns for different accountability actors and outcomes. Regardless of what our organizational or programmatic models look like, MERL Tech reminded me that champions of continuous improvement are needed at all stages of the model – in MERL, in tech for development, and in the TAP field.

Megan left thinking:

  • That I am beginning to feel like I’m in a Dr. Seuss book. We talked ‘big data’, ‘small data’, ‘lean data’, and ‘thick data’. Such jargon-filled conversations can be useful for communicating complex concepts simply with others. Ah, but this is also the problem. This shorthand glosses over the nuances that explain what we actually mean. Jargon is also exclusive—it clearly defines the limits of your community and makes it difficult for newcomers. In TAP, I can’t help but see missed opportunities for connecting our work to other development sectors. How can health outcomes improve without holding governments and service providers accountable for delivering quality healthcare? How can smallholder farmers expect better prices without governments budgeting for and building better roads? Jargon is helpful until it divides us up. We have collective, global problems and we need to figure out how to talk to each other if we’re going to solve them.
  • In general, I’m observing a trend towards organic, participatory, and inclusive processes—in MERL, in TAP, and across the board in development and governance work. This is, almost universally speaking, a good thing. In MERL, a lot of this movement is a backlash to randomistas and the imposition of The RCT Gold Standard on social impact work. And, while I confess to being overjoyed that the “RCT-or-bust” mindset is fading out, I can’t help but think we’re on a slippery slope. We need scientific rigor, validation, and objective evidence. There has to be a line between ‘asking some good questions’ and ‘conducting an evaluation’. Collectively, we are working to eradicate unjust systems and eliminate poverty, and these issues require not just our best efforts and intentions, but workable solutions. Listen to Freakonomics’ recent podcast When Helping Hurts and commit with me to find ways to keep participatory and inclusive evaluation techniques rigorous and scientific, too.

[1] https://channels.theinnovationenterprise.com/articles/ai-in-developing-countries

Buckets of data for MERL

by Linda Raftree, Independent Consultant and MERL Tech Organizer

It can be overwhelming to get your head around all the different kinds of data and the various approaches to collecting or finding data for development and humanitarian monitoring, evaluation, research and learning (MERL).

Though there are many ways of categorizing data, lately I find myself conceptually organizing data streams into four general buckets when thinking about MERL in the aid and development space:

  1. ‘Traditional’ data. How we’ve been doing things for (pretty much) ever. Researchers, evaluators and/or enumerators are in relative control of the process. They design a specific questionnaire or a data gathering process and go out and collect qualitative or quantitative data; they send out a survey and request feedback; they do focus group discussions or interviews; or they collect data on paper and eventually digitize the data for analysis and decision-making. Increasingly, we’re using digital tools for all of these processes, but they are still quite traditional approaches (and there is nothing wrong with traditional!).
  2. ‘Found’ data. The Internet, digital data and open data have made it much easier to find, share, and re-use datasets collected by others, whether this is internally in our own organizations, with partners or just in general. These tend to be datasets collected in traditional ways, such as government or agency data sets. In cases where the datasets are digitized and have proper descriptions and clear provenance, consent has been obtained for use/re-use, and care has been taken to de-identify them, they can eliminate the need to collect the same data over again. Data hubs are springing up that aim to collect and organize these data sets to make them easier to find and use.
  3. ‘Seamless’ data. Development and humanitarian agencies are increasingly using digital applications and platforms in their work — whether bespoke or commercially available ones. Data generated by users of these platforms can provide insights that help answer specific questions about their behaviors, and the data is not limited to quantitative data. This data is normally used to improve applications and platform experiences, interfaces, content, etc., but it can also provide clues into a host of other online and offline behaviors, including knowledge, attitudes, and practices. One cautionary note is that because this data is collected seamlessly, users of these tools and platforms may not realize that they are generating data or understand the degree to which their behaviors are being tracked and used for MERL purposes (even if they’ve checked “I agree” to the terms and conditions). This has big implications for privacy that organizations should think about, especially as new regulations are being developed, such as the EU’s General Data Protection Regulation (GDPR). The commercial sector is great at this type of data analysis, but the development set are only just starting to get more sophisticated at it.
  4. ‘Big’ data. In addition to data generated ‘seamlessly’ by platforms and applications, there are also ‘big data’ and data that exists on the Internet that can be ‘harvested’ if one only knows how. The term ‘big data’ describes the application of analytical techniques to search, aggregate, and cross-reference large data sets in order to develop intelligence and insights. (See this post for a good overview of big data and some of the associated challenges and concerns.) Data harvesting is a term used for the process of finding and turning ‘unstructured’ content (message boards, a webpage, a PDF file, Tweets, videos, comments) into ‘semi-structured’ data so that it can then be analyzed. (Estimates are that 90 percent of the data on the Internet exists as unstructured content.) Currently, big data seems to be more apt for predictive modeling than for looking backward at how well a program performed or what impact it had. Development and humanitarian organizations (self included) are only just starting to better understand concepts around big data and how it might be used for MERL. (This is a useful primer.) A toy sketch of the ‘unstructured to semi-structured’ step follows this list.
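
As a toy illustration of that ‘unstructured to semi-structured’ step, here is a minimal Python sketch that pulls dates and hashtags out of made-up free-text posts into rows that could be analyzed; a real harvesting pipeline would involve much more (scraping, cleaning, language handling, and so on).

    import re

    # Made-up 'unstructured' content, standing in for scraped posts or comments.
    posts = [
        "Clinic in Kisumu out of stock again #healthfail 2017-09-14",
        "Great turnout at the Lindi water point meeting #wash 2017-09-20",
    ]

    def to_row(post):
        """Turn free text into a semi-structured row: date, hashtags, remaining text."""
        date = re.search(r"\d{4}-\d{2}-\d{2}", post)
        tags = re.findall(r"#\w+", post)
        text = re.sub(r"\d{4}-\d{2}-\d{2}|#\w+", "", post).strip()
        return {"date": date.group(0) if date else None, "tags": tags, "text": text}

    for row in (to_row(p) for p in posts):
        print(row)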

Thinking about these four buckets of data can help MERL practitioners to identify data sources and how they might complement one another in a MERL plan. Categorizing them as such can also help to map out how the different kinds of data will be responsibly collected/found/harvested, stored, shared, used, and maintained/retained/destroyed. Each type of data also has certain implications in terms of privacy, consent and use/re-use and how it is stored and protected. Planning for the use of different data sources and types can also help organizations choose the data management systems needed and identify the resources, capacities and skill sets required (or needing to be acquired) for modern MERL.

Organizations and evaluators are increasingly comfortable using mobile and/or tablets to do traditional data gathering, but they often are not using ‘found’ datasets. This may be because these datasets are not very ‘find-able,’ because organizations are not creating them, re-using data is not a common practice for them, the data are of questionable quality/integrity, there are no descriptors, or a variety of other reasons.

The use of ‘seamless’ data is something that development and humanitarian agencies might want to get better at. Even though large swaths of the populations that we work with are not yet online, this is changing. And if we are using digital tools and applications in our work, we shouldn’t let that data go to waste if it can help us improve our services or better understand the impact and value of the programs we are implementing. (At the very least, we had better understand what seamless data the tools, applications and platforms we’re using are collecting so that we can manage data privacy and security of our users and ensure they are not being violated by third parties!)

Big data is also new to the development sector, and there may be good reason it is not yet widely used. Many of the populations we are working with are not producing much data — though this is also changing as digital financial services and mobile phone use have become almost universal and the use of smartphones is on the rise. Normally organizations require new knowledge, skills, partnerships and tools to access and use existing big data sets or to do any data harvesting. Some say that big data along with ‘seamless’ data will one day replace our current form of MERL. As artificial intelligence and machine learning advance, who knows… (and it’s not only MERL practitioners who will be out of a job – but that’s a conversation for another time!)

Not every organization needs to be using all four of these kinds of data, but we should at least be aware that they are out there and consider whether they are of use to our MERL efforts, depending on what our programs look like, who we are working with, and what kind of MERL we are tasked with.

I’m curious how other people conceptualize their buckets of data, and where I’ve missed something or defined these buckets erroneously…. Thoughts?

Six priorities for the MERL Tech community

by Linda Raftree, MERL Tech Co-organizer

Participants at the London MERL Tech conference in February 2017 crowdsourced a MERL Tech History timeline (which I’ve shared in this post). Building on that, we projected out our hopes for a bright MERL Tech Future. Then we prioritized our top goals as a group (see below). We’ll aim to continue building on these as a sector going forward and would love more thoughts on them.

  1. Figure out how to be responsible with digital data and not put people, communities, or vulnerable groups at risk. Subtopics included: share data with others responsibly without harming anyone; agree a minimum ethical standard for MERL and data collection; agree principles for minimizing the data we collect so that only essential data is captured; develop duty of care principles for MERL Tech and digital data; develop ethical data practices and policies at organization level; shift the power balance so that digital data convenience costs are paid by orgs, not affected populations; and develop a set of quality standards for evaluation using tech.
  2. Increase data literacy across the sector, at individual level and within the various communities where we are working.
  3. Overcome the extraction challenge and move towards true downward accountability. Do good user/human centered design and planning together, be ‘leaner’ and more user-focused at all stages of planning and MERL. Subtopics included: development of more participatory MERL methods; bringing consensus decision-making to participatory MERL; realizing the potential of tech to shift power and knowledge hierarchies; greater use of appreciative inquiry in participatory MERL; more relevant use of tech in MERL — less data, more empowering, less extractive, more used.
  4. Integrate MERL into our daily operations to avoid the thinking that it is something ‘separate;’ move it to the core of operations management and make sure we have the necessary funds to do so; demystify it and make it normal! Subtopics included: we’ve stopped calling “MERL” a “thing” and the norm is to talk about monitoring as part of operations; data use is enabling real-time coordination; and no more paper-based surveys.
  5. Improve coordination and interoperability as related to data and tools, both between organizations and within organizations. Subtopics included: more interoperability; more data-sharing platforms; all data with suitable anonymization being open; universal exchange of machine-readable M&E data (e.g., standards? IATI? a platform?); sector-wide IATI compliance; tech solutions that enable sharing of qualitative and quantitative data; shared systems used across agencies (e.g., to refer feedback); better coordination; organizations sharing more data; and interoperability of tools. It was emphasized that donors should incentivize this and ensure that there are resources to manage it.
  6. Enhance user-driven and accessible tech that supports impact and increases efficiency, that is open source and can be built on, and that allows for interoperability and consistent systems of measurement and evaluation approaches.

In order to move on these priorities, participants felt we needed better coordination and sharing of tools and lessons among the NGO community. This could be through a platform where different innovations and tools are appropriately documented so that donors and organizations can more easily find good practice, useful tools and get a sense of ‘what’s out there’ and what it’s being used for. This might help us to focus on implementing what is working where, when, why and how in M&E (based on a particular kind of context) rather than re-inventing the wheel and endlessly pushing for new tools.

Participants also wanted to see MERL Tech as a community that is collaborating to shape the field and to ensure that we are a sector that listens, learns, and adopts good practices. They suggested hosting MERL Tech events and conferences in ‘the South’ and building out the MERL Tech community to include greater representation of users and developers in order to achieve optimal tools and management processes.

What do you think – have we covered it all? What’s missing?

Thoughts from MERL Tech UK

Post by Christopher Robert, Dobility (SurveyCTO)

MERL Tech UK was held in London this week. It was a small, intimate gathering by conference standards (just under 100 attendees), but jam-packed full of passion, accumulated wisdom, and practical knowledge. It’s clear that technology is playing an increasingly useful role in helping us with monitoring, evaluation, accountability, research, and learning – but it’s also clear that there’s plenty of room for improvement. As a technology provider, I walked away with both more inspiration and more clarity for the road ahead.

Some highlights:

  • I’ve often felt that conferences in the ICT4D space have been overly focused on what’s sexy, shiny, and new over what’s more boring, practical, and able to both scale and sustain. This conference was markedly different: it exceeded even the tradition of prior MERL Tech conferences in shifting from the pathology of “pilotitis” to a more hard-nosed focus on what really works.
  • There was more talk of data responsibility, which I took as another welcome sign of maturation in the space. This idea encompasses much beyond data security and the honoring of confidentiality assurances that we at Dobility/SurveyCTO have long championed, and it amounted to a rare delight: rather than us trying to push greater ethical consideration on others, for once we felt that our peers were pushing us to raise the bar even further. My own ideas in terms of data responsibility were challenged, and I came to realize that data security is just one piece of a larger ethical puzzle.
  • There are far fewer programs and projects re-inventing the wheel in terms of technology, which is yet another welcome sign of maturation. This is helping more resources to flow into the improvement and professionalization of a small but diverse set of technology platforms. Too much donor money still seems to be spent on new technologies where effective, well-established, and sustainable options already exist, but it’s getting better.
  • However, it’s clear that there are still plenty of ways to re-invent the wheel, and plenty of opportunities for greater collaboration and learning in the space. Most organizations are having to go it alone in terms of procuring and managing devices, training and supporting field teams, designing and monitoring data-collection activities, organizing and managing collected data, and more. Some larger international organizations who adopted digital technologies early have built up some impressive institutional capacity – but every organization still has its gaps and challenges, later adopters don’t have that historical capacity from which to draw, and smaller organizations don’t have the same kind of centralized institutional capacity.
  • Fortunately, MERL Tech organizers and participants like Oxfam GB and World Bank DIME have not only built tremendous internal capacity, but also been extremely generous in thinking through how to share that capacity with others. They share via their blogs and participation in conferences like this, and they are always thinking about new and more effective ways to share. That’s both heartening and inspiring.

I loved the smaller, more intimate nature of MERL Tech UK, but I have quickly come to somewhat regret that it wasn’t substantially larger. My first London day post-MERL-Tech was spent visiting with some other SurveyCTO users, including a wonderfully well-attended talk on data quality at the Zoological Society of London, a meeting with some members of Imperial College London’s Schistosomiasis Control Initiative, and a discussion about some new University of Cambridge efforts to improve data and research on rare diseases in the UK. Later today, I’ll meet with some members of the TUMIKIA project team at the London School of Hygiene and Tropical Medicine, and in retrospect I now wish that all of these others had been at MERL Tech. I’m trying to share lessons as best I can, but it’s obvious that so many other organizations could both contribute to and profit from the kinds of conversations and sharing that were happening at MERL Tech.

Personally, I’ve always been distrustful of product user conferences as narrow, ego-driven, sales-and-marketing kinds of affairs, but I’m suddenly seeing how a SurveyCTO user conference could make real (social) sense. Our users are doing such incredible things, learning so much in the process, building up so much capacity – and so many of them are also willing to share generously with others. The key is providing mechanisms for that sharing to happen. At Dobility, we’ve just kept our heads down and stayed focused on providing and supporting affordable, accessible technology, but now I’m seeing that we could play a greater role in facilitating greater progress in the space. With thousands of SurveyCTO projects now in over 130 countries, the amount of learning – and the potential social benefits to sharing more – is enormous. We’ll have to think about how we can get better and better about helping. And please comment here if you have ideas for us!

Thanks again to Oxfam GB, Comic Relief, and everybody else who made MERL Tech UK possible. It was a wonderful event.