
MERL Tech and the World of ICT Social Entrepreneurs (WISE)

by Dale Hill, an economist/evaluator with over 35 years’ experience in development and humanitarian work. Dale led the session “The Growing World of ICT Social Entrepreneurs (WISE): Is Social Impact Significant?” at MERL Tech DC 2018.

Roger Nathanial Ashby of OpenWise and Christopher Robert of Dobility share experiences at MERL Tech.

What happens when evaluators trying to build bridges with new private sector actors meet real social entrepreneurs? A new appreciation for the dynamic “World of ICT Social Entrepreneurs (WISE)” and the challenges they face in marketing, pricing, and financing (not to mention measuring social impact).

During this MERL Tech session on WISE, Dale Hill, evaluation consultant, presented grant-funded research on measuring the social impact of social entrepreneurship ventures (SEVs) from three perspectives. She then invited five ICT company CEOs to comment.

The three perspectives are:

  • the public, which wants to hold companies accountable, particularly those that have chosen to become legal or certified “benefit corporations”;
  • the social entrepreneurs, who are fully occupied trying to reach financial sustainability or profit goals while also serving the public good; and
  • evaluators, who see the important influence of these new actors but know their professional tools need adaptation to capture their impact.

Dale’s introduction covered the overlapping definitions of various categories of SEVs, including legally defined “benefit corporations” and certified “B Corps”, which are intertwined with the certification options available to social entrepreneurs. The “new middle” of SEVs sits on a spectrum between for-profit companies on one end and not-for-profit organizations on the other. Various types of funders, including social impact investors, new certification agencies, and monitoring and evaluation (M&E) professionals, are now interested in measuring the growing social impact of these enterprises. A show of hands revealed that representatives of most of these types of actors were present at the session.

The five social entrepreneur panelists all had ICT businesses with global reach, but they varied in legal and certification status and in years of operation (from 1 to 11). All aimed to deploy new technologies to non-profit organizations or social sector agencies on high-value, low-price terms. Some had worked in non-profits in the past and hoped that venture capital would prove easier to obtain than grant funding. Others had worked in government, where they observed the need for customized solutions that required market incentives to fully develop.

The evaluator and CEO panelists’ identification of challenges converged in some cases:

  • maintaining affordability and quality when using market pricing;
  • obtaining venture capital or other financing;
  • worry over “mission drift”, if financial sustainability imperatives or shareholder profit-maximization preferences prevail over founders’ social impact goals; and
  • the still-present digital divide when serving global customers (insufficient bandwidth, affordability issues, and limited small-business capital in some client countries).

New issues raised by the CEOs (and some social entrepreneurs in the audience) included:

  • the need to provide incentives to customers to use quality assurance or security features of software, to avoid falling short of achieving the SEV’s “public good” goals;
  • the possibility of hostile takeover, given the high value of technological innovations;
  • the fact that mention of a “social impact goal” was a red flag to some funders, who then went elsewhere to seek profit maximization.

There was also a rich discussion of the benefits and costs of obtaining certification: it was a useful “branding and market signal” to some consumers but a negative one to some funders, and it posed an added burden on managers to document and report social impact, sometimes according to guidelines not in line with their preferences.

Surprises?

a) Despite the “hype”, social impact investment funding proved elusive to the panelists. Options available to them included sliding-scale pricing, establishing a complementary for-profit arm, or debt financing;

b) Many firms were not yet implementing planned monitoring and evaluation (M&E) programs, despite M&E being one of their service offerings; and

c) The legislation on reporting the social impact of benefit corporations varies considerably among the 31 states that have enacted it, and the degree of enforcement is not clear.

A conclusion for evaluators: social entrepreneurs’ use of market solutions creates an evolving, dynamic environment that poses more complex challenges for measuring social impact. It requires new criteria and tools, ideally timed with an understanding of market ups and downs and developed with the full participation of the business managers.

Submit your session ideas for MERL Tech London by Nov 10th!

MERL Tech London

Please submit a session idea, register to attend, or reserve a demo table for MERL Tech London, on March 20-21, 2018, for in-depth sharing and exploration of what’s happening across the multidisciplinary monitoring, evaluation, research and learning field.

Building on MERL Tech London 2017, we will engage 200 practitioners from across the development and technology ecosystems for a two-day conference seeking to turn the theories of MERL technology into effective practice that delivers real insight and learning in our sector.

MERL Tech London 2018

Digital data and new media and information technologies are changing MERL practices. The past five years have seen technology-enabled MERL growing by leaps and bounds, including:

  • Adaptive management and ‘developmental evaluation’
  • Faster, higher-quality data collection
  • Remote data gathering through sensors and self-reporting by mobile
  • Big Data and social media analytics
  • Story-triggered methodologies

Alongside these new initiatives, we are seeing increasing documentation and assessment of technology-enabled MERL initiatives. Good practice guidelines and new frameworks are emerging and agency-level efforts are making new initiatives easier to start, build on and improve.

The swarm of ethical questions related to these new methods and approaches has spurred greater attention to areas such as responsible data practice and the development of policies, guidelines and minimum ethical frameworks and standards for digital data.

Please submit a session idea, register to attend, or reserve a demo table for MERL Tech London to discuss all this and more! You’ll have the chance to meet, learn from, debate with 150-200 of your MERL Tech peers and to see live demos of new tools and approaches to MERL.

Submit Your Session Ideas Now!

Like previous conferences, MERL Tech London will be a highly participatory, community-driven event and we’re actively seeking practitioners in monitoring, evaluation, research, learning, data science and technology to facilitate every session.

Please submit your session ideas now. We are particularly interested in:

  • Case studies: Sharing end-to-end experiences/learning from a MERL Tech process
  • MERL Tech 101: How-to use a MERL Tech tool or approach
  • Methods & Frameworks: Sharing/developing/discussing methods and frameworks for MERL Tech
  • Data: Big, large, small, quant, qual, real-time, online-offline, approaches, quality, etc.
  • Innovations: Brand new, untested technologies or approaches and their application to MERL(Tech)
  • Debates: Lively discussions, big picture conundrums, thorny questions, contentious topics related to MERL Tech
  • Management: People, organizations, partners, capacity strengthening, adaptive management, change processes related to MERL Tech
  • Evaluating MERL Tech: comparisons or learnings about MERL Tech tools/approaches and technology in development processes
  • Failures: What hasn’t worked and why, and what can be learned from this?
  • Demo Tables: to share MERL Tech approaches, tools, and technologies
  • Other topics we may have missed!

Session Submission Deadline: Friday, November 10, 2017.

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. You will hear back from us in early December and, if selected, you will be asked to submit an updated and final session title, summary and outline by Friday, January 19th, 2018.

Register Now!

Please register to attend, or reserve a demo table for MERL Tech London 2018 to examine these trends with an exciting mix of educational keynotes, lightning talks, and group breakouts, including an evening Fail Festival reception to foster needed networking across sectors.

We are charging a modest fee to better allocate seats and we expect to sell out quickly again this year, so buy your tickets or demo tables now. Event proceeds will be used to cover event costs and to offer travel stipends for select participants implementing MERL Tech activities in developing countries.

MERL Tech Maturity Models

by Maliha Khan, a development practitioner in the fields of design, measurement, evaluation and learning, and Linda Raftree, independent consultant and lead organizer of MERL Tech. Maliha led the Maturity Model sessions at MERL Tech DC.

MERL Tech is a platform for discussion, learning and collaboration around the intersection of digital technology and Monitoring, Evaluation, Research, and Learning (MERL) in the humanitarian and international development fields. The MERL Tech network is multidisciplinary and includes researchers, evaluators, development practitioners, aid workers, technology developers, data analysts and data scientists, funders, and other key stakeholders.

One key goal of the MERL Tech conference and platform is to bring people from diverse backgrounds and practices together to learn from each other and to coalesce MERL Tech into a more cohesive field in its own right — a field that draws from the experiences and expertise of these various disciplines. MERL Tech tends to bring together six broad communities:

  • traditional M&E practitioners, who are interested in technology as a tool to help them do their work faster and better;
  • development practitioners, who are running ICT4D programs and beginning to pay more attention to the digital data produced by these tools and platforms;
  • business development and strategy leads in organizations who want to focus more on impact and keep their organizations up to speed with the field;
  • tech people, who are interested in the application of newly developed digital tools, platforms and services to the field of development, but may lack knowledge of the context and nuance of that application;
  • data people, who are focused on data analytics, big data, and predictive analytics, but similarly may lack a full grasp of the intricacies of the development field; and
  • donors and funders, who are interested in technology, impact measurement, and innovation.

Since our first series of Technology Salons on ICT and M&E in 2012 and the first MERL Tech conference in 2014, the aim has been to create stronger bridges between these diverse groups and encourage the formation of a new field with an identity of its own. In other words, we want to move people beyond identifying as, say, an “evaluator who sometimes uses technology,” and towards identifying as members of the MERL Tech space (or field or discipline), with a clearer understanding of how these various elements work together and play off one another, and how they influence (and are influenced by) the shifts and changes happening in the wider ecosystem of international development.

By building and strengthening these divergent interests and disciplines into a field of their own, we hope that this community of practitioners can begin to better understand its own internal competencies and what it, as a unified field, offers to international development. This is a challenging prospect: beyond a shared use of technology to gather, analyze, and store data, and an interest in better understanding how, when, why, and where these tools work for MERL and for development/humanitarian programming, there aren’t many similarities between participants.

At the MERL Tech London and MERL Tech DC conferences in 2017, we made a concerted effort to get to the next level in the process of creating a field. In London in February, participants created a timeline of technology and MERL and identified key areas that the MERL Tech community could work on strengthening (such as data privacy and security frameworks and more technological tools for qualitative MERL efforts). At MERL Tech DC, we began trying to understand what a ‘maturity model’ for MERL Tech might look like.

What do we mean by a ‘maturity model’?

Broadly, maturity models seek to qualitatively assess people/culture, processes/structures, and objects/technology to craft a predictive path that an organization, field, or discipline can take in its development and improvement.

Initially, we considered constructing a “straw” maturity model for MERL Tech and presenting it at the conference. The idea was that our straw model’s potential flaws would spark debate and discussion among participants. In the end, however, we decided against this approach because (a) we were worried that our straw model would unduly influence people’s opinions, and (b) we were not very confident in our own ability to construct a good maturity model.

Instead, we opted to facilitate a creative space over three sessions to encourage discussion on what a maturity model might look like and what it might contain. Our vision for these sessions was to get participants to brainstorm in mixed groups containing different types of people; we didn’t want small subsets of participants to create models independently without the input of others.

In the first session, “Developing a MERL Tech Maturity Model”, we invited participants to consider what a maturity model might look like. Could we begin to imagine a graphic model that would enable self-evaluation and allow informed choices about how best to develop competencies, change and adjust processes, and align structures in organizations to optimize the use of technology for MERL, or indeed for other parts of the development field?

In the second session, “Where do you sit on the Maturity Model?”, we asked participants to use the ideas that emerged from the first session’s brainstorm to consider their own organizations and work, and to compare them against potential maturity models. We encouraged participants to assess themselves on a scale from green (young sapling) through yellow (somewhere in the middle) to red (mature MERL Tech ninja!) and to strike up conversations with other people in the breaks about why they chose that color.

In our third session, “Something old, something new”, we consolidated and synthesized the various concepts participants had engaged with throughout the conference. Everyone was encouraged to reflect on their own learning, lessons for their work, and what new ideas or techniques they may have picked up on and might use in the future.

The Maturity Models

As can be expected, when over 300 people take markers and crayons to paper, many a creative model emerges. We asked participants to gallery-walk the models during the breaks over the next day and vote on their favorite models.

We won’t go into detail on what all 24 models showed, but some common themes emerged from the ones that got the most votes. Almost all maturity models include dimensions (elements, components) and stages, plus a depiction of potential progression from early stages to later stages across each dimension. They all also showed who the key stakeholders or players were, and some included details on what might be expected of them at different stages of maturity.
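
To make that shared anatomy concrete, here is a minimal sketch of a maturity model represented as dimensions crossed with ordered stages. It is our own illustration for this post, and every dimension and stage name in it is invented rather than taken from any of the conference models:

    # A maturity model sketched as dimensions crossed with ordered stages.
    # Every name below is invented for illustration, not taken from any of
    # the models drawn at the conference.
    STAGES = ["nascent", "emerging", "established", "leading"]

    DIMENSIONS = {
        "data collection": "how systematically digital tools gather data",
        "data use": "whether data informs decisions and adaptation",
        "responsible data": "privacy, security, and ethics practices",
    }

    def assess(scores):
        """Print where a (hypothetical) organization sits on each dimension."""
        for dimension, stage in scores.items():
            level = STAGES.index(stage) + 1
            print(f"{dimension}: {stage} (stage {level} of {len(STAGES)}; "
                  f"measures {DIMENSIONS[dimension]})")

    # Example self-assessment for a hypothetical organization.
    assess({
        "data collection": "established",
        "data use": "emerging",
        "responsible data": "nascent",
    })

Placing each dimension at a stage is essentially what the color-coded self-assessments in the second session asked participants to do.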

Two of the models (MERLvana and the Data Appreciation Maturity Model, or DAMM) depicted the notion that reaching maturity is never really possible and that the process is an almost infinite loop. As the presenters explained MERLvana: “it’s an impossible-to-reach ideal state, but one must keep striving for it, in ever closer and tighter loops with fewer and fewer gains!”

MERLvana
Data Appreciation Maturity Model

“MERL-tropolis” had clearly defined categories (universal understanding, learning culture and awareness, common principles, and programmatic strategy) and the structures/buildings needed for those (staff, funding, tools, standard operating procedures, skills).

MERLTropolis

The most popular was “The Data Turnpike”, which showed the route from a starting point of “Implementation with no data” to the finish line of “Technology, capacity and interest in data and adaptive management”, with the pitfalls along the way (misuse, untimely data, low ethics, etc.) marked to warn of the dangers.

The Data Turnpike

As organizers of the session, we found the exercises both interesting and enlightening, and we hope they helped participants to begin thinking about their own MERL Tech practice in a more structured way. Participant feedback on the session was at polar extremes. Some people loved the exercise and felt that it allowed them to step back and think about how they and their organization were approaching MERL Tech and how they could move forward more systematically with building greater capacities and higher quality work. Some told us that they left with clear ideas on how they would work within their organizations to improve and enhance their MERL Tech practice, and that they had a better understanding of how to go about that. A few did not like that we had asked them to “sit around drawing pictures”, and some others felt that the exercise was unclear and that we should have provided a model instead of asking people to create one. [Note: This is an ongoing challenge when bringing together so many types of participants from such diverse backgrounds and varied ways of thinking and approaching things!]

We’re curious if others have worked with “maturity models” and if they’ve been applied in this way or to the area of MERL Tech before. What do you think about the models we’ve shared? What is missing? How can we continue to think about this field and strengthen our theory and practice? What should we do at MERL Tech London in March 2018 and beyond to continue these conversations?

Five lessons learned from applying design thinking to data use

by Amanda Makulec, Data Visualization Lead, Excella Consulting, and Barb Knittel, Research, Monitoring & Evaluation Advisor, John Snow Inc. Amanda and Barb led “How the Simpsons Make Data Use Happen” at MERL Tech DC.


Workshopping ways to make data use happen.

Human centered design isn’t a new concept. We’ve heard engineers, from aerospace to software, quietly snicker as they’ve seen the enthusiasm for design thinking explode within the social good space in recent years. “To start with the end user in mind? Of course! How else would you create a product someone wants to use?”

However, in our work designing complex health information systems, dashboards, and other tools and strategies to improve data use, the idea of starting with the end user does feel relatively new.

Thinking back to graduate school nearly ten years ago, dashboard design classes focused on the functional skills, like how to use a pivot table in Excel, not on the complex processes of gathering user requirements to design something that could not only delight the end user, but be co-designed with them.

As part of our designing-for-data-use and data visualization design workshops, we’ve collaborated with design firms to find new ways to crack the nut of developing products and processes that help decision makers use information. Using design thinking tools like ranking exercises, journey maps, and personas has helped users identify critical barriers to data use and find innovative ways to address them.

If you’re thinking about integrating design thinking approaches into data-centered projects, here are our five key considerations to take into account before you begin:

  1. Design thinking is a mindset, not a workshop agenda. When you’re setting out to incorporate design thinking into your work, consider what that means throughout the project lifecycle, from continuous engagement and touchpoints with your data users onward.
  2. Engage the right people – you need a diverse range of perspectives and experiences to uncover problems and co-create solutions. This means thinking of the usual stakeholders using the data at hand, but also engaging those adjacent to the data. In health information systems, this could be the clinicians reporting on the registers, the mid-level managers at the district health office, and even the printer responsible for distributing paper registers.
  3. Plan for the long haul. Don’t limit your planning and projections of time, resources, and end-user engagement to the initial workshops. Coming out of your initial design workshops, you’ll likely have prototypes that require continued attention to build and implement.
  4. Focus on identifying and understanding the problem you’ll be solving. You’ll never be able to solve every problem and overcome every data use barrier in one workshop (or even in one project). Work with your users to develop a specific focus and thoroughly understand the barriers and challenges from their perspectives, so you can tackle the most pressing issues (or choose deliberately to work on longer-term solutions to the largest impediments).
  5. The journey matters as much as the destination. One of the greatest ah-ha moments coming out of these workshops has been from participants who see opportunities to change how they facilitate meetings or manage teams by adopting some of the activities and facilitation approaches in their own work. Adoption of the prototypes shouldn’t be your only metric of success.

The Designing for Data Use workshops were funded by (1) USAID and implemented by the MEASURE Evaluation project and (2) the Global Fund through the Data Use Innovations Fund. Matchboxology was the design partner for both sets of workshops, and John Snow Inc. was the technical partner for the Data Use Innovations sessions. Learn more about the process and learning from the MEASURE Evaluation workshops in Applying User Centered Design to Data Use Challenges: What we Learned and see our slides from our MERL Tech session “The Simpsons, Design, and Data Use” to learn more.

MERL Tech Round Up | October 2, 2017

We’ll be experimenting with a monthly round-up of MERL Tech related content (bi-weekly if there’s enough to fill a post). Let us know if it’s useful! We aim to keep it manageable and varied, rather than a laundry list of every possible thing. The format, categories, and topics will evolve as we see how it goes and what the appetite is.

If you have anything you’d like to share or see featured, feel free to send it on over or post on Twitter using the #MERLTech hashtag.

On the MERL Tech Blog:

Big Data in Evaluation – Michael Bamberger discusses the future of development evaluation in the age of Big Data and ways to build bridges between evaluators and Big Data analysis. Rick Davies (Monitoring and Evaluation News) raises some great points in the comments (and Michael replies).

Experiences with mobile case management for multi-dimensional accountability from Oxfam and SurveyCTO.

Thoughts on MERL Tech Maturity Models & Next Generation Transparency & Accountability from Megan Colnar (Open Society Foundations) and Alison Miranda (Transparency and Accountability Initiative).

The best learning at MERL Tech DC came from sharing failures from Ambika Samarthya-Howard (Praekelt.org).

We’ll be posting more MERL Tech DC summaries and wrap-up posts over the next month or two. We’re also gearing up for MERL Tech London coming up in March 2018. Stay tuned for more information on that.

Stuff we’re reading / watching:

New research (Making All Voices Count research team) on ICT-mediated citizen engagement. What makes it transformative? 

Opportunities and risks in emerging technologies, including white papers on Artificial Intelligence; Algorithmic Accountability; and Control of Personal Data (The Web Foundation).

Research on Privacy, Security, and Digital Inequality: How Technology Experiences and Resources Vary by Socioeconomic Status, Race, and Ethnicity in the United States from Mary Madden (Data & Society).

Tools, frameworks and guidance we’re bookmarking:

A framework for evaluating inclusive technology, technology for social impact and ICT4D programming (SIMLab) and an example of its application. The framework is open source, so you can use and adapt it!

A survey tool and guidance for assessing women’s ICT access and use (FHI 360’s mSTAR project). (Webinar coming up on Oct 10th.)

Series on data management (DAI) covering 1) planning and collecting data; 2) managing and storing data; and 3) getting value and use out of the data that’s collected through analysis and visualization.

Events and training:

Webinar on using ICT in monitoring and evaluation of education programming for refugee populations (USAID and INEE). Recording and presentations from the Sep 28th event here.

Webinar on assessing women’s ICT access and use, Oct 10th (NetHope, USAID and mSTAR/FHI 360).

Let us know of upcoming events we should feature.

Jobs:

Send us vacancies for MERL Tech-related jobs, consultants, RFPs and we’ll help spread the word.

Failures are the way forward

By Ambika Samarthya-Howard, Head of Communications at Praekelt.org. This post also appears on the Praekelt.org blog.

Marc Mitchell, President of D-Tree International, gives his Lightning Talk: “When the Control Group Wins.”

Attending conferences often reminds me of dating: you put your best foot forward and do yourself up, and hide the rest for a later time. I always found it refreshing when people openly put their cards on the table.

I think that’s why I especially respected and enjoyed my experience at MERL Tech in DC last week. One of the first sessions I went to was a World Café-style breakout exploring how to be inclusive in M&E tech in the field. The organisations leading it, like GlobalGiving and Keystone, posed hard questions about community involvement in data collection at scale, and about how to involve people who are less familiar with technology or have less access to it. They didn’t have any of the answers. They wanted to learn from us.

This was followed after lunch by lightning talks, in which organisations gave short presentations. One organisation spoke very openly about how much money and time it was wasting on its data collection technologies. Another confessed its lack of structure and organisation, and talked about collaborating with organisations like DataKind to make sense of its data. Anahi Ayala Iacucci from Internews gave a presentation on the pitfalls and realities of M&E: “we all skew the results in order to get the next round of funding.” She fondly asked us to work together so we could “stop farting in the wind”. D-Tree International spoke about a trial of transport vouchers for pregnant women in Zanzibar, in which the control group that did not receive any funding actually did better. They had to stop funding the vouchers.

The second day I attended an entire session where we looked at existing M&E reports available online to critique their deficiencies and identify where the field was lacking in knowledge dissemination. As a Communications person, looking at the write-ups of the data ironically gave me instant insight into ways forward and where gaps could be filled — which I believe is exactly what the speakers of the session intended. When you can so clearly see why and how things aren’t working, it actually inspires a different approach and way of working.

I was thoroughly impressed with the way people shared at MERL Tech. When you see an organisation able to talk so boldly about its learning curves or gaps, you respect their work, growth, and learnings.  And that is essentially the entire purpose of a conference.

Back to dating… and partnerships. Sooner or later, if the relationship works out, your partner is going to see you in the a.m. for who you really are. Why not cut to the chase, and go in with your natural look?  Then you can take the time to really do any of the hard work together, on the same footing.

Mobile Case Management for Multi-Dimensional Accountability

This is a cross-post from Christopher Robert of Dobility. It was originally published September 13 on the SurveyCTO blog.

At MERL Tech DC 2017, Oxfam’s Emily Tomkys Valteri and I teamed up to lead a session on Mobile case management for multi-dimensional accountability. This blog post shares some highlights from that session. [Note: session slides are available here]

Background

In their Your Word Counts project, Oxfam is collaborating with local and global partners to capture, analyze, and respond to community feedback data using a mobile case management tool. The goal is to inform Oxfam’s Middle East humanitarian response and give those affected by crisis a voice for improved support and services. This project is a scale-up of an earlier pilot project, and both the pilot and the scale-up have been supported by the Humanitarian Innovation Fund.

Oxfam’s use of SurveyCTO’s case-management features has been innovative, and they have been helping to support improvements in the core technology. In this session, we discussed both the core technology and the broader organizational and logistical challenges that Oxfam has encountered in the field.

Mobile case management: an introduction 

In standard applications of mobile data collection, enumerators, inspectors, program officers, or others use a mobile phone or tablet to collect data. Whether they quietly observe things, interview people, or inspect facilities, they ultimately enter some kind of data into a mobile device. In systems like SurveyCTO, data collection officially begins when they click a Fill Blank Form button and choose a digital form to fill out.

Mobile data collection

Mobile case management is much the same, but the process begins with cases and then proceeds to forms. As far as the core technology is concerned, a case might be a clinic, a school, a water point, a household – pretty much any unit that’s meaningful in the given context. Instead of clicking Fill Blank Form and choosing a form, users in the field choose Manage Cases and then select a particular case from a list that’s filtered specifically for that user (e.g., to include only schools in their area); once they select a case, they then select one of the forms that is outstanding for that case.

Mobile case management

Behind the scenes, the case list is really just a spreadsheet. It includes columns for the unique case ID, the label that should be used to identify the case to users, the list of forms that should be filled for the case, and the users and/or user roles that should see the case listed in their case list. Importantly, the case list is not static: any form can update or add a case, and thus as users fill forms the case list can be dynamically revised and extended. (In SurveyCTO, the case list is simply a server dataset: it can be manually uploaded as a .csv, attached to forms, and updated just like any other dataset.)

Mobile case management: case list
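
As a rough sketch of that structure (our own illustration; the field names are simplified stand-ins, not SurveyCTO’s exact dataset schema), a case list and its per-user filtering might look something like this:

    # Illustrative sketch of a case list and its device-side filtering.
    # Field names here are simplified stand-ins, not SurveyCTO's exact
    # dataset schema.
    CASE_LIST = [
        {"id": "C-001", "label": "Northside Clinic",
         "forms": ["intake_form", "followup_form"], "users": "enumerator_1"},
        {"id": "C-002", "label": "Riverside School",
         "forms": ["intake_form"], "users": "enumerator_2"},
        {"id": "C-003", "label": "Hilltop Water Point",
         "forms": ["followup_form"], "users": "enumerator_1"},
    ]

    def cases_for_user(case_list, user):
        """Return only the cases a given field user should see, mimicking
        the filtered case list shown on that user's device."""
        return [case for case in case_list if case["users"] == user]

    # An enumerator opens Manage Cases and sees only their own cases,
    # each listing the forms still outstanding for it.
    for case in cases_for_user(CASE_LIST, "enumerator_1"):
        print(case["id"], case["label"], "- outstanding:", case["forms"])

Because any submitted form can add or update rows like these, the case list is continuously revised as work happens in the field.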

Oxfam’s innovative use case: Your Word Counts 

Oxfam accountability feedback loop. Diagram credit: Oxfam GB.

In Oxfam’s Your Word Counts project, cases represent any kind of feedback from the community. Volunteers and program staff carry mobile phones and log feedback as new cases whenever they interact with community members; technical teams then work to resolve feedback within a week, filling out new forms to update cases as their status changes; and program staff then close the loop with the original community members when possible, before closing the case. Because the data is all available in a single electronic system, in-country, regional, and even global teams can then report on and analyze both the community feedback and the responses over time.
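
To make that flow concrete, here is a minimal sketch of a feedback case moving through a sequence of statuses; the status names are our own invention for illustration, not Oxfam’s actual feedback categories:

    # Illustrative sketch of the feedback-case lifecycle described above.
    # The status names are invented for the example, not Oxfam's actual
    # feedback categories.
    STATUSES = ["logged", "referred", "resolved", "loop_closed", "closed"]

    def advance(case):
        """Move a case to its next status, as each new form updates it."""
        next_index = min(STATUSES.index(case["status"]) + 1, len(STATUSES) - 1)
        case["status"] = STATUSES[next_index]
        return case

    case = {"id": "F-042", "feedback": "Water point pump is broken", "status": "logged"}
    for _ in range(4):
        print(advance(case))

In the real system, each transition corresponds to a form filled against the case, which is what keeps in-country, regional, and global teams looking at the same up-to-date record.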

There have been some definite successes in piloting and early scale-up:

  • By listening to community members, recording their feedback, and following up, the community feedback system has helped to build trust.
  • The digital process of recording referrals, updates, and eventually responses has been rapid, speeding responsiveness to feedback overall.
  • Since all digital forms can be updated easily, the system is dynamic and flexible enough to adapt as programs or needs change.
  • The solution appears to be low-cost, scalable, and sustainable.

There have been both organizational and logistical challenges, however. For example:

  • For a system like this to truly be effective, fundamental responsibility for accountability must be shared organization-wide. While MEAL officers (monitoring, evaluation, accountability, and learning officers) can help to set up and manage accountability systems, technical teams, program teams, and senior leadership ultimately have to share ownership and responsibility in order for the system to function and be sustained.
  • Globally-predefined feedback categories turned out not to fit well with early deployment contexts, and so the program team needed to re-think how to most effectively categorize feedback. (See Oxfam’s blog post on the subject.)
  • In dynamic in-country settings, staff turnover can be high, posing major logistical and sustainability challenges for systems of all kinds.
  • While community members can add and update cases offline, ultimately an Internet connection is required to synchronize case lists with a central server. In some settings, access to office Internet has been a challenge.
  • Ideally, cases would be easily referred across agencies working in a particular setting, but some agencies have been reluctant to buy into shared digital systems.

Oxfam’s MEAL team is exploring ways to facilitate a broader accountability culture throughout the organization. In country programs, for example, MEAL coordinators are looking to use office whiteboards to track key indicators of feedback performance and engage staff in discussions of what those indicators mean for them. More broadly, Oxfam is looking to highlight best practices in responding and acting on feedback and seeking other ways to incentivize teams in this area.

Oxfam’s work is ongoing, and you can follow their progress on their project blog.

Mobile case management: Where it’s going 

While Oxfam works to build and support both systems and culture for accountability in their humanitarian response programs, we at Dobility are working to improve the core technology. With Oxfam’s feedback and support, we are currently working to improve the user interface used to filter and browse case lists, both on devices (in the field) and on the web (in the office). We are also working to improve the user interface for those setting up and managing these kinds of case-management systems. If you have specific ideas, please share them by commenting below!

Maturity Models: Visualizing Progress Towards Next-Generation Transparency and Accountability

By Alison Miranda (TAI) and Megan Colnar (Open Society Foundations). This is a cross-post of a piece published on September 17th on the Transparency and Accountability Initiative’s blog.

How can we assess progress on a second-generation way of working in the transparency, accountability and participation (TAP) field? Monitoring, evaluation, research, and learning (MERL) maturity models can provide some inspiration. The 2017 MERL Tech conference in Washington, DC was a two-day bonanza of lightning talks, plenary sessions, and hands-on workshops among participants who use technology for MERL.

Here are key conference takeaways from two MEL practitioners in the TAP field.

1. Making open data useful

Several MERL Tech sessions resonated deeply with the TAP field’s efforts to transition from fighting for transparent and open data towards linking open data to accountability and governance outcomes. Development Gateway and InterAction drew on findings from “Avoiding Data Graveyards” as we explored progress and challenges for International Aid Transparency Initiative (IATI) data use cases. While transparency is an important value, what is gained (or lost) in data use for collaboration when there are many different potential data consumers?

A partnership between Freedom House and DataKind is moving the Freedom in the World study towards a more transparent display of index sub-indicators, and building a more robust – and usable! – data set by reformatting and integrating their data and other secondary big data sets. What could such an initiative yield for the Extractive Industry Transparency Initiative (EITI), for example, if equivalent data sets were available?

And finally, as TAP practitioners are keenly aware, power and politics can overshadow evidence in decision making. Another Development Gateway presentation reminded us that it is important to work with data producers and users to identify decisions that are (or can be) data-driven, and to recognize when other factors are driving decisions. (The incentives to supply open data are a whole other can of worms!)

Drawing on our second-generation TAP approach, more work is needed for the TAP and MERL fields to move from “open data everywhere, all of the time” to planning for, and encouraging, more effective data use.

2. Tech for MERL for improved policy, practice, and outcomes

Among our favorite moments at MERL Tech was when Dobility Founder and CEO Christopher Robert remarked that “the most interesting innovations at MERL Tech aren’t the new, cutting-edge technology developments, but generic technology applied in innovative ways.” Unsurprising for a tech company focused on using affordable technology to enable quality data collection for social impact, but a refreshing reminder amidst the talk of ‘AI’, ‘chatbots’, and ‘blockchains’ for development coursing through the conference.

The TAP field is certainly not a stranger to employing technology from apps to curb trade corruption in Nigeria to Citizen Helpdesks in Nepal, Liberia, and Mali to crowdsourced political campaign expenditure monitoring in Bolivia, but our second-generation TAP insights remind us technology tools are not an end in themselves. MERL and technology are our means for collecting effective data, generating important insights and learning, building larger movements, and gathering context-specific evidence on transparency and accountability.

We are undoubtedly on the precipice of revolutionary technological advancements that can be readily (and maybe even affordably) deployed[1] to solve complex global challenges, but they will still be tools and not solutions.

3. Balancing inclusion and participation with efficiency and accuracy

We explored a constant conundrum for MERL: how to balance inclusion and participation with efficiency and accuracy. Girl Effect and Praekelt Foundation took “mixed methods” research to another level, combining online and offline efforts to understand user needs of adolescent girls and to support user-developed digital media content. Their iterative process showcased an effective way to integrate tech into the balancing act of inclusive – and holistic – design, paired with real-time data use.

This session on technology in citizen-generated data brought to light two case studies of how tech can both help and hinder this balancing act. The World Café discussions underscored the importance of planning for – and recognizing the constraints on – feedback loops. They also provided a helpful reminder that MERL and tech professionals are often considering different “end users” in their design work!

So, which is it – balancing act or zero-sum game between inclusion and efficiency? The MERL community has long applied participatory methods. And tech solutions abound that can help with efficiency, accuracy, and inclusion. Indeed, the second-generation TAP focus on learning and collaboration is grounded in effective data use – but there are many potential “end users” to consider. These principles and practices can force uncomfortable compromises – particularly in the face of finite resources and limited data availability – but they are not at odds with each other. Perhaps the MERL and TAP communities can draw lessons from each other in striking the right balance.

4. Tech sees no development sector silos

One of the things that makes MERL Tech such an exciting conference is the deliberate mixing of tech nerds with MERL nerds. It’s pretty unique in its dual targeting of both types of professionals, who share a common purpose of social impact (whereas conferences like ICT4D cast a wider net, looking at the application of technology to broader development issues). And, though we MERL professionals like to think of design and iteration as squarely within our wheelhouse, being in a room full of tech experts can quickly remind you that our adaptation game has a lot of room to grow. We talk about user-centered design in TAP, but when the tech crowd was asked in plenary, “would you even think of designing software or an app without knowing who was going to use it?”, they responded with a loud and exuberant laugh.

Tech has long employed systematic approaches to user-centered design, prototyping, iteration, and adaptation, all of which can offer compelling lessons to guide MERL practices and methods. Though we know Context is King, it is exhilarating to know that the tech specialists assembled at the conference work across traditional silos of development work (from health to corruption, and everything in between). End users are, of course, crucial to the final product but the life cycle process and steps follow a regular pattern, regardless of the topic area or users.

The second-generation wave in TAP similarly moved away from project-specific, fragmented, or siloed planning and learning towards a focus on collective approaches and long-term, more organic engagement.

American Evaluation Association President Kathy Newcomer quipped that maybe an ‘Academy Awards for Adaptation’ could inspire better-informed and more adept evolutions of strategy as circumstances and context shift around us. Adding to this, and borrowing from the tech community, we wonder where we can build more room to beta test, follow real demand, and fail fast. Are we looking towards other sectors and industries enough, or continuing to reinvent the wheel?

Alison left thinking:

  • Concepts and practices are colliding across the overlapping MERL, tech, and TAP worlds! In leading the Transparency and Accountability Initiative’s learning strategy, and supporting our work on data use for accountability, I often find myself toggling between different meanings of ‘data’, ‘data users’, and tech applications that can enable both of these themes in our strategy. These worlds don’t have to be compatible all the time, and concepts don’t have to compute immediately (I am personally still working out hypothetical blockchain applications for my MERL work!). But this collision of worlds is a helpful reminder that there are many perspectives to draw from in tackling accountable governance outcomes.
  • Maturity models come in all shapes and sizes, as we saw in the creative depictions created at MERL Tech, which included steps, arrows, paths, circles, cycles, and carrots! And the transparency and accountability field is collectively pursuing a next generation of more effective practice that will take unique turns for different accountability actors and outcomes. Regardless of what our organizational or programmatic models look like, MERL Tech reminded me that champions of continuous improvement are needed at all stages of the model – in MERL, in tech for development, and in the TAP field.

Megan left thinking:

  • That I am beginning to feel like I’m in a Dr. Seuss book. We talked ‘big data’, ‘small data’, ‘lean data’, and ‘thick data’. Such jargon-filled conversations can be useful for communicating complex concepts simply with others. Ah, but this is also the problem. This shorthand glosses over the nuances that explain what we actually mean. Jargon is also exclusive—it clearly defines the limits of your community and makes it difficult for newcomers. In TAP, I can’t help but see missed opportunities for connecting our work to other development sectors. How can health outcomes improve without holding governments and service providers accountable for delivering quality healthcare? How can smallholder farmers expect better prices without governments budgeting for and building better roads? Jargon is helpful until it divides us up. We have collective, global problems, and we need to figure out how to talk to each other if we’re going to solve them.
  • In general, I’m observing a trend towards organic, participatory, and inclusive processes—in MERL, in TAP, and across the board in development and governance work. This is, almost universally speaking, a good thing. In MERL, a lot of this movement is a backlash against randomistas and the imposition of The RCT Gold Standard on social impact work. And, while I confess to being overjoyed that the “RCT-or-bust” mindset is fading out, I can’t help but think we’re on a slippery slope. We need scientific rigor, validation, and objective evidence. There has to be a line between ‘asking some good questions’ and ‘conducting an evaluation’. Collectively, we are working to eradicate unjust systems and eliminate poverty, and these issues require not just our best efforts and intentions, but workable solutions. Listen to Freakonomics’ recent podcast When Helping Hurts and commit with me to find ways to keep participatory and inclusive evaluation techniques rigorous and scientific, too.

[1] https://channels.theinnovationenterprise.com/articles/ai-in-developing-countries

Buckets of data for MERL

by Linda Raftree, Independent Consultant and MERL Tech Organizer

It can be overwhelming to get your head around all the different kinds of data and the various approaches to collecting or finding data for development and humanitarian monitoring, evaluation, research and learning (MERL).

Though there are many ways of categorizing data, lately I find myself conceptually organizing data streams into four general buckets when thinking about MERL in the aid and development space:

  1. ‘Traditional’ data. How we’ve been doing things for (pretty much) ever. Researchers, evaluators and/or enumerators are in relative control of the process. They design a specific questionnaire or a data gathering process and go out and collect qualitative or quantitative data; they send out a survey and request feedback; they do focus group discussions or interviews; or they collect data on paper and eventually digitize the data for analysis and decision-making. Increasingly, we’re using digital tools for all of these processes, but they are still quite traditional approaches (and there is nothing wrong with traditional!).
  2. ‘Found’ data. The Internet, digital data and open data have made it lots easier to find, share, and re-use datasets collected by others, whether internally in our own organizations, with partners or just in general. These tend to be datasets collected in traditional ways, such as government or agency data sets. In cases where the datasets are digitized and have proper descriptions, clear provenance, consent has been obtained for use/re-use, and care has been taken to de-identify them, they can eliminate the need to collect the same data over again. Data hubs are springing up that aim to collect and organize these data sets to make them easier to find and use.
  3. ‘Seamless’ data. Development and humanitarian agencies are increasingly using digital applications and platforms in their work — whether bespoke or commercially available ones. Data generated by users of these platforms can provide insights that help answer specific questions about their behaviors, and the data is not limited to quantitative data. This data is normally used to improve applications and platform experiences, interfaces, content, etc., but it can also provide clues into a host of other online and offline behaviors, including knowledge, attitudes, and practices. One cautionary note is that because this data is collected seamlessly, users of these tools and platforms may not realize that they are generating data or understand the degree to which their behaviors are being tracked and used for MERL purposes (even if they’ve checked “I agree” to the terms and conditions). This has big implications for privacy that organizations should think about, especially as new regulations are developed, such as the EU’s General Data Protection Regulation (GDPR). The commercial sector is great at this type of data analysis, but the development set are only just starting to get more sophisticated at it.
  4. ‘Big’ data. In addition to data generated ‘seamlessly’ by platforms and applications, there are also ‘big data’ and data that exist on the Internet that can be ‘harvested’ if one only knows how. The term ‘big data’ describes the application of analytical techniques to search, aggregate, and cross-reference large data sets in order to develop intelligence and insights. (See this post for a good overview of big data and some of the associated challenges and concerns.) Data harvesting is a term used for the process of finding and turning ‘unstructured’ content (message boards, a webpage, a PDF file, Tweets, videos, comments) into ‘semi-structured’ data so that it can then be analyzed; a minimal sketch of this step follows this list. (Estimates are that 90 percent of the data on the Internet exists as unstructured content.) Currently, big data seems to be more apt for predictive modeling than for looking backward at how well a program performed or what impact it had. Development and humanitarian organizations (self included) are only just starting to better understand the concepts around big data and how it might be used for MERL. (This is a useful primer.)
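
As a toy illustration of that harvesting step, the sketch below turns a few ‘unstructured’ comments into ‘semi-structured’ records. The comments and the extracted fields are invented for the example; real harvesting deals with far messier sources and far larger volumes:

    # Toy sketch: turning 'unstructured' text into 'semi-structured' records.
    # The comments and the extracted fields are invented for illustration.
    import re

    comments = [
        "Clinic in Kisumu was closed on 2017-08-14, walked two hours for nothing",
        "Great service at the Eldoret water point on 2017-09-02!",
    ]

    records = []
    for text in comments:
        date_match = re.search(r"\d{4}-\d{2}-\d{2}", text)
        records.append({
            "date": date_match.group(0) if date_match else None,
            "negative": any(word in text.lower() for word in ("closed", "nothing")),
            "text": text,
        })

    for record in records:
        print(record)

Once in this semi-structured form, the records can be aggregated, searched, and cross-referenced like any other dataset.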

Thinking about these four buckets of data can help MERL practitioners to identify data sources and how they might complement one another in a MERL plan. Categorizing them as such can also help to map out how the different kinds of data will be responsibly collected/found/harvested, stored, shared, used, and maintained/retained/destroyed. Each type of data also has certain implications in terms of privacy, consent and use/re-use and how it is stored and protected. Planning for the use of different data sources and types can also help organizations choose the data management systems needed and identify the resources, capacities and skill sets required (or needing to be acquired) for modern MERL.

Organizations and evaluators are increasingly comfortable using mobiles and/or tablets to do traditional data gathering, but they often are not using ‘found’ datasets. This may be because these datasets are not very ‘findable’, because organizations are not creating them, because re-using data is not a common practice for them, because the data are of questionable quality/integrity, because there are no descriptors, or for a variety of other reasons.

The use of ‘seamless’ data is something that development and humanitarian agencies might want to get better at. Even though large swaths of the populations we work with are not yet online, this is changing. And if we are using digital tools and applications in our work, we shouldn’t let that data go to waste if it can help us improve our services or better understand the impact and value of the programs we are implementing. (At the very least, we had better understand what seamless data the tools, applications and platforms we’re using are collecting, so that we can manage the data privacy and security of our users and ensure these are not being violated by third parties!)

Big data is also new to the development sector, and there may be good reason it is not yet widely used. Many of the populations we work with are not producing much data — though this too is changing, as digital financial services and mobile phone use have become almost universal and the use of smartphones is on the rise. Normally, organizations require new knowledge, skills, partnerships and tools to access and use existing big data sets or to do any data harvesting. Some say that big data, along with ‘seamless’ data, will one day replace our current form of MERL. As artificial intelligence and machine learning advance, who knows… (and it’s not only MERL practitioners who will be out of a job – but that’s a conversation for another time!)

Not every organization needs to be using all four of these kinds of data, but we should at least be aware that they are out there and consider whether they are of use to our MERL efforts, depending on what our programs look like, who we are working with, and what kind of MERL we are tasked with.

I’m curious how other people conceptualize their buckets of data, and where I’ve missed something or defined these buckets erroneously…. Thoughts?

Community-led mobile research – What could it look like?

Adam Groves, Head of Programs at On Our Radar, gave a presentation at MERL Tech London in February where he elaborated on a new method for collecting qualitative ethnographic data remotely.

The problem On Our Radar sought to confront, Adam explained, is the cold and impenetrable bureaucratic machinery of complex organizations. To many people, the unresponsiveness and inhumanity of the bureaucracies that provide them with services is dispiriting, and this is a challenge to overcome for anyone who wants to provide a quality service.

On Our Radar’s solution is to enable people to share their real-time experiences of services by recording audio and SMS diaries with their basic mobile phones. Because of the intimacy they capture, these first-person accounts have the capacity to grab the attention of the people behind services and make them listen to and experience the customer’s thoughts and feelings as they happened.

Responses obtained from audio and SMS diaries are different from those obtained from other qualitative data collection methods because, unlike solutions that crowdsource feedback, these diaries contain responses from a small group of trained citizen reporters that share their experiences in these diaries over a sustained period of time. The product is a rich and textured insight into the reporters’ emotions and priorities. One can track their journeys through services and across systems.

On Our Radar worked with British Telecom (BT) to implement this technique. The objective was to help BT understand how their customers with dementia experience their services. Over a few weeks, forty people living with dementia recorded audio diaries about their experiences dealing with big companies.

Adam explained how the audio diary method was effective for this project:

  • Because diaries and dialogues are in real time, they captured emotional highs and lows (such as the anxiety of picking up the phone and making a call) that would not be recalled in after-the-fact interviews.
  • Because diaries are focused on individuals and their journeys instead of on discrete interactions with specific services, they showed how encountering seemingly unrelated organizations or relationships impacted users’ experiences of BT. For example, cold calls became terrifying for people with dementia and made them reluctant to answer the phone for anyone.
  • Because this method follows people’s experiences over time, it allows researchers to place individual pain points and problems in the context of a broader experience.
  • Because the data is in first person and in the moment, it moved people emotionally. Data was shared with call center staff and managers, and they found it compelling. It was an emotional human story told in one’s own words. It invited decision makers to walk in other people’s shoes.

On Our Radar’s future projects include working in Sierra Leone with local researchers to understand how households are changing their practices post-Ebola and a major piece of research with the London School of Hygiene and Tropical Medicine in Malaysia and the Philippines to gain insight on people’s understanding of their health systems.

For more, find a video of Adam’s original presentation below!