MERL Tech News

MERL Tech and the World of ICT Social Entrepreneurs (WISE)

by Dale Hill, an economist/evaluator with over 35 years' experience in development and humanitarian work. Dale led the session on "The growing world of ICT Social Entrepreneurs (WISE): Is social impact significant?" at MERL Tech DC 2018.

Roger Nathanial Ashby of OpenWise and Christopher Robert of Dobility share experiences at MERL Tech.

What happens when evaluators trying to build bridges with new private sector actors meet real social entrepreneurs? A new appreciation for the dynamic "World of ICT Social Entrepreneurs (WISE)" and the challenges they face in marketing, pricing, and financing (not to mention measuring social impact).

During this MERL Tech session on WISE, Dale Hill, evaluation consultant, presented grant-funded research on measuring the social impact of social entrepreneurship ventures (SEVs) from three perspectives. She then invited five ICT company CEOs to comment.

The three perspectives are:

  • the public, which wants to hold companies accountable, particularly those that have chosen to become legal or certified "benefit corporations";
  • the social entrepreneurs, who are plenty occupied trying to reach financial sustainability or profit goals, while also serving the public good; and
  • evaluators, who see the important influence of these new actors, but know their professional tools need adaptation to capture their impact.

Dale's introduction covered overlapping definitions of various categories of SEVs, including legally defined "benefit corporations" and "B Corps", which are intertwined with the certification options available to social entrepreneurs. The "new middle" of SEVs sits on a spectrum between for-profit companies on one end and not-for-profit organizations on the other. Various types of funders, including social impact investors, new certification agencies, and monitoring and evaluation (M&E) professionals, are now interested in measuring the growing social impact of these enterprises. A show of hands revealed that representatives of most of these types of actors were present at the session.

The five social entrepreneur panelists all had ICT businesses with global reach, but they varied in legal and certification status and in years of operation (1 to 11). All aimed to deploy new technologies to non-profit organizations or social sector agencies on high-value, low-price terms. Some had worked in non-profits in the past and hoped that venture capital would prove easier to obtain than grant funding. Others had worked for government and observed the need for customized solutions, which required market incentives to fully develop.

The evaluator and CEO panelists’ identification of challenges converged in some cases:

  • maintaining affordability and quality when using market pricing
  • obtaining venture capital or other financing
  • worry over "mission drift" – the risk that financial sustainability imperatives or shareholder profit maximization preferences will prevail over founders' social impact goals; and
  • the still-present digital divide when serving global customers (insufficient bandwidth, affordability issues, and limited small business capital in some client countries).

New issues raised by the CEOs (and some social entrepreneurs in the audience) included:

  • the need to provide incentives to customers to use quality assurance or security features of software, to avoid falling short of achieving the SEV’s “public good” goals;
  • the possibility of hostile takeover, given high value of technological innovations;
  • the fact that mention of a “social impact goal” was a red flag to some funders who then went elsewhere to seek profit maximization.

There was also a rich discussion on the benefits and costs of obtaining certification: it was a useful “branding and market signal” to some consumers, but a negative one to some funders; also, it posed an added burden on managers to document and report social impact, sometimes according to guidelines not in line with their preferences.

Surprises?

a) Despite the “hype”, social impact investment funding proved elusive to the panelists. Options for them included: sliding scale pricing; establishment of a complementary for-profit arm; or debt financing;

b) Many firms were not yet implementing planned monitoring and evaluation (M&E) programs, despite M&E being one of their service offerings; and

c) Legislation on reporting the social impact of benefit corporations varies considerably among the 31 states that have enacted it, and the degree of enforcement is not clear.

A conclusion for evaluators: social entrepreneurs' use of market solutions creates a dynamic, evolving environment that poses more complex challenges for measuring social impact. It calls for new criteria and tools, ideally timed with an understanding of market ups and downs and developed with the full participation of the business managers.

Tools, tips and templates for making Responsible Data a reality

by David Leege, CRS; Emily Tomkys, Oxfam GB; Nina Getachew, mSTAR/FHI 360; and Linda Raftree, Independent Consultant/MERL Tech, who led the session "Tools, tips and templates for making responsible data a reality."

The data lifecycle.

For this year’s MERL Tech DC, we teamed up to do a session on Responsible Data. Based on feedback from last year, we knew that people wanted less discussion on why ethics, privacy and security are important, and more concrete tools, tips and templates. Though it’s difficult to offer specific do’s and don’ts, since each situation and context needs individualized analysis, we were able to share a lot of the resources that we know are out there.

To kick off the session, we quickly explained what we meant by Responsible Data. Then we handed out some cards from Oxfam’s Responsible Data game and asked people to discuss their thoughts in pairs. Some of the statements that came up for discussion included:

  • Being responsible means we can’t openly share data – we have to protect it
  • We shouldn’t tell people they can withdraw consent for us to use their data when in reality we have no way of doing what they ask
  • Biometrics are a good way of verifying who people are and reducing fraud

Following the card game, we asked people to gather around four tables, each with a die and a printout of the data lifecycle in which each phase corresponded to a number (planning = 1, collecting = 2, storage = 3, and so on). Each person rolled the die and, based on their number, told a "data story" about an experience, concern or data failure related to that phase of the lifecycle. Then the group discussed the stories.

For our last activity, each of us took a specific pack of tools, templates and tips and rotated around the 4 tables to share experiences and discuss practical ways to move towards stronger responsible data practices.

Responsible data values and principles

David shared Catholic Relief Services’ process of developing a responsible data policy, which they started in 2017 by identifying core values and principles and how they relate to responsible data. This was based on national and international standards such as the Humanitarian Charter including the Humanitarian Protection Principles and the Core and Minimum Standards as outlined in Sphere Handbook Protection Principle 1; the Protection of Human Subjects, known as the “Common Rule” as laid out in the Department of Health and Human Services Policy for Protection of Human Research Subjects; and the Digital Principles, particularly Principle 8 which mandates that organizations address privacy and security.

As a Catholic organization, CRS follows the principles of Catholic social teaching, which directly relate to responsible data in the following ways:

  • Sacredness and dignity of the human person – we will respect and protect an individual’s personal data as an extension of their human dignity;
  • Rights and responsibilities – we will balance the right to be counted and heard with the right to privacy and security;
  • Social nature of humanity – we will weigh the benefits and risks of using digital tools, platforms and data;
  • Common good – we will open data for the common good only after minimizing the risks;
  • Subsidiarity – we will prioritize local ownership and control of data for planning and decision-making;
  • Solidarity – we will work to educate, inform and engage our constituents in responsible data approaches;
  • Option for the poor – we will take a preferential option for protecting and securing the data of the poor; and
  • Stewardship – we will responsibly steward the data that is provided to us by our constituents.

David shared a draft version of CRS’ responsible data values and principles.

Responsible data policy, practices and evaluation of their roll-out

Oxfam released its Responsible Program Data Policy in 2015. Since then, they have carried out six pilots to explore how to implement the policy in a variety of countries and contexts. Emily shared information on these pilots and the results of research carried out by the Engine Room called Responsible Data at Oxfam: Translating Oxfam's Responsible Data Policy into practice, two years on. The report concluded that staff who have engaged with Oxfam's Responsible Data Policy find it both practically relevant and important. One recommendation of this research was that Oxfam increase uptake amongst staff and provide an introductory guide to the area of responsible data.

In response, Oxfam created the Responsible Data Management pack (available in English, Spanish, French and Arabic), which includes the game that was played in today's session along with other tools and templates. The card game introduces some of the key themes and tensions inherent in making responsible data decisions. The examples on the cards are derived from real experiences at Oxfam and elsewhere, and they aim to generate discussion and debate. Oxfam's training pack also includes other tools, such as advice on taking photos, a data planning template, a poster of the data lifecycle and general information on how to use the training pack. Emily's session also encouraged discussion with participants about governance and accountability issues, like who in the organisation manages responsible data and how to make responsible data decisions when each context may require a different action.

Emily shared the following resources:

A packed house for the responsible data session.

Responsible data case studies

Nina shared early results of four case studies mSTAR is conducting together with Sonjara for USAID. The case studies are testing a draft set of responsible data guidelines, determining whether they are adequate for 'on the ground' situations and whether projects find them relevant, useful and usable. The guidelines were designed collaboratively, based on a thorough review and synthesis of the responsible data practices and policies of USAID and other international development and humanitarian organizations. To conduct the case studies, Sonjara, Nina and other researchers visited four programs that are collecting large amounts of potentially sensitive data in Nigeria, Kenya and Uganda. The researchers interviewed a broad range of stakeholders and looked at how the programs use, store, and manage personally identifiable information (PII). Based on the research findings, adjustments are being made to the guidelines, which are expected to be published in October.

Nina also talked about CALP/ELAN's data sharing tipsheets, which include a draft data-sharing agreement that organizations can adapt to their own contracting documents. She circulated a handout that identifies the core elements of the Fair Information Practice Principles (FIPPs) that are important to consider when using PII.

Responsible data literature review and guidelines

Linda mentioned that a literature review of responsible data policy and practice has been done as part of the above-mentioned mSTAR project (which she also worked on). The literature review will provide additional resources and analysis, including an overview of the core elements that should be included in organizational data guidelines, an overview of USAID policy and regulations, emerging legal frameworks such as the EU's General Data Protection Regulation (GDPR), and good practice on how to develop guidelines in ways that enhance uptake and use. The hope is that both the Responsible Data Literature Review and the Responsible Data Guidelines will be suitable for other organizations to adopt and adapt. The guidelines will offer a set of critical questions and orientation, but ethical and responsible data practices will always be context-specific and cannot be a "check-box" exercise, given the complexity of the elements that combine in each situation.

Linda also shared some tools, guidelines and templates that have been developed in the past few years, such as Girl Effect’s Digital Safeguarding Guidelines, the Future of Privacy Forum’s Risk-Benefits-Harms framework, and the World Food Program’s guidance on Conducting Mobile Surveys Responsibly.

More tools, tips and templates

Check out this responsible data resource list, which includes additional tools, tips and templates. It was developed for MERL Tech London in February 2017 and we continue to add to it as new documents and resources come out. After a few years of advocating for ‘responsible data’ at MERL Tech to less-than-crowded sessions, we were really excited to have a packed room and high levels of interest this year!   

How to buy M&E software and not get bamboozled

by Josh Mandell, a Director at DevResults where he leads strategy and business development. Josh can be reached at josh@devresults.com.

While there is no way to guarantee that M&E software will solve all of your problems or make all of your colleagues happy, there absolutely are things you can do during the discovery, procurement, and contracting stages to mitigate the risk of getting bamboozled.

#1 – Trust no one. Test everything.

Most development practitioners I speak with are balancing a heavy load of client work, internal programmatic and BD support, and other organizational initiatives. I can appreciate that time is scarce and that testing software you may not buy can feel like a giant waste of time.

However, when it comes to reducing uncertainty and building confidence in your decision, the single most productive use of your time is testing. When you don't test, what evidence do you have to base your decision on? The vendor's marketing and proposal materials. Don't take the BD guy's word for it, and whatever you do, don't trust screenshots, brochures, or proposals. Like a well-curated social media profile, marketing collateral gives you a sense of what's possible, but it probably isn't the most accurate reflection of reality. If you really want to understand usability, performance, and culture fit, you simply need to see for yourself.

We have found that the organizations that take the time to identify and test what they’ll actually be doing in DevResults are much better off than those who buy based on what they see in documentation and presentations, or based on someone else’s recommendation.

And it makes our lives easier too! We may have to spend a little more time upfront in the discovery and procurement phases, but by properly setting expectations early on, we have to provide far less support over the long-term. This makes for smoother, lower-cost implementations and happier customers.

#2 – Document what success looks like in plain language.

We obviously need contracts for defining the scope of work, payment terms, SLA, and other legalese, but the reality is that the people leading procurement and contracts are often not the people leading the day to day data operations.

Contracts are also typically dense and hard to use as a point of reference for frequent, human communication. So it's incredibly important that the implementation leads themselves define, in their own words, what success looks like, and that this definition is what drives the implementation.

It took us years to figure this out, but we’ve taken the lesson to heart. What we do now with each of our engagements is create an Implementation Charter that documents, in the words of the implementation leads, things like a summary baseline, roles and responsibilities, and a list of desired outcomes, i.e. ‘what success looks like.’ We then use the charter as the primary point of reference for determining whether or not we’re doing a good job and we evaluate ourselves against the charter quarterly.

Similar to the point about testing above, we have found this practice to dramatically increase transparency, properly set expectations, and establish more effective channels for communication, all of which are crucial in enterprise software implementations.

#3 – Plan for the long-haul and create the right financial incentives. Spread out the payments.

Whether at the project or organizational levels, M&E software implementations are long-term efforts. Unlike custom, external-facing websites where the bulk of work is done up front and the rest is mostly maintenance, enterprise software is constantly evolving. Rapidly changing technology and industry trends, shifting user requirements, and quality user experience all require persistent attention and ongoing development.

Your contract and payment structure should reflect that reality.

The easiest way to achieve this alignment is to spread the payments out over time. I'm not going to get into the merits of a software as a service (SaaS) business model here (we'll be putting another post out on that in the coming weeks), but suffice it to say that you get better service when your technology partner needs to continuously earn your money month after month and year after year.

This not only shifts the focus from checking boxes in a contract to delivering actual utility for users over the long-term, but it also hedges against the prospect of paying for unused software (or even paying for vaporware, as in the case of the BMGF case against Saama).

We know from experience that shifting to a new way of doing things can be difficult. We used to be a custom-web development shop and we did pretty well in that old model. The transition to a SaaS offering was painful because we had to work harder to earn our money and expectations went up dramatically. Nonetheless, we know the pain has been worth it because our customers are holding us to a different standard and it’s forcing us to deliver the best product we’re capable of. As a result, we’ll not only have happier customers, but a stronger, more sustainable business doing what we love.

Stop the bamboozling.

If you have any tips or recommendations for buying software, please share those in the comments below, or feel free to reach out to me directly. We’re always looking to share what we know and learn from others. Good luck!

MERL Tech London is coming up on March 20-21, 2018 — Submit your session ideas or register to attend!

Submit your session ideas for MERL Tech London by Nov 10th!


Please submit a session idea, register to attend, or reserve a demo table for MERL Tech London, on March 20-21, 2018, for in-depth sharing and exploration of what’s happening across the multidisciplinary monitoring, evaluation, research and learning field.

Building on MERL Tech London 2017, we will engage 200 practitioners from across the development and technology ecosystems for a two-day conference seeking to turn the theories of MERL technology into effective practice that delivers real insight and learning in our sector.

MERL Tech London 2018

Digital data and new media and information technologies are changing MERL practices. The past five years have seen technology-enabled MERL growing by leaps and bounds, including:

  • Adaptive management and ‘developmental evaluation’
  • Faster, higher quality data collection
  • Remote data gathering through sensors and self-reporting by mobile
  • Big Data and social media analytics
  • Story-triggered methodologies

Alongside these new initiatives, we are seeing increasing documentation and assessment of technology-enabled MERL initiatives. Good practice guidelines and new frameworks are emerging and agency-level efforts are making new initiatives easier to start, build on and improve.

The swarm of ethical questions related to these new methods and approaches has spurred greater attention to areas such as responsible data practice and the development of policies, guidelines and minimum ethical frameworks and standards for digital data.

Please submit a session idea, register to attend, or reserve a demo table for MERL Tech London to discuss all this and more! You'll have the chance to meet, learn from, and debate with 150-200 of your MERL Tech peers, and to see live demos of new tools and approaches to MERL.

Submit Your Session Ideas Now!

Like previous conferences, MERL Tech London will be a highly participatory, community-driven event and we’re actively seeking practitioners in monitoring, evaluation, research, learning, data science and technology to facilitate every session.

Please submit your session ideas now. We are particularly interested in:

  • Case studies: Sharing end-to-end experiences/learning from a MERL Tech process
  • MERL Tech 101: How-to use a MERL Tech tool or approach
  • Methods & Frameworks: Sharing/developing/discussing methods and frameworks for MERL Tech
  • Data: Big, large, small, quant, qual, real-time, online-offline, approaches, quality, etc.
  • Innovations: Brand new, untested technologies or approaches and their application to MERL(Tech)
  • Debates: Lively discussions, big picture conundrums, thorny questions, contentious topics related to MERL Tech
  • Management: People, organizations, partners, capacity strengthening, adaptive management, change processes related to MERL Tech
  • Evaluating MERL Tech: comparisons or learnings about MERL Tech tools/approaches and technology in development processes
  • Failures: What hasn’t worked and why, and what can be learned from this?
  • Demo Tables: to share MERL Tech approaches, tools, and technologies
  • Other topics we may have missed!

Session Submission Deadline: Friday, November 10, 2017.

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. You will hear back from us in early December and, if selected, you will be asked to submit an updated and final session title, summary and outline by Friday, January 19th, 2018.

Register Now!

Please register to attend, or reserve a demo table for MERL Tech London 2018 to examine these trends with an exciting mix of educational keynotes, lightning talks, and group breakouts, including an evening Fail Festival reception to foster needed networking across sectors.

We are charging a modest fee to better allocate seats and we expect to sell out quickly again this year, so buy your tickets or demo tables now. Event proceeds will be used to cover event costs and to offer travel stipends for select participants implementing MERL Tech activities in developing countries.

12 ways to ensure your data management implementation doesn’t become a dumpster fire

By Jason Rubin, PCI; Kate Mueller, Dev Results; and Mike Klein, ISG. They led the session "One system to rule them all? Balancing organization-wide data standards and project data needs."

Let's face it: failed information system implementations are not uncommon in our industry, and as a result, we often have a great deal of skepticism toward new tools and processes.

We addressed this topic head-on during our 2017 MERL Tech session, One system to rule them all?

The session discussed the tension between the need for enterprise data management solutions that can be used across the entire organization and solutions that meet the needs of specific projects. The three of us presented our lessons learned on this topic from our respective roles as M&E advisor, M&E software provider, and program implementer.

We then asked attendees to list their top do's and don'ts based on their own experiences, and reviewed the feedback to identify key themes.

Here’s a rundown on the themes that emerged from participants’ feedback:

Organizational Systems

Think of these as systems broadly—not M&E specific. For example: How do HR practices affect technology adoption? Does your organization have a federated structure that makes standard indicator development difficult? Do you require separate reporting for management and donor partners? These are all organizational systems that need to be properly considered before system selection and implementation. Top takeaways from the group include these insights to help you ensure your implementation goes smoothly:

1. Form Follows Function: This seems like an obvious theme, but since we received so much feedback about folks’ experiences, it bears repeating: define your goals and purpose first, then design a system to meet those, not the other way around. Don’t go looking for a solution that doesn’t address an existing problem. This means that if the ultimate goal for a system is to improve field staff data collection, don’t build a system to improve data visualization.

2. HR & Training: One of the areas our industry seems to struggle with is long-term capacity building and knowledge transfer around new systems. Suggestions in this theme were that training on information systems become embedded in standard HR processes with ongoing knowledge sharing and training of field staff, and putting a priority on hiring staff with adequate skill mixes to make use of information systems.

3. Right-Sized Deployment for Your Organization: There were a number of horror stories around organizations that tried to implement a single system simultaneously across all projects and failed because they bit off more than they could chew, or because the selected tool really didn’t meet a majority of their organization’s projects’ needs. The general consensus here was that small pilots, incremental roll-outs, and other learn-and-iterate approaches are a best practice. As one participant put it: Start small, scale slowly, iterate, and adapt.

M&E Systems

We wanted to get feedback on best and worst practices around M&E system implementations specifically—how tools should be selected, necessary planning or analysis, etc.

4. Get Your M&E Right: Resoundingly, participants stressed that a critical component of implementing an M&E information system is having well-organized M&E, particularly indicators. We received a number of comments about creating standardized indicators first, auditing and reconciling existing indicators, and so on.

5. Diagnose Your Needs: Participants also chorused the need for effective diagnosis of the current state of M&E data and workflows and what the desired end-state is. Feedback in this theme focused on data, process, and tool audits and putting more tool-selection power in M&E experts’ hands rather than upper management or IT.

6. Scope It Out: One of the flaws each of us has seen in our respective roles is having too generalized or vague of a sense of why a given M&E tool is being implemented in the first place. All three of us talked about the need to define the problem and desired end state of an implementation. Participants’ feedback supported this stance. One of the key takeaways from this theme was to define who the M&E is actually for, and what purpose it’s serving: donors? Internal management? Local partner selection/management? Public accountability/marketing?

Technical Specifications

The first two categories are more about the how and why of system selection, roll-out, and implementation. This category is all about working to define and articulate what any type of system needs to be able to do.

7. UX Matters: It seems like a lot of folks have had experience with systems that aren’t particularly user-friendly. We received a lot of feedback about consulting users who actually have to use the system, building the tech around them rather than forcing them to adapt, and avoiding “clunkiness” in tool interfaces. This feels obvious but is, in fact, often hard to do in practice.

8. Keep It Simple, Stupid: This theme echoed the Right-Sized Deployment for Your Organization: take baby steps; keep things simple; prioritize the problems you want to solve; and don’t try to make a single tool solve all of them at once. We might add to this: many organizations have never had a successful information system implementation. Keeping the scope and focus tight at first and getting some wins on those roll-outs will help change internal perception of success and make it easier to implement broader, more elaborate changes long-term.

9. Failing to Plan Is Planning to Fail: The consensus in feedback was that it pays to take more time upfront to identify user/system needs and figure out which are required and which are nice to have. If interoperability with other tools or systems is a requirement, think about it from day one. Work directly with stakeholders at all levels to determine specs and needs; conduct internal readiness assessments to see what the actual needs are; and use this process to identify hierarchies of permissions and security.

Change Management

Last, but not least, there’s how systems will be introduced and rolled out to users. We got the most feedback on this section and there was a lot of overlap with other sections. This seems to be the piece that organizations struggle with the most.

10. Get Buy-in/Identify Champions: Half the feedback we received on change management revolved around this theme. For implementations to be successful, you need both a top-down approach (buy-in from senior leadership) and a bottom-up approach (local champions/early adopters). To help facilitate this buy-in, participants suggested creating incentives (especially for management), giving local practitioners ownership, including programs and operations in the process, and not letting the IT department lead the initiative. The key here is that no matter which group the implementation ultimately benefits the most, having everyone on the same page about the implementation goals and why the organization needs the system is key.

11. Communicate: Part of how you get buy-in is to communicate early and often. Communicate the rationale behind why tools were selected, what they're good (and bad) at, and what the value and benefits of the tool are, and be transparent about the roll-out, what it hopes to achieve, and progress towards those goals. Consider things like behavior change campaigns, brown bags, etc.

12. Shared Vision: This is a step beyond communication: merely telling people what’s going on is not enough. There must be a larger vision of what the tool/implementation is trying to achieve and this, particularly, needs to be articulated. How will it benefit each type of user? Shared vision can help overcome people’s natural tendencies to resist change, hold onto “their” data, or cover up failures or inconsistencies.

MERL Tech Maturity Models

by Maliha Khan, a development practitioner in the fields of design, measurement, evaluation and learning, who led the Maturity Model sessions at MERL Tech DC, and Linda Raftree, independent consultant and lead organizer of MERL Tech.

MERL Tech is a platform for discussion, learning and collaboration around the intersection of digital technology and Monitoring, Evaluation, Research, and Learning (MERL) in the humanitarian and international development fields. The MERL Tech network is multidisciplinary and includes researchers, evaluators, development practitioners, aid workers, technology developers, data analysts and data scientists, funders, and other key stakeholders.

One key goal of the MERL Tech conference and platform is to bring people from diverse backgrounds and practices together to learn from each other and to coalesce MERL Tech into a more cohesive field in its own right — a field that draws from the experiences and expertise of these various disciplines. MERL Tech tends to bring together six broad communities:

  • traditional M&E practitioners, who are interested in technology as a tool to help them do their work faster and better;
  • development practitioners, who are running ICT4D programs and beginning to pay more attention to the digital data produced by these tools and platforms;
  • business development and strategy leads in organizations, who want to focus more on impact and keep their organizations up to speed with the field;
  • tech people, who are interested in the application of newly developed digital tools, platforms and services to the field of development, but may lack knowledge of the context and nuance of that application;
  • data people, who are focused on data analytics, big data, and predictive analytics, but similarly may lack a full grasp of the intricacies of the development field; and
  • donors and funders, who are interested in technology, impact measurement, and innovation.

Since our first series of Technology Salons on ICT and M&E in 2012 and the first MERL Tech conference in 2014, the aim has been to create stronger bridges between these diverse groups and encourage the formation of a new field with an identity of its own. In other words, we want to move people beyond identifying as, say, an "evaluator who sometimes uses technology," and towards identifying as members of the MERL Tech space (or field or discipline), with a clearer understanding of how these various elements work together and play off one another, and how they influence (and are influenced by) the shifts and changes happening in the wider ecosystem of international development.

By building and strengthening these divergent interests and disciplines into a field of their own, we hope that the community of practitioners can begin to better understand their own internal competencies and what they, as a unified field, offer to international development. This is a challenging prospect: beyond a shared use of technology to gather, analyze, and store data, and an interest in better understanding how, when, why and where these tools work for MERL and for development/humanitarian programming, there aren't many similarities between participants.

At the MERL Tech London and MERL Tech DC conferences in 2017, we made a concerted effort to get to the next level in the process of creating a field. In London in February, participants created a timeline of technology and MERL and identified key areas that the MERL Tech community could work on strengthening (such as data privacy and security frameworks and more technological tools for qualitative MERL efforts). At MERL Tech DC, we began trying to understand what a ‘maturity model’ for MERL Tech might look like.

What do we mean by a ‘maturity model’?

Broadly, maturity models seek to qualitatively assess people/culture, processes/structures, and objects/technology to craft a predictive path that an organization, field, or discipline can take in its development and improvement.

Initially, we considered constructing a “straw” maturity model for MERL Tech and presenting it at the conference. The idea was that our straw model’s potential flaws would spark debate and discussion among participants. In the end, however, we decided against this approach because (a) we were worried that our straw model would unduly influence people’s opinions, and (b) we were not very confident in our own ability to construct a good maturity model.

Instead, we opted to facilitate a creative space over three sessions to encourage discussion on what a maturity model might look like and what it might contain. Our vision for these sessions was to get participants to brainstorm in mixed groups containing different types of people; we didn't want small subsets of participants to create models independently without the input of others.

In the first session, "Developing a MERL Tech Maturity Model", we invited participants to consider what a maturity model might look like. Could we begin to imagine a graphic model that would enable self-evaluation and allow informed choices about how best to develop competencies, change and adjust processes, and align structures in organizations to optimize the use of technology for MERL, or indeed for other parts of the development field?

In the second session, “Where do you sit on the Maturity Model?” we asked participants to use the ideas that emerged from our brainstorm in the first session to consider their own organizations and work, and compare them against potential maturity models. We encouraged participants to assess themselves using green (young sapling) to yellow (somewhere in the middle) and red (mature MERL Tech ninja!) and to strike up a conversation with other people in the breaks on why they chose that color.

In our third session, “Something old, something new”, we consolidated and synthesized the various concepts participants had engaged with throughout the conference. Everyone was encouraged to reflect on their own learning, lessons for their work, and what new ideas or techniques they may have picked up on and might use in the future.

The Maturity Models

As can be expected, when over 300 people take markers and crayons to paper, many a creative model emerges. We asked participants to gallery-walk the models during the breaks over the following day and vote on their favorite models.

We won't go into the details of what all 24 models showed, but some common themes emerged from the ones that got the most votes: almost all maturity models include dimensions (elements, components) and stages, and a depiction of potential progression from early stages to later stages across each dimension. They all also showed who the key stakeholders or players were, and some included details on what might be expected of them at different stages of maturity.

Two of the models (MERLvana and the Data Appreciation Maturity Model – DAMM) depicted the notion that reaching maturity is never really possible and that the process is an almost infinite loop. As the presenters explained MERLvana: "it's impossible to reach the ideal state, but one must keep striving for it, in ever closer and tighter loops with fewer and fewer gains!"

MERLvana
Data Appreciation Maturity Model

"MERL-tropolis" had clearly defined categories (universal understanding, learning culture and awareness, common principles, and programmatic strategy) and the structures/buildings needed for those (staff, funding, tools, standard operating procedures, skills).

MERLTropolis

The most popular was "The Data Turnpike," which showed the route from the start of "Implementation with no data" to the finish line of "Technology, capacity and interest in data and adaptive management," with the pitfalls along the way (misuse, untimely data, low ethics, etc.) marked to warn of the dangers.

The Data Turnpike

As organizers of the session, we found the exercises both interesting and enlightening, and we hope they helped participants begin thinking about their own MERL Tech practice in a more structured way. Participant feedback on the session was polarized. Some people loved the exercise and felt that it allowed them to step back and think about how they and their organization were approaching MERL Tech and how they could move forward more systematically with building greater capacity and higher quality work. Some told us that they left with clear ideas on how they would work within their organizations to improve and enhance their MERL Tech practice, and that they had a better understanding of how to go about that. A few did not like that we had asked them to "sit around drawing pictures," and some others felt that the exercise was unclear and that we should have provided a model instead of asking people to create one. [Note: This is an ongoing challenge when bringing together so many types of participants from such diverse backgrounds and varied ways of thinking and approaching things!]

We’re curious if others have worked with “maturity models” and if they’ve been applied in this way or to the area of MERL Tech before. What do you think about the models we’ve shared? What is missing? How can we continue to think about this field and strengthen our theory and practice? What should we do at MERL Tech London in March 2018 and beyond to continue these conversations?

Five lessons learned from applying design thinking to data use

by Amanda Makulec, Data Visualization Lead, Excella Consulting and Barb Knittel, Research, Monitoring & Evaluation Advisor, John Snow Inc. Amanda and Barb led “How the Simpsons Make Data Use Happen” at MERL Tech DC.


Workshopping ways to make data use happen.

Human centered design isn’t a new concept. We’ve heard engineers, from aerospace to software, quietly snicker as they’ve seen the enthusiasm for design thinking explode within the social good space in recent years. “To start with the end user in mind? Of course! How else would you create a product someone wants to use?”

However, in our work designing complex health information systems, dashboards, and other tools and strategies to improve data use, the idea of starting with the end user does feel relatively new.

Thinking back to graduate school nearly ten years ago, dashboard design classes focused on the functional skills, like how to use a pivot table in Excel, not on the complex processes of gathering user requirements to design something that could not only delight the end user, but be co-designed with them.

As part of our designing-for-data-use and data visualization design workshops, we've collaborated with design firms to find new ways to crack the nut of developing products and processes that help decision makers use information. Using design thinking tools like ranking exercises, journey maps, and personas has helped users identify critical barriers to data use and find innovative ways to address them.

If you’re thinking about integrating design thinking approaches into data-centered projects, here are our five key considerations to take into account before you begin:

  1. Design thinking is a mindset, not a workshop agenda. When you're setting out to incorporate design thinking into your work, consider what that means throughout the project lifecycle. From continuous engagement and touchpoints with your data users to
  2. Engage the right people – you need a diverse range of perspectives and experiences to uncover problems and co-create solutions. This means thinking of the usual stakeholders using the data at hand, but also engaging those adjacent to the data. In health information systems, this could be the clinicians reporting on the registers, the mid-level managers at the district health office, and even the printer responsible for distributing paper registers.
  3. Plan for the long haul. Don't limit your planning and projections of time, resources, and end user engagement to initial workshops. Coming out of your initial design workshops, you'll likely have prototypes that require continued attention to functionally build and implement.
  4. Focus on identifying and understanding the problem you'll be solving. You'll never be able to solve every problem and overcome every data use barrier in one workshop (or even in one project). Work with your users to develop a specific focus and thoroughly understand the barriers and challenges from their perspectives so you can tackle the most pressing issues (or choose deliberately to work on longer term solutions to the largest impediments).
  5. The journey matters as much as the destination. One of the greatest ah-ha moments coming out of these workshops has been from participants who see opportunities to change how they facilitate meetings or manage teams by adopting some of the activities and facilitation approaches in their own work. Adoption of the prototypes shouldn't be your only metric of success.

The Designing for Data Use workshops were funded by (1) USAID and implemented by the MEASURE Evaluation project and (2) the Global Fund through the Data Use Innovations Fund. Matchboxology was the design partner for both sets of workshops, and John Snow Inc. was the technical partner for the Data Use Innovations sessions. Learn more about the process and learning from the MEASURE Evaluation workshops in Applying User Centered Design to Data Use Challenges: What we Learned and see our slides from our MERL Tech session “The Simpsons, Design, and Data Use” to learn more.

The Good, the Bad, and the Ugly of Using IATI Results Data

This is a cross-post from Taryn Davis of Development Gateway. The original was published here on September 19th, 2017. Taryn and Reid Porter led the “Making open data on results useful” session at MERL Tech DC.

It didn’t surprise me when I learned that — when Ministry of Finance officials conduct trainings on the Aid Management Platform for Village Chiefs, CSOs and citizens throughout the districts of Malawi — officials are almost immediately asked:

“What were the results of these projects? What were the outcomes?”

It didn’t just matter what development organizations said they would do — it also mattered what they actually did.

We’ve heard the same question echoed by a number of agriculture practitioners interviewed as part of the Initiative for Open Ag Funding.  When asked what information they need to make better decisions about where and how to implement their own projects, many replied:

“We want to know — if [others] were successful — what did they do? If they weren’t successful, what shouldn’t we do?”

This interest in understanding what went right (or wrong) came not from wanting to point fingers, but from a genuine desire to learn. In considering how to publish and share data, the importance of — and interest in! — learning cannot be overstated.

At MERL Tech DC earlier this month, we decided to explore the International Aid Transparency Initiative (IATI) format, currently being used by organizations and governments globally for publishing aid and results data. For this hands-on exercise, we printed out different types of projects from the D-Portal website, including any evaluation documents attached to the publication. We then asked participants to answer the following questions about each project:

  1. What were the successes of the project?
  2. What could be replicated?
  3. What are the pitfalls to be avoided?
  4. Where did it fail?

Taryn Davis leading participants through using IATI results data at MERLTech DC

We then discussed whether participants were (or were not) able to answer these questions with the data provided. Here is the Good, the Bad, and the Ugly of what participants shared:

The Good

  1. Many were impressed that these data — particularly the evaluation documents — were shared and made public at all, rather than hidden behind closed doors.
  2. For those analyzing evaluation documents, the narrative was helpful for answering our four questions, versus having just the indicators without any context.
  3. One attendee noted that this data would be helpful in planning project designs for business development purposes.

The Bad

  1. There were challenges with data quality — for example, some data were missing units, making it difficult to tell whether the number "50" was a percent, a dollar amount, or another unit (see the sketch after this list for one way to surface such gaps).
  2. Some found the organizations' own evaluation formats easier to understand than what was displayed on D-portal. Others were given evaluations in a more complex format, making it difficult to identify key takeaways. Overall, readability varied, and format matters. Sometimes fewer columns are more (readable). There is a fine line between not enough information (missing units) and a fire hose of information (gigantic documents).
  3. Since the attachments included more content in narrative format, they were more helpful in answering our four questions than just the indicators that were entered in the IATI standard.
  4. There were no visualizations to give a quick takeaway on project success. A visual aid would help users understand "successes" and "failures" more quickly, without having to spend as much time digging and comparing, leaving more time to look at specific cases and focus on the narrative.
  5. Some data was missing time periods, making it hard to know how relevant it would be for those interested in using the data.
  6. Data was often disorganized, and included spelling mistakes.
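
To make the data-quality points above more concrete, here is a minimal sketch of what the structured side of IATI results data looks like and how gaps such as missing units or periods can be surfaced programmatically. It is written in Python using only the standard library, and it assumes an IATI 2.x activity XML file (for example, one downloaded from D-Portal) saved locally under the placeholder name activity.xml; it illustrates the standard's result/indicator/period structure and is not a tool that was used in the session.

```python
# Minimal sketch (not a session tool): list IATI result indicators and flag
# missing values. Assumes an IATI 2.x activity file saved as "activity.xml"
# (placeholder name), e.g. exported from D-Portal or the IATI Registry.
import xml.etree.ElementTree as ET

def first_narrative(elem):
    """Return the first <narrative> text under elem, or a placeholder."""
    if elem is not None:
        n = elem.find("narrative")
        if n is not None and n.text:
            return n.text.strip()
    return "(no title)"

tree = ET.parse("activity.xml")
for activity in tree.getroot().iter("iati-activity"):
    print("Activity:", first_narrative(activity.find("title")))
    for indicator in activity.iter("indicator"):
        title = first_narrative(indicator.find("title"))
        for period in indicator.iter("period"):
            start = period.find("period-start")
            end = period.find("period-end")
            target = period.find("target")
            actual = period.find("actual")
            # IATI stores bare numbers in the value attribute; any unit lives
            # in the indicator title or measure, which is why a lone "50" is
            # so hard to interpret without narrative context.
            print("  {}: target={}, actual={}, period={} to {}".format(
                title,
                target.get("value") if target is not None else "MISSING",
                actual.get("value") if actual is not None else "MISSING",
                start.get("iso-date") if start is not None else "MISSING",
                end.get("iso-date") if end is not None else "MISSING",
            ))
```

Even a quick pass like this makes the "missing units" and "missing time periods" issues easy to spot, though, as participants noted, the story of how results were (or weren't) achieved still has to come from the attached narrative documents.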

The Ugly

  1. Reading data “felt like reading the SAT”: challenging to comprehend.
  2. The data and documents weren’t typically forthcoming about challenges and lessons learned.
  3. Participants weren’t able to discern any real, tangible learning that could be practically applied to other projects.

Fortunately, the “Bad” elements can be relatively easily addressed. We’ve spent time reviewing results data for organizations published in IATI, providing feedback to improve data quality, and to make the data cleaner and easier to understand.

However, the “ugly” elements  are really key for organizations that want to share their results data. To move beyond a “transparency gold star,” and achieve shared learning and better development, organizations need to ask themselves:

“Are we publishing the right information, and are we publishing it in a usable format?”

As we noted earlier, it’s not just the indicators that data users are interested in, but how projects achieved (or didn’t achieve) those targets. Users want to engage in the “L” in Monitoring, Evaluation, and Learning (MERL). For organizations, this might be as simple as reporting “Citizens weren’t interested in adding quinoa to their diet so they didn’t sell as much as expected,” or “The Village Chief was well respected and supported the project, which really helped citizens gain trust and attend our trainings.”

This learning is important both for organizations internally, enabling them to understand and learn from the data; it’s also important for the wider development community. In hindsight, what do you wish you had known about implementing an irrigation project in rural Tanzania before you started? That’s what we should be sharing.

In order to do this, we must update our data publishing formats (and mindsets) so that we can answer questions like: How did this project succeed? What can be replicated? What are the pitfalls to avoid? Where did it fail? Answering these kinds of questions — and enabling actual learning — should be a key goal for all projects and programs; and it should not feel like an SAT exam every time we do so.

Image Credit: Reid Porter, InterAction

MERL Tech Round Up | October 2, 2017

We’ll be experimenting with a monthly round-up of MERL Tech related content (bi-weekly if there’s enough to fill a post). Let us know if it’s useful! We aim to keep it manageable and varied, rather than a laundry list of every possible thing. The format, categories, and topics will evolve as we see how it goes and what the appetite is.

If you have anything you’d like to share or see featured, feel free to send it on over or post on Twitter using the #MERLTech hashtag.

On the MERL Tech Blog:

Big Data in Evaluation – Michael Bamberger discusses the future of development evaluation in the age of Big Data and ways to build bridges between evaluators and Big Data analysis. Rick Davies (Monitoring and Evaluation News) raises some great points in the comments (and Michael replies).

Experiences with Mobile case management for multi-dimensional accountability from Oxfam and Survey CTO.

Thoughts on MERL Tech Maturity Models & Next Generation Transparency & Accountability from Megan Colner (Open Society Foundations) and Alison Miranda (Transparency and Accountability Initiative).

The best learning at MERL Tech DC came from sharing failures from Ambika Samarthya-Howard (Praekelt.org).

We’ll be posting more MERL Tech DC summaries and wrap-up posts over the next month or two. We’re also gearing up for MERL Tech London coming up in March 2018. Stay tuned for more information on that.

Stuff we’re reading / watching:

New research (Making All Voices Count research team) on ICT-mediated citizen engagement. What makes it transformative? 

Opportunities and risks in emerging technologies, including white papers on Artificial Intelligence; Algorithmic Accountability; and Control of Personal Data (The Web Foundation).

Research on Privacy, Security, and Digital Inequality: How Technology Experiences and Resources Vary by Socioeconomic Status, Race, and Ethnicity in the United States from Mary Madden (Data & Society).

Tools, frameworks and guidance we’re bookmarking:

A framework for evaluating inclusive technology, technology for social impact and ICT4D programming (SIMLab) and an example of its application. The framework is open source, so you can use and adapt it!

A survey tool and guidance for assessing women's ICT access and use (FHI 360's mSTAR project). (Webinar coming up on Oct 10th.)

Series on data management (DAI) covering 1) planning and collecting data; 2) managing and storing data; and 3) getting value and use out of the data that’s collected through analysis and visualization.

Events and training:

Webinar on using ICT in monitoring and evaluation of education programming for refugee populations (USAID and INEE). Recording and presentations from the Sep 28th event here.

Webinar on assessing women's ICT access and use, Oct 10th (Nethope, USAID and mSTAR/FHI 360).

Let us know of upcoming events we should feature.

Jobs:

Send us vacancies for MERL Tech-related jobs, consultancies, and RFPs, and we'll help spread the word.

Failures are the way forward

By Ambika Samarthya-Howard, Head of Communications at Praekelt.org. This post also appears on the Praekelt.org blog.

Marc Mitchell, President of D-Tree International, gives his Lightning Talk: "When the Control Group Wins."

Attending conferences often reminds me of dating: you put your best foot forward and do yourself up, and hide the rest for a later time. I always found it refreshing when people openly put their cards on the table.

I think that's why I especially respected and enjoyed my experience at MERL Tech in DC last week. One of the first sessions I went to was a World Café-style breakout exploring how to be inclusive in M&E tech in the field. The organisations leading it, like Global Giving and Keystone, posed hard questions about community involvement in data collection at scale, and about how to get people less familiar with technology, or with less access to it, involved in the process. They didn't have any of the answers. They wanted to learn from us.

This was followed by a round of lightning talks after lunch. One organisation spoke very openly about how much money and time they were wasting on their data collection technologies. Another confessed their lack of structure and organisation, and talked about collaborating with organisations like DataKind to make sense of their data. Anahi Ayala Iacucci from Internews did a presentation on the pitfalls and realities of M&E: "we all skew the results in order to get the next round of funding." She fondly asked us to work together so we could "stop farting in the wind." D-Tree International spoke about a trial of transport vouchers for pregnant women in Zanzibar, in which the control group that did not receive any funding actually did better. They had to stop funding the vouchers.

The second day I attended an entire session where we looked at existing M&E reports available online to critique their deficiencies and identify where the field was lacking in knowledge dissemination. As a Communications person, looking at the write-ups of the data ironically gave me instant insight into ways forward and where gaps could be filled — which I believe is exactly what the speakers of the session intended. When you can so clearly see why and how things aren’t working, it actually inspires a different approach and way of working.

I was thoroughly impressed with the way people shared at MERL Tech. When you see an organisation able to talk so boldly about its learning curves or gaps, you respect their work, growth, and learnings.  And that is essentially the entire purpose of a conference.

Back to dating… and partnerships. Sooner or later, if the relationship works out, your partner is going to see you in the a.m. for who you really are. Why not cut to the chase, and go in with your natural look?  Then you can take the time to really do any of the hard work together, on the same footing.