MERL Tech News

Digital Data Collection and the Maturing of a MERL Technology

by Christopher Robert, CEO of Dobility (SurveyCTO). This post was originally published on March 15, 2018, on the SurveyCTO blog.

Digital data collection: stakeholders and complex relationships

Needs, markets, and innovation combine to produce technological change. This is as true in the international development sector as it is anywhere else. And within that sector, it’s as true in the broad category of MERL (monitoring and evaluation, research, and learning) technologies as it is in the narrower sub-category of digital data collection technologies. Here, I’ll consider the recent history of digital data collection technology as an example of MERL technology maturation – and as an example, more broadly, of the importance of market structure in shaping the evolution of a technology.

My basic observation is that, as digital data collection technology has matured, the same stakeholders have been involved – but the market structure has changed their relative power and influence over time. And it has been these very changes in power and influence that have changed the cost and nature of the technology itself.

First, when it comes to digital data collection in the development context, who are the stakeholders?

  • Donors. These are the primary actors who fund development work, evaluation of development policies and programs, and related research. There are mega-actors like USAID, Gates, and the UN agencies, but also many other charities, philanthropies, and public or nonprofit actors, from Catholic Charities to the U.S. Centers for Disease Control and Prevention.
  • Developers. These are the designers and software engineers involved in producing technology in the space. Some are students or university faculty, some are consultants, many work full-time for nonprofits or businesses in the space. (While some work on open-source initiatives in a voluntary capacity, that seems quite uncommon in practice. The vast majority of developers working on open-source projects in the space get paid for that work.)
  • Consultants and consulting agencies. These are the technologists and other specialists who help research and program teams use technology in the space. For example, they might help to set up servers and program digital survey instruments.
  • Researchers. These are the folks who do the more rigorous research or impact evaluations, generally applying social-science training in public health, economics, agriculture, or other related fields.
  • M&E professionals. These are the people responsible for program monitoring and evaluation. They are most often part of an implementing program team, but it’s also not uncommon to share more centralized (and specialized) M&E teams across programs, or to conduct outside evaluations that more fully separate some M&E activities from the implementing program team.
  • IT professionals. These are the people responsible for information technology within those organizations implementing international development programs and/or carrying out MERL activities.
  • Program beneficiaries. These are the end beneficiaries meant to be aided by international development policies and programs. The vast majority of MERL activities are ultimately concerned with learning about these beneficiaries.

Digital data collection stakeholders

These different stakeholders have different needs and preferences, and the market for digital data collection technologies has changed over time – privileging different stakeholders in different ways. Two distinct stages seem clear, and a third is coming into focus:

  1. The early days of donor-driven pilots and open source. These were the days of one-offs, building-your-own, and “pilotitis,” where donors and developers were effectively in charge and there was a costly additional layer of technical consultants between the donors/developers and the researchers and M&E professionals who had actual needs in the field. Costs were high, and some combination of donor and developer preferences reigned supreme.
  2. Intensifying competition in program-adopted professional products. Over time, professional products emerged that began to directly market to – and serve – researchers and M&E professionals. Costs fell with economies of scale, and the preferences of actual users in the field suddenly started to matter in a more direct, tangible, and meaningful way.
  3. Intensifying competition in IT-adopted professional products. Now that use of affordable, accessible, and effective data-collection technology has become ubiquitous, it’s natural for IT organizations to begin viewing it as a kind of core organizational infrastructure, to be adopted, supported, and managed by IT. This means that IT’s particular preferences and needs – like scale, standardization, integration, and compliance – start to become more central, and costs unfortunately rise.

While I still consider us to be in the glory days of the middle stage, where costs are low and end-users matter most, there are still plenty of projects and organizations living in that first stage of more costly pilots, open source projects, and one-offs. And I think that the writing’s very much on the wall when it comes to our progression toward the third stage, where IT comes to drive the space, innovation slows, and end-user needs are no longer dominant.

Full disclosure: I myself have long been a proponent of the middle phase, and I am proud that my social enterprise has been able to help graduate thousands of users from that costly first phase. So my enthusiasm for the middle phase began many years ago and in fact helped to launch Dobility.

THE EARLY DAYS OF DONOR-DRIVEN PILOTS AND OPEN SOURCE

Digital data collection stage 1 (the early days)

In the beginning, there were pioneering developers, patient donors, and program or research teams all willing to take risks and invest in a better way to collect data from the field. They took cutting-edge technologies and found ways to fit them into some of the world’s most difficult, least-cutting-edge settings.

In these early days, it mattered a lot what could excite donors enough to open their checkbooks – and what would keep them excited enough to keep the checks coming. So the vital need for large and ongoing capital injections gave donors a lot of influence over what got done.

Developers also had a lot of sway. Donors couldn’t do anything without them, and they also didn’t really know how to actively manage them. If a developer said “no, that would be too hard or expensive” or even “that wouldn’t work,” what could the donor really say or do? They could cut off funding, but that kind of leverage only worked for the big stuff, the major milestones and the primary objectives. For that stuff, donors were definitely in charge. But for the hundreds or thousands of day-to-day decisions that go into any technology solution, it was the developers effectively in charge.

Actual end-users in the field – the researchers and M&E professionals who were piloting or even trying to use these solutions – might have had some solid ideas about how to guide the technology development, but they had essentially no levers of control. In practice, the solutions being built by the developers were often so technically-complex to configure and use that there was an additional layer of consultants (technical specialists) sitting between the developers and the end-users. But even if there wasn’t, the developers’ inevitable “no, sorry, that’s not feasible,” “we can’t realistically fit that into this release,” or simple silence was typically the end of the story for users in the field. What could they do?

Unfortunately, without meaning any harm, most developers react by pushing back on whatever is contrary to their own preferences (I say this as a lifelong developer myself). Something might seem like a hassle, or architecturally unclean, and so a developer will push back, say it’s a bad idea, drag their heels, even play out the clock. In the past five years of Dobility, there have been hundreds of cases where a developer has said something to the effect of “no, that’s too hard” or “that’s a bad idea” to things that have turned out to (a) take as little as an hour to actually complete and (b) provide massive amounts of benefit to end-users. There’s absolutely no malice involved, it’s just the way most of them/us are.

This stage lasted a long time – too long, in my view! – and an entire industry of technical consultants and paid open-source contributors grew up around an approach to digital data collection that didn’t quite embrace economies of scale and never quite privileged the needs or preferences of actual users in the field. Costs were high and complaints about “pilotitis” grew louder.

INTENSIFYING COMPETITION IN PROGRAM-ADOPTED PROFESSIONAL PRODUCTS

Digital data collection stage 2 (the glory days)

But ultimately, the protagonists of the early days succeeded in establishing and honing the core technologies, and in the process they helped to reveal just how much was common across projects of different kinds, even across sectors. Some of those protagonists also had the foresight and courage to release their technologies with the kinds of permissive open-source licenses that would allow professionalization and experimentation in service and support models. A new breed of professional products directly serving research, program, and M&E teams was born – in no small part out of a single, tremendously-successful open-source project, Open Data Kit (ODK).

These products tended to be sold directly to end-users, and were increasingly intended for those end-users to be able to use themselves, without the help of technical staff or consultants. For traditionalists of the first stage, this was a kind of heresy: it was considered gauche at best and morally wrong at worst to charge money for technology, and it was seen as some combination of impossible and naive to think that end-users could effectively deploy and manage these technologies without technical assistance.

In fact, the new class of professional products was not designed to be used entirely without assistance. But these products were designed to require as little assistance as possible, and the assistance came with the product instead of being provided by a separate (and separately-compensated) internal or external team.

A particularly successful breed of products, SurveyCTO among them, came to use a “Software as a Service” (SaaS) model that streamlined both product delivery and support, ramping up economies of scale and driving down costs in the process. When such products offered technical support free-of-charge as part of the purchase or subscription price, there was a built-in incentive to improve the product: since tech support was so costly to deliver, improving the product such that it required less support became one of the strongest incentives driving product development. Those who adopted the SaaS model not only had to earn every dollar of revenue from end-users, but they had to keep earning that revenue month in, month out, year in, year out, in order to retain business and therefore the revenue needed to pay the bills. (Read about other SaaS benefits for M&E in this recent DevResults post.)

It would be difficult to overstate the importance of these incentives to improve the product and earn revenue from end-users. They are nothing short of transformative. Particularly once there is active competition among vendors, users are squarely in charge. They control the money, their decisions make or break vendors, and so their preferences and needs are finally at the center.

Now, in addition to the “it’s heresy to charge money or think that end-users can wield this kind of technology” complaints that used to be more common, there started to be a different kind of complaint: there are too many solutions! The sheer number of digital data collection solutions now available is overwhelming. Some go so far as to decry the duplication of effort, or to claim that the free market is inefficient or failing; they suggest that donors, consultants, or experts be put back in charge of resource allocation, to re-impose some semblance of sanity on the space.

But meanwhile, we’ve experienced a kind of golden age in terms of who can afford digital data collection technology, who can wield it effectively, and in what kinds of settings. There are a dizzying number of solutions – but most of them cater to a particular type of need, or have optimized their business model in a particular sort of way. Some, like us, rely nearly 100% on subscription revenues; others fund themselves primarily through service provision; still others are trying interesting ways to cross-subsidize from bigger, richer users so that they can offer free or low-cost options to smaller, poorer ones. We’ve overcome pilotitis, economies of scale are finally kicking in, and I think that the social benefits have been tremendous.

INTENSIFYING COMPETITION IN IT-ADOPTED PROFESSIONAL PRODUCTS

Digital data collection stage 3 (the coming days)

It was the success of the first stage that laid the foundation for the second stage, and so too it has been the success of the second stage that has laid the foundation for the third: precisely because digital data collection technology has become so affordable, accessible, and ubiquitous, organizations are increasingly thinking that it should be IT departments that procure and manage that technology.

Part of the motivation is the very proliferation of options that I mentioned above. While economics and the historical success of capitalism have taught us that a marketplace thriving with competition is most often a very good thing, it’s less clear that a wide variety of options is good within any single organization. At the very least, there are very good reasons to want to standardize some software and processes, so that different people and teams can more effortlessly share knowledge and collaborate, and so that there can be some economies of scale in training, support, and compliance.

Imagine if every team used its own product and file format for writing documents, for example. It would be a total disaster! The frictions across and between teams would be enormous. And as data becomes more and more core to the operations of more organizations – the way that digital documents became core many years ago – it makes sense to want to standardize and scale data systems, to streamline integrations, just for efficiency purposes.

Growing compliance needs only up the ante. The arrival of the EU’s General Data Protection Regulation (GDPR) this year, for example, raises the stakes for EU-based (or even EU-touching) organizations considerably, imposing stiff new data privacy requirements and steep penalties for violations. Coming into compliance with GDPR and other data-security regulations will be effectively impossible if IT can’t play a more active role in the procurement, configuration, and ongoing management of data systems; and it will be impractical for IT to play such a role for a vast array of constantly-shifting technologies. After all, IT will require some degree of stability and scale.

But if IT takes over digital data collection technology, what changes? Does the golden age come to an end?

Potentially. And there are certainly very good reasons to worry.

First, changing who controls the dollars – who’s in charge of procurement – threatens to entirely up-end the current regime, where end-users are directly in charge and their needs and preferences are catered to by a growing body of vendors eager to earn their business.

It starts with the procurement process itself. When IT is in charge, procurement processes are long, intensive, and tend to result in a “winner take all” contract. After all, it makes sense that IT departments would want to take their time and choose carefully; they tend to be choosing solutions for the organization as a whole (or at least for some large class of users within the organization), and they most often intend to choose a solution, invest heavily in it, and have it work for as long as possible.

This very natural and appropriate method that IT uses to procure is radically different from the method used by research, program, and M&E teams. And it creates a radically different dynamic for vendors.

Vendors first have to buy into the idea of investing heavily in these procurement processes – which some may simply choose not to do. Then they have to ask themselves, “what do these IT folks care most about?” In order to win these procurements, they need to understand the core concerns driving the purchasing decision. As in the old saying “nobody ever got fired for choosing IBM,” safety, stability, and reputation are likely to be very important. Compliance issues are likely to matter a lot too, including the vendor’s established ability to meet new and evolving standards. Integrations with corporate systems are likely to count for a lot too (e.g., integrating with internal data and identity-management systems).

Does it still matter how well the vendor meets the needs of end-users within the organization? Of course. But note the very important shift in the dynamic: vendors now have to get the IT folks to “yes” and so would be quite right to prioritize meeting their particular needs. Nobody will disagree that end-users ultimately matter, but meanwhile the focus will be on the decision-makers. The vendors that meet the decision-makers’ needs will live, the others will die. That’s simply one aspect of how a free market works.

Note also the subtle change in dynamic once a vendor wins a contract: the SaaS model where vendors had to re-earn every customer’s revenue month in, month out, is largely gone now. Even if the contract is formally structured as a subscription or has lots of exit options, the IT model for technology adoption is inherently stickier. There is a lot more lock-in in practice. Solutions are adopted, they’re invested in at large scale, and nobody wants to walk away from that investment. Innovation can easily slow, and nobody wants to repeat the pain of procurement and adoption in order to switch solutions.

And speaking of the pain of the procurement process: costs have been rising. After all, the procurement process itself is extremely costly to the vendor – especially when it loses, but even when it wins. So that’s got to get priced in somewhere. And then all of the compliance requirements, all of the integrations with corporate systems, all of that stuff’s really expensive too. What had been an inexpensive, flexible, off-the-shelf product can easily become far more expensive and far less flexible as it works itself through IT and compliance processes.

What had started out on a very positive note (“let’s standardize and scale, and comply with evolving data regulations”) has turned in a decidedly dystopian direction. It’s sounding pretty bad now, and you wouldn’t be wrong to think “wait, is this why a bunch of the products I use for work are so much more frustrating than the products I use as a consumer?” or “if Microsoft had to re-earn every user’s revenue for Excel, every month, how much better would it be?”

While I don’t think there’s anything wrong with the instinct for IT to take increasing control over digital data collection technologies, I do think that there’s plenty of reason to worry. There’s considerable risk that we lose the deep user orientation that has just been picking up momentum in the space.

WHERE WE’RE HEADED: STRIKING A BALANCE

Digital data collection stage 4 (finding a balance?)

If we don’t want to lose the benefits of a deep user orientation in this particular technology space, we will need to work pretty hard – and be awfully clever – to preserve them. People will say “oh, but IT just needs to consult research, program, and M&E teams, include them in the process,” but that’s hogwash. Or rather, it’s woefully inadequate. The natural power of those controlling resources to bend the world to their preferences and needs is just too great for mere consultation or inclusion to overcome.

And the thing is: what IT wants and needs is good. So the solution isn’t just “let’s not let them anywhere near this, let’s keep the end-users in charge.” No, that approach collapses under its own weight eventually, and certainly it can’t meet rising compliance requirements. It has its own weaknesses and inefficiencies.

What we need is an approach – a market structure – that allows the needs of IT and the needs of end-users both to matter to appropriate degrees.

With SurveyCTO, we’re currently in an interesting place: we’re becoming split between serving end-users and serving IT organizations. And I suppose as long as we’re split, with large parts of our revenue coming from each type of decision-maker, we remain incentivized to keep meeting everybody’s needs. But I see trouble on the horizon: the IT organizations can pay more, and more organizations are shifting in that direction… so once a large-enough proportion of our revenue starts coming from big, winner-take-all IT contracts, I fear that our incentives will be forever changed. In the language of economics, I think that we’re currently living in an unstable equilibrium. And I really want the next equilibrium to serve end-users as well as the last one!

Present or lead a session at MERL Tech DC!

Please sign up to present, register to attend, or reserve a demo table for MERL Tech DC 2018 on September 6-7, 2018 at FHI 360 in Washington, DC.

We will engage 300 practitioners from across the development ecosystem for a two-day conference seeking to turn the theories of MERL technology into effective practice that delivers real insight and learning in our sector.

MERL Tech DC 2018, September 6-7, 2018

Digital data and new media and information technologies are changing monitoring, evaluation, research and learning (MERL). The past five years have seen technology-enabled MERL growing by leaps and bounds. We’re also seeing greater awareness and concern for digital data privacy and security coming into our work.

The field is in constant flux with emerging methods, tools and approaches, such as:

  • Adaptive management and developmental evaluation
  • Faster, higher quality data collection
  • Remote data gathering through sensors and self-reporting by mobile
  • Big data, data science, and social media analytics
  • Story-triggered methodologies

Alongside these new initiatives, we are seeing increasing documentation and assessment of technology-enabled MERL initiatives. Good practice guidelines are emerging and agency-level efforts are making new initiatives easier to start, build on and improve.

The swarm of ethical questions related to these new methods and approaches has spurred greater attention to areas such as responsible data practice and the development of policies, guidelines and minimum ethical standards for digital data.

Championing the above is a growing and diversifying community of MERL practitioners, assembling from a variety of fields; hailing from a range of starting points; espousing different core frameworks and methodological approaches; and representing innovative field implementers, independent evaluators, and those at HQ that drive and promote institutional policy and practice.

Please sign up to present, register to attend, or reserve a demo table for MERL Tech DC to experience 2 days of in-depth sharing and exploration of what’s been happening across this cross-disciplinary field, what we’ve been learning, complex barriers that still need resolving, and debate around the possibilities and the challenges that our field needs to address as we move ahead.

Submit Your Session Ideas Now

Like previous conferences, MERL Tech DC will be a highly participatory, community-driven event and we’re actively seeking practitioners in monitoring, evaluation, research, learning, data science and technology to facilitate every session.

Please submit your session ideas now. We are looking for a range of topics, including:

  • Experiences and learning at the intersection of MERL and tech
  • Ethics, inclusion, safeguarding, and data privacy
  • Data (big data, data science, data analysis)
  • Evaluation of ICT-enabled efforts
  • The future of MERL
  • Tech-enabled MERL Failures

Visit the session submission page for more detail on each of these areas.

Submission Deadline: Monday, April 30, 2018 (at midnight EST)

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. You will hear back from us in early June and, if selected, you will be asked to submit the final session title, summary and outline by June 30.

Register Now

Please sign up to present or register to attend MERL Tech DC 2018 to examine these trends with an exciting mix of educational keynotes, lightning talks, and group breakouts, including an evening reception and Fail Fest to foster needed networking across sectors and an exploration of how we can learn from our mistakes.

We are charging a modest fee to better allocate seats and we expect to sell out quickly again this year, so buy your tickets or demo tables now. Event proceeds will be used to cover event costs and to offer travel stipends for select participants implementing MERL Tech activities in developing countries.

You can also submit session ideas for MERL Tech Jozi, coming up on August 1-2, 2018! Those are due on March 31st, 2018!

What’s the Deal with Data — Bridging the Data Divide in Development

Written by Ambika Samarthya-Howard, Head of Communications, Praekelt.org. This post was originally published on March 26, 2018, on Medium.

Working on communications at Praekelt.org, I have had the opportunity to see first-hand the power of sharing stories in driving impact and changing attitudes. Over the past month I’ve attended several unrelated events all touching on data, evaluation, and digital development which have reaffirmed the importance of finding common ground to share and communicate data we value.

Storytelling and Data

I recently presented a poster on “Storytelling for Organisational Change” at the University of London’s Behavior Change Conference. Our current evaluations at Praekelt draw on work by the center, which is a game-changer in the field. But I didn’t submit an abstract on our agile, experimental investigations: I was sharing information about how I was using films and our storytelling to create change within the organisation.

After my abstract was accepted, I realized I had to present my findings as a poster. Many practitioners (like myself) really have no idea what a poster entails. Thankfully, I got advice from academics and support from design colleagues to translate my videos, photos, and storytelling deck into a visual form I could pin up. When the printers in New York told me “this is a really great poster”, I started picking up the hint that it was atypical.

Once I arrived at the poster hall at UCL, I could see why. Nearly all, if not all, of the posters in the room had charts and numbers and graphs — lots and lots of data points. My poster, on the other hand, had almost no “data”. It was colorful, showed a few engaging images and the story of our human-centered design process, and was accompanied by videos playing on my laptop alongside the booth. It was definitely a departure from the “research” around the room.

This divide between research and practice showed up many times throughout the conference. For starters, this year attendees were asked to choose a sticker label based on whether they were in research/academics or programme/practitioners. Many of the sessions talked about how to bridge the divide, make research more accessible to practitioners, and take learnings from programme creators back to academia.

Thankfully for me, the tight-knit group of practitioners felt solace and connection to my chart-less poster, and perhaps the academics felt a bit of relief at the visuals as well: we went home with one of the best poster awards at the conference.

Data Parties and Cliques

The London conference was only the beginning of when I became aware of the conversations around the data divide in digital development. “Why are we even using the word data? Does anyone else value it? Does anyone else know what it means?” Anthony Waddell, Chief Innovation Officer of IBI, provocatively put out there at a breakout session at USAID’s Digital Development Forum in Washington. The conference gathered organisations around the United States working in digital development, asking them to consider key points around the evolution of digital development in the next decade — access, inclusivity, AI, and, of course, the role of data.

This specific break-out session focused on sharing best practices for using and understanding data within organisations, especially amongst programme teams and country-office colleagues. It also expanded to sharing with beneficiaries, governments, and donors. We questioned whose data mattered, why we were valuing data, and how to get other people to care.

Samhir Vasdev, the advisor for Digital Development at IREX, spoke on the panel about MIT’s initiatives and their Data Culture Lab, which shared exercises to help people understand data. He talked about throwing data parties where teams could learn and understand that what they were creating was data, too. The gatherings allow people to explore the data they produce but perhaps did not get a chance to interrogate. The real purpose is to understand what new knowledge their own data tells them, or what further questions the data challenges them to explore. “Data parties are a great way to encourage teams to explore their data and transform it into insights or questions that they can use directly in their programs.”

Understanding data can be empowering. But being shown the road forward doesn’t necessarily mean that’s the road participants can or will take. As Vasdev noted, “Exercises like this come with their own risks. In some cases, when working with data together with beneficiaries who themselves produced that information, they might begin demanding results or action from their data. You have to be prepared to manage these expectations or connect them with resources to enable meaningful action.” One can imagine the frustration if participants saw their data leading to the need for a new clinic, yet a clinic never got built.

Big Data, Bias, and M&E

Opening the MERL (Monitoring, Evaluation, Research, and Learning) Tech Conference in London, Andre Clark, Effectiveness and Learning Adviser at Bond, spoke about the increasing importance of data in development in his keynote. Many of the voices in the room resonated with the trends and concerns I’ve observed over the last month. Is data the answer? How is it the answer?

Andre Clark’s keynote at MERL Tech

“The tool is not going to solve your problem,” one speaker said during the infamous off-the-record Fail Fest, where attendees present their failures to learn from each other’s mistakes. The speaker shared the example of a new reporting initiative that hadn’t panned out as expected. She noted that “we initially thought tech would help us work faster and more efficiently, but now we are clearly seeing the importance of quality data over timely data”. Although digital data may be better and faster, that does not mean it’s solving the original problem.

In using data to evaluate problems, we have to make sure we are under no illusion that we are actually dealing with the core issues at hand. For example, during my talk on Social Network Analysis, we discussed both the opportunities and challenges of using this quantitative process in M&E. The conference consistently emphasized the importance of slower, deeper processes as opposed to faster, shorter ones driven by technology.

This holds true for how data is used in M&E practices. For example, I attended a heated debate on the role of “big data” in M&E and whether the convergence was inevitable. As one speaker mentioned, “if you close your eyes and forget the issue at hand is big data, you could feel like it was about any other tool used in M&E”. The problems around data collection, bias, inaccessibility, language, and tools were there in M&E regardless of big data or small data.

Other core issues raised were power dynamics, inclusivity, and the fact that technology is made by people and therefore it is not neutral. As Anahi Ayala Iacucci, Senior Director of Humanitarian Programs at Internews, said explicitly “we are biased, and so we are building biased tools.” In her presentation, she talked about how technology mediates and alters human relationships. If we take the slower and deeper approach we will have an ability to really explore biases and understand the value and complications of data.

“Evaluators don’t understand data, and then managers and the public don’t understand evaluation talk,” Maliha Khan of Daira said, bringing it back to my original concerns about translation and bridging gaps in the space. Many of the sessions sought to address this problem, a nice example being Cooper Smith’s Kuunika project in Malawi, which used local visual illustrations to accompany their survey questions on tablets. Another speaker pushed for us to move into the measurement space, as opposed to monitoring, which has the potential to put us all on the same page.

As someone who feels responsible not only for communicating our work externally, but also for sharing knowledge amongst our programmes internally, where did all this leave me? I think I’ll take my direction from Anna Maria Petruccelli, Data Analyst at Comic Relief, who suggested that rather than committing to being data-driven, organisations could commit to being data-informed.

To go even further with this advice, at Praekelt we make the distinction between data-driven and evidence-driven, where the latter acknowledges the need to attend to research design and emphasize quality, not just quantity. Evidence encompasses the use of data but includes the idea that not all data are equal, that when interpreting data we attend to both the source of data and research design.

I feel confident that turning our data into knowledge – while staying aware of how bias informs the way we do so – can be the first step forward on a unified journey. I also think this new path forward will leverage the power of storytelling to make data accessible and organisations better informed. It’s a road less traveled, yes, but hopefully that will make all the difference.

If you are interested in joining this conversation, we encourage you to submit to the first ever MERL Tech Jozi. Abstracts due March 31st.

MERL Tech London: What’s Your Organisation’s Take on Data Literacy, Privacy and Ethics?

This post first appeared here on March 26th, 2018.

ICTs and data are increasingly being used for monitoring, evaluation, research and learning (MERL). MERL Tech London was an open space for practitioners, techies, researchers and decision makers to discuss their good and not so good experiences. This blogpost is a reflection of the debates that took place during the conference.

Is data literacy still a thing?

Data literacy is “the ability to consume for knowledge, produce coherently and think critically about data.” The perception of data literacy varies depending on the stakeholder’s needs. For an M&E team, for example, being data literate means possessing statistical skills, including the ability to collect and combine large data sets. A programme team requires a different kind of data literacy: the competence to carefully interpret and communicate meaningful stories, using processed data (or information) to reach target audiences.

Data literacy is – and will remain – a priority in development. The current debate is no longer about whether an organisation should use data. It’s rather about how well the organisation can use data to achieve its objectives. Yet organisations’ efforts are often concentrated in just one part of the information value chain: data collection. Data collection in itself is not the end goal. Data has to be processed into information and knowledge that can inform decisions and actions.

This doesn’t necessarily imply that decision making should be based purely on data, nor that data can replace the role of decision makers. Quite the opposite: data-informed decision making strikes a balance between expertise and information. It also takes data limitations into account. Nevertheless, one can’t become a data-informed organisation without being data literate.

What’s your organisation’s data strategy?

The journey of becoming a data-informed organisation can take some time. Poor data quality, duplication of efforts and underinvestment are classic obstacles requiring a systematic solution (see Tweet below). Commitment from the senior management team should be secured for that. A data team has to be established. Staff members need access to relevant data platforms and training. More importantly, the organisation has to embrace a cultural change towards valuing evidence and acting on both positive and negative findings.

Marten Schoonman (@mato74): “Responsible data handling workgroup: mindmapping the relevant subjects @MERLTech”

Organisations seek to balance (data) demands and priorities. Some invest hundreds of thousands of dollars in setting up a data team to articulate the organisation’s needs and priorities, as well as to mobilise technical support. A 3-5 year strategic plan is created to coordinate efforts between country offices.

Others take a more modest approach: they recruit a few data scientists to support MERL activities by analysing particularly large amounts of project data. The data scientist role then evolves as the project grows. In both cases, leadership is the key driver for shifting the culture towards becoming a data-informed organisation.

Should an organisation use certain data because it can?

Organisations working with data usually face challenges around privacy, legality, ethics and grey areas, such as bias and power dynamics between data collectors and their target groups. The use of biometric data in humanitarian settings is an example where all these tensions collide. Biometric data – e.g. fingerprints, iris scans, facial recognition – is powerful, yet invasive. While proven beneficial, biometric data is vulnerable to breach and misuse, e.g. profiling and tracking. The practice raises critical questions: does the target group, e.g. refugees, have the option to refuse handing over their sensitive personal data? If so, will they still be entitled to receive aid assistance? To what extent is the target group aware of how their sensitive personal data will be used and shared, including in unforeseen circumstances?

People’s privacy, safety and security are the main priorities in any data work. Organisations should uphold the highest standards and set an example. In countries where regulatory frameworks are lagging behind data and technology, organisations shouldn’t abuse their power. When the risk of using certain data outweighs the benefits, or when in doubt, the organisation should take a pause and ask itself some necessary questions from the perspective of its target groups. Oxfam, which dismissed – following two years of internal discussions and intensive research – the idea of using biometric data in any of its projects, should be seen as a positive example.

To conclude, the benefits of data can only be realised when an organisation enjoys visionary leadership and sufficient capacity, and upholds its principles. No doubt, this is easier said than done; it requires time and patience. All these efforts, however, are necessary for a high-achieving organisation.

Save the date for MERL Tech Jozi coming up on Aug 1-2! Session ideas are due this Friday (March 31st).

Please Submit Session Ideas for MERL Tech Jozi

We’re thrilled to announce that we’re organizing MERL Tech Jozi for August of 2018!

Please submit your session ideas or reserve your demo table now, to explore what’s happening with innovation, digital data, and new technologies across the monitoring, evaluation, research, and learning (MERL) fields.

MERL Tech Jozi will be in Johannesburg, South Africa, August 1-2, 2018!

At MERL Tech Jozi, we’ll build on earlier MERL Tech conferences in DC and London, engaging 100 practitioners from across the development and technology ecosystems for a two-day conference seeking to turn theories of MERL technology into effective practices that deliver real insight and learning in our sector.

MERL Tech is a lively, interactive, community-driven conference.  We’re actively seeking a diverse set of practitioners in monitoring, evaluation, research, learning, program implementation, management, data science, and technology to lead every session.

Submit your session ideas now.

We’re looking for sessions that focus on:

  • Discussions around good practice and evidence-based review
  • Innovative MERL approaches that incorporate technology
  • Future-focused thought provoking ideas and examples
  • Conversations about ethics, inclusion, and responsible policy and practice in MERL Tech
  • Exploration of complex MERL Tech challenges and emerging good practice
  • Workshop sessions with practical, hands-on exercises and approaches
  • Lightning Talks to showcase new ideas or to share focused results and learning
Submission Deadline: Saturday, March 31, 2018.

Session submissions are reviewed and selected by our steering committee. Presenters and session leads will have priority access to MERL Tech tickets. We will notify you in late April whether your session idea was selected, and if selected, you will be asked to submit the final session title, summary and detailed session outline by June 1st, 2018.

If you’d prefer to showcase your technology tool or platform to MERL Tech participants, you can reserve your demo table here.

MERL Tech is dedicated to creating a safe, inclusive, welcoming and harassment-free experience for everyone through our Code of Conduct.

MERL Tech Jozi is organized by Kurante and supported by our sponsors. Contact Linda Raftree if you’d like to be a sponsor of MERL Tech Jozi too.

MERL Tech London 2018 Agenda is out!

We’ve been working hard over the past several weeks to finish up the agenda for MERL Tech London 2018, and it’s now ready!

We’ve got workshops, panels, discussions, case studies, lightning talks, demos, community building, socializing, and an evening reception with a Fail Fest!

Topics range from mobile data collection, to organizational capacity, to learning and good practice for information systems, to data science approaches, to qualitative methods using mobile ethnography and video, to biometrics and blockchain, to data ethics and privacy and more.

You can search the agenda to find the topics, themes and tools that are most interesting, identify sessions that are most relevant to your organization’s size and approach, pick the session methodologies that you prefer (some of us like participatory and some of us like listening), and learn more about the different speakers and facilitators and their work.

Tickets are going fast, so be sure to snap yours up before it’s too late! (Register here!)

View the MERL Tech London schedule & directory.

DataDay TV: MERL Tech Edition

What data superpower would you ask for? How would you describe data to your grandparents? What’s the worst use of data you’ve come across? 

These are a few of the questions that TechChange’s DataDay TV Show tackles in its latest episode.

The DataDay Team (Nick Martin, Samhir Vasdev, and Priyanka Pathak) traveled to MERL Tech DC last September to ask attendees some tough data-related questions. They came away with insightful, unusual, and occasionally funny answers.

If you’re a fan of discussing data, technology and MERL, join us at MERL Tech London on March 19th and 20th. 

Tickets are going fast, so be sure to register soon if you’d like to attend!

If you want to take your learning to the next level with a full-blown course, TechChange has a great 2018 schedule, including topics like blockchain, AI, digital health, data visualization, e-learning, and more. Check out their course catalog here.

What about you, what data superpower would you ask for?

Self-service data collection with the most vulnerable

This is a summary of a Lightning Talk presented by Salla Mankinen, Good Return, at MERL Tech London in 2017. 

When collecting data from the most vulnerable target groups, organizations often rely on methods such as guesstimating, enumerator-led interviews, SMS, or IVR. The organization Good Return created a smartphone and tablet app that allowed vulnerable groups to interact directly with the data collection tool, without training or previous exposure to any technology.

At MERL Tech London in February 2017, Salla Mankinen shared Good Return’s experiences with using tablets for self-service check in at village training centers in Cambodia.

“Our challenge was whether we could have app-based, self-service data collection for the most vulnerable and in the most remote locations,” she said. “And could there be a journey from technology illiteracy to technology confidence” in the process?

The team created a voice- and image-based application that worked even for those with little technology knowledge. It collected data from village participants, such as “Why did you miss the last training session?” or “Do you have any money left this week?”

By the end of the exercise, 72% of participants felt confident with the app and 83% said they felt a lot more confident with technology in general.

Watch Salla’s presentation here or take a look at her slides here!

Register now for MERL Tech London, March 19-20, 2018!

Moving from “evaluation” to “impact management”

by Richa Verma, Resident Entrepreneur at Social Cops. This post originally appeared on the Social Cops blog on August 28, 2017.

When I say that Impact Evaluation is history, I mean it. Some people will question this. After all, Impact Evaluation just became mainstream in the last decade, driven by great improvements in experimental design methods like randomized control trials (RCTs). So how can I say that it’s already a thing of the past? It’s not Impact Evaluation’s fault. The world changed.

Methodologies like RCTs came from medical science, where you can give patients a pill and assess its impact with randomized trials. However, development is not a space where one pill will work for everyone. In development, the patients change faster, the illness evolves faster, and the pill needs to keep pace with both the patients and the illness. That’s where Impact Management comes in.

What Is Impact Management?

New Philanthropy Capital’s 2017 Global Innovation in Measurement and Evaluation Report counts Impact Management as one of the top 7 innovations of 2017.

So what is Impact Management? Let me first explain what it is not. It’s not a one-time evaluation. It’s not collecting data for answering a limited set of questions. It’s not a separate activity from your program. It’s not just monitoring and evaluation.

It’s a way of making data-driven decisions at every step of your program. It’s about keeping a pulse on your program every day and finding new questions to answer, rather than just focusing on specific questions predetermined by your monitoring and evaluation team or funders.

“The question that’s being asked more and more is, ‘How does evaluation feed into better management decisions?’ That’s a shift from measurement of impact, to measurement for impact.”
– Megan Campbell (Feedback Labs)

How Does Impact Management Work?

Impact Management uses the basic components of monitoring and evaluation, but with an outlook shift. It involves frequent data collection, regular reporting and monitoring of your data, and iteratively updating your program indicators and metrics as data comes in and the program changes.

Impact Management differs from Impact Assessment in that it promotes course correction on a daily basis. Organizations collect data on their programs as they conduct activities, analyze that information on a regular basis, and make changes to the program.

With an outlook that encourages frequent changes – as if you were trading in stocks – organizations can A/B test their programs with real-time data and make decisions immediately, rather than waiting to compare and contrast two different surveys. They can test out new things and make changes as data arrives on their servers, even at the end of the same day, rather than waiting for the official year-end review. Impact Management becomes a way of deciding how to execute a program daily, rather than only seeing strategic changes through.
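To make the stock-trading analogy concrete, here is a minimal sketch of how a team might compare two program variants on incoming data using a standard two-proportion z-test. The variant names and numbers are hypothetical, and a real analysis would also need to worry about sample sizes and repeated peeking at the data:

```typescript
// Two-proportion z-test: is variant B's completion rate different from A's?
// |z| > 1.96 suggests a difference significant at roughly the 5% level,
// under the usual large-sample assumptions.
function twoProportionZ(
  successesA: number, totalA: number,
  successesB: number, totalB: number
): number {
  const pA = successesA / totalA;
  const pB = successesB / totalB;
  const pooled = (successesA + successesB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se;
}

// Hypothetical daily check: 412 of 520 participants completed the module
// under variant A, versus 468 of 540 under variant B.
const z = twoProportionZ(412, 520, 468, 540);
console.log(z > 1.96 ? "Variant B looks better" : "No clear winner yet");
```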

“[Data collection] should be ongoing — it’s a value driver not a compliance requirement.”
– Tom Adams (Acumen)

In many ways, this is how decisions are made on Wall Street or Dalal Street in India. Analysts don’t wait until the end of the year to make investments by reviewing annual reports. They watch daily as the market fluctuates and strike as soon as they see new potential.

Impact Management works exactly the same way. You should strive to increase your impact as soon as an opportunity arises, rather than waiting for a year-end external evaluation or approval.

How Can You Implement Impact Management?

To make Impact Management possible, switch from static data files to a flexible data system.

Today, most of your program officers and even your beneficiaries are armed with mini-computers in their pockets (read: smartphones). Leverage these to create a network of data ingestion devices, continuously tracking and measuring the impact of your programs. Use mobile data collection apps to add forms, deploy them to the field, and reach out not just to your field force but also your beneficiaries — not just at the end of the month or quarter, but as frequently as possible.

Then don’t let this data sit in Excel files. Use today’s technologies to create your own data management system, one that will link your beneficiaries, connect your programs, and answer queries. Have someone with an analytical bent look at this data regularly, or draw on machine power to analyze this data and generate meaningful insights or reports in real time.
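As one sketch of what that “machine power” might look like, here is a minimal TypeScript example that aggregates incoming responses into daily completion rates and flags days that dip well below the average. The record shape and the 10-point threshold are illustrative assumptions; a real system would read from your data platform rather than an in-memory array:

```typescript
interface SurveyResponse {
  date: string;       // e.g. "2017-08-28"
  completed: boolean; // did the respondent finish the module?
}

// Compute each day's completion rate from a stream of responses.
function dailyCompletionRates(responses: SurveyResponse[]): Map<string, number> {
  const byDay = new Map<string, { done: number; total: number }>();
  for (const r of responses) {
    const day = byDay.get(r.date) ?? { done: 0, total: 0 };
    day.total += 1;
    if (r.completed) day.done += 1;
    byDay.set(r.date, day);
  }
  const rates = new Map<string, number>();
  byDay.forEach(({ done, total }, date) => rates.set(date, done / total));
  return rates;
}

// Flag days whose rate falls more than 10 points below the overall mean,
// so someone can investigate the same day rather than at year-end.
function flagLowDays(rates: Map<string, number>): string[] {
  const values = Array.from(rates.values());
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const flagged: string[] = [];
  rates.forEach((rate, date) => {
    if (rate < mean - 0.1) flagged.push(date);
  });
  return flagged;
}
```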

“We’re moving away from a static data world, where you work on datasets, and you write reports, to a dynamic data world where data is always being generated and created and it helps you do your job better.”
– Andrew Means (beyond.uptake)

Lastly, it’s crucial to tie this flexible data system back to your decisions. Make real-time data — rather than guesses or last year’s data — the basis of every program decision and the foundation of even weekly catch-ups. And don’t hesitate to test out new things. Data will tell you whether something worked or not.

Many of our partners are using our platform to make Impact Management possible and track their programs in real time. The platform lets them create and tweak data collection forms, and monitor incoming data in real time on their computer, in regular reports, or even on map-based dashboards. They are asking new questions about how their programs are doing and answering them with data.

If we really want to create the best development programs, we’ll have to think differently and use evidence not just once every month or year, but as we make crucial decisions every day. All backed by the tenets of Impact Management: test, fail, improve, repeat.

Join us at MERL Tech London on March 19-20 – where we’ll be debating this topic!

MERL Tech 101: Google Forms

by Daniel Ramirez-Raftree, MERL Tech volunteer

In his MERL Tech DC session, Samhir Vasdev from IREX led a hands-on workshop on Google Forms and laid out some of the software’s capabilities and limitations. Much of the session focused on Google Forms’ central concepts and the practicality of building a form.

At its most fundamental level, a form is made up of several sections, and each section is designed to contain a question or prompt. The centerpiece of a section is the question cell, which is, as one would imagine, the cell dedicated to the question. Next to the question cell there is a drop down menu that allows one to select the format of the question, which ranges from multiple-choice to short answer.


At the bottom right-hand corner of the section you will find three dots arranged vertically. When you click this toggle, a drop-down menu will appear. The options in this menu vary depending on the format of the question. One common option is to include a few lines of description, which is useful in case the question needs further elaboration or instruction. Another is the data validation option, which restricts the kinds of text that a respondent can input. This is useful when, for example, the question is in a short answer format but the form administrators need the responses to be limited to numerals for the sake of analysis.
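Everything demonstrated here by pointing and clicking can also be scripted. As a hedged illustration, here is a minimal Google Apps Script sketch (Apps Script is JavaScript-based, and the snippet below is also valid TypeScript) that applies the same numeric validation in code; the form title and question text are made up:

```typescript
// Google Apps Script sketch: a short-answer question that accepts only numbers.
function addNumericQuestion(): void {
  const form = FormApp.create('Household survey'); // hypothetical form title
  const item = form
    .addTextItem()
    .setTitle('How many people live in your household?'); // hypothetical question
  const numbersOnly = FormApp.createTextValidation()
    .requireNumber() // reject any response that is not numeric
    .build();
  item.setValidation(numbersOnly);
}
```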

The session also covered functions available in the “Responses” tab, which sits at the top of the page. Here one can find a toggle labeled “accepting responses” that can be turned on or off depending on the needs of the form.

Additionally, in the top right corner of this tab there are three dots arranged vertically; this is the options menu for the tab. Here you will find options such as enabling email notifications for each new response, which can be used in case you want to be alerted when someone responds to the form. Also in this drop-down, you can click “select response destination” to link the Google Form with Google Sheets, which simplifies later analysis. The green Sheets icon next to the options drop-down will take you to the sheet that contains the collected data.
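That Sheets link can also be set programmatically. A small Apps Script sketch, with placeholder IDs standing in for your real form and spreadsheet:

```typescript
// Google Apps Script sketch: route a form's responses into an existing
// spreadsheet – the scripted equivalent of "select response destination".
function linkFormToSheet(): void {
  const form = FormApp.openById('FORM_ID_HERE'); // placeholder ID
  form.setDestination(
    FormApp.DestinationType.SPREADSHEET,
    'SPREADSHEET_ID_HERE' // placeholder ID
  );
}
```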

Other capabilities in Google Forms include the option to change the color scheme, which you can access by clicking the palette icon at the top of the screen. Also, by clicking the settings button at the top of the screen, you can limit respondents to a single response each, to restrict people’s ability to skew the data by submitting multiple responses, or you can enable response editing after submission, to allow respondents to go back and correct their responses after submitting.

Branching is another important tool in Google Forms. It can be used in the case that you want a particular response to a question (say, a multiple choice question) to lead the respondent to another related question only if they respond in a certain way.

For example, suppose one section asks “did you like the workshop?” with the answer options “yes” and “no,” and you want to know what respondents didn’t like only if they answer “no.” You can design the form to take the respondent to a section with the question “what didn’t you like about the workshop?” only when they answer “no,” and then bring them back to the main workflow after they’ve answered this additional question.

To do this, create at least two new sections (by clicking “add section” in the small menu to the right of the sections), one for each path that a person’s response will lead them down. Then, in the options menu on the lower right-hand side, select “go to section based on answer” and, using the menu that appears, set the path that you desire.
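For those who prefer scripting, the same branching logic can be set up with Apps Script’s FormApp service. Here is a minimal sketch of the workshop example above; the titles are illustrative, and in a form with more sections you would navigate onward to the next section rather than straight to submission:

```typescript
// Google Apps Script sketch: branch to a follow-up section only when
// the respondent answers "no".
function buildBranchingForm(): void {
  const form = FormApp.create('Workshop feedback'); // hypothetical form title

  const liked = form
    .addMultipleChoiceItem()
    .setTitle('Did you like the workshop?');

  // Follow-up section, shown only to respondents routed to it.
  const noSection = form.addPageBreakItem().setTitle('Tell us more');
  form.addTextItem().setTitle("What didn't you like about the workshop?");

  // "yes" submits the form immediately; "no" goes to the follow-up section.
  liked.setChoices([
    liked.createChoice('yes', FormApp.PageNavigationType.SUBMIT),
    liked.createChoice('no', noSection),
  ]);
}
```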

These are just some of the tools that Google Forms offers, but with just these it is possible to build an effective form to collect the data you need. Samhir ended with a word of caution: Google has been known to shut down popular apps, so you should be wary about building an organizational strategy around Google Forms.