MERL Tech News

Evaluating ICT4D projects against the Digital Principles

By Laura Walker McDonald. This post was originally published on the Digital Impact Alliance’s Blog on March 29, 2018.

As I have written about elsewhere, we need more evidence of what works and what doesn’t in the ICT4D and tech for social change spaces – and we need to hold ourselves to account more thoroughly and share what we know so that all of our work improves. We should be examining how well a particular channel, tool or platform works in a given scenario or domain; how it contributes to development goals in combination with other channels and tools; how the team selected and deployed it; whether it is a better choice than not using technology or using a different sort of technology; and whether or not it is sustainable.

At SIMLab, we developed our Framework for Monitoring and Evaluation of Technology in Social Change projects to help implementers to better measure the impact of their work. It offers resources towards a minimum standard of best practice which implementers can use or work toward, including on how to design and conduct evaluations. With the support of the Digital Impact Alliance (DIAL), the resource is now finalized and we have added new evaluation criteria based on the Principles for Digital Development.

Last week at MERL Tech London, DIAL was able to formally launch this product by sharing a 2-page summary available at the event and engaging attendees in a conversation about how it could be used. At the event, we joined over 100 organizations to discuss Monitoring, Evaluation, Research and Learning related to technology used for social good.

Why evaluate?

Evaluations provide snapshots of a project’s ongoing activity and progress at a specific point in time, based on systematic and objective review against certain criteria. They may inform future funding and program design, adjust current program design, or gather evidence to establish whether a particular approach is useful. They can be used to examine how, and how far, technology contributes to wider programmatic goals. If set up well, your program should already have evaluation criteria and research questions defined well before it’s time to commission the evaluation.

Evaluation criteria provide a useful frame for an evaluation, bringing in an external logic that might go beyond the questions that implementers and their management have about the project (such as ‘did our partnerships on the ground work effectively?’ or ‘how did this specific event in the host country affect operations?’) to incorporate policy and best practice questions about, for example, protection of target populations, risk management, and sustainability. The criteria for an evaluation could be any set of questions that draw on an organization’s mission, values, principles for action; industry standards or other best practice guidance; or other thoughtful ideas of what ‘good’ looks like for that project or organization. Efforts like the Principles for Digital Development can set useful standards for good practice, and could be used as evaluation criteria.

Evaluating our work, and sharing learning, is radical – and critically important

While the potential for technology to improve the lives of vulnerable people around the world is clear, it is also evident that these improvements are not keeping pace with the advances in the sector. Understanding why requires looking critically at our work and holding ourselves to account. There is still insufficient evidence of the contribution technology makes to social change work. What evidence there is often is not shared or the analysis doesn’t get to the core issues. Even more important, the learnings from what has not worked and why have not been documented and absorbed.

Technology-enabled interventions succeed or fail based on their sustainability, business models, data practices, choice of communications channel and technology platform, organizational change, risk models, and user support – among many other factors. We need to build and examine evidence that considers these issues and that tells us what has been successful, what has failed, and why. Holding ourselves to account against standards like the Principles is a great way to improve our practice, and honor our commitment to the people we seek to help through our work.

Using the Digital Principles as evaluation criteria

The Principles for Digital Development are a set of living guidance intended to help practitioners succeed in applying technology to development programs. They were developed, based on some pre-existing frameworks, by a working group of practitioners and are now hosted by the Digital Impact Alliance.

These nine principles could also form a useful set of evaluation criteria, not unlike the OECD evaluation criteria or the Sphere standards. The Principles overlap, so data can be used to examine more than one criterion, and not every evaluation would need to consider all of the Digital Principles.

Below are some examples of Digital Principles and sample questions that could initiate, or contribute to, an evaluation.

Design with the User: Great projects are designed with input from the stakeholders and users who are central to the intended change. How far did the team design the project with its users, based on their current tools, workflows, needs and habits, and work from clear theories of change and adaptive processes?

Understand the Existing Ecosystem: Great projects and programs are built, managed, and owned with consideration given to the local ecosystem. How far did the project work to understand the local, technology and broader global ecosystem in which the project is situated? Did it build on existing projects and platforms rather than duplicating effort? Did the project work sensitively within its ecosystem, being conscious of its potential influence and sharing information and learning?

Build for Sustainability: Great projects factor in the physical, human, and financial resources that will be necessary for long-term sustainability. How far did the project: 1) think through the business model, ensuring that the value for money and incentives are in place not only during the funded period but afterwards, and 2) ensure that long-term financial investments in critical elements like system maintenance and support, capacity building, and monitoring and evaluation are in place? Did the team consider whether there was an appropriate local partner to work through, hand over to, or support the development of, such as a local business or government department?

Be Data Driven: Great projects fully leverage data, where appropriate, to support project planning and decision-making. How far did the project use real-time data to make decisions, use open data standards wherever possible, and collect and use data responsibly according to international norms and standards?

Use Open Standards, Open Data, Open Source, and Open Innovation: Great projects make appropriate choices, based on the circumstances and the sensitivity of their project and its data, about how far to use open standards, open the project’s data, use open source tools and share new innovations openly. How far did the project: 1) take an informed and thoughtful approach to openness, thinking it through in the context of the theory of change and considering risk and reward, 2) communicate about what being open means for the project, and 3) use and manage data responsibly according to international norms and standards?

For a fuller set of guidance, see the complete Framework for Monitoring and Evaluating Technology, and the more nuanced and in-depth guidance on the Principles available on the Digital Principles website.

Technologies in monitoring and evaluation | 5 takeaways

Bloggers: Martijn Marijnis and Leonard Zijlstra. This post originally appeared on the ICCO blog on April 3, 2018.

On March 19 and 20, ICCO participated in MERL Tech 2018 in London. The conference explores the possibilities of technology in monitoring, evaluation, learning and research in development. About 200 like-minded participants from various countries attended. Key issues on the agenda were data privacy, data literacy within and beyond your organization, human-centred monitoring design and user-driven technologies. Interesting practices were shared, among others in using blockchain technologies and machine learning. Here are our most important takeaways:

1)  In many NGOs, data gathering still takes place in silos

Oxfam UK shared some valuable insights and practical tips on putting in place an infrastructure that combines data: start small and test, e.g. by building up a strong country use case; discuss with and learn from others; ensure privacy by design; and make sure senior leadership is involved. ICCO Cooperation currently faces a similar challenge, in particular in combining our household data with our global result indicators.

2)  Machine learning has potential for NGOs

While ICCO recently started to test machine learning in the food security field (see this blog), other organisations showcased interesting examples. The Wellcome Trust shared a case where they tried to answer the following question: is the organization informing and influencing policy and, if so, how? Wellcome teamed up its data lab and insight & analysis teams and started to use open APIs to pull data, in combination with natural language processing, to identify relevant cases of research supported by the organization. With 8,000 publications a year, this would be a daunting task for a human team. First, publications linked to Wellcome funding were extracted from a European database (EPMC), in combination with end-of-grant reports. Then the reference sections of WHO policy documents were scraped to see if, and to what extent, WHO policy was influenced and to identify potentially interesting cases for Wellcome’s policy team.
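
To make the mechanics of that pipeline concrete, here is a minimal sketch of the general approach, assuming the public Europe PMC REST search API and simple title matching against an already-scraped list of policy references. The query field and response structure are assumptions to be checked against the Europe PMC documentation, and the actual Wellcome work used richer natural language processing than this.

```python
import requests

EPMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def funded_publication_titles(page_size=100):
    """Pull titles of publications that acknowledge Wellcome funding from Europe PMC."""
    params = {
        "query": 'GRANT_AGENCY:"Wellcome Trust"',  # assumed query field; verify against the EPMC docs
        "format": "json",
        "pageSize": page_size,
    }
    response = requests.get(EPMC_SEARCH, params=params, timeout=30)
    response.raise_for_status()
    results = response.json().get("resultList", {}).get("result", [])
    return [record["title"] for record in results if "title" in record]

def likely_policy_citations(publication_titles, policy_references):
    """Flag funded publications whose titles appear inside entries of a scraped reference list."""
    matches = []
    for title in publication_titles:
        for reference in policy_references:
            if title.lower().strip(". ") in reference.lower():
                matches.append((title, reference))
    return matches

# policy_references would come from scraping the reference sections of WHO policy documents.
policy_references = ["(hypothetical) reference entry citing a Wellcome-funded study"]
print(likely_policy_citations(funded_publication_titles(), policy_references))
```

In practice a team would add fuzzy matching, deduplication and manual review of candidate matches, but the basic join between a funder's publication list and a policy body's reference lists is the core of the approach described above.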

3)  Use a standardized framework for digital development

See digitalprinciples.org. It gives, among other things, practical guidelines on how to use open standards and open data, how data can be reused, how privacy and security can be addressed, and how users can and should be involved in using technologies in development projects. It is a useful framework for evaluating your design.

4)  Many INGOs get nervous these days about blockchain technology

What is it: new hype or a real game changer? For many it is just untested technology with high risks and little upside for the developing world. But for INGOs working in, for example, agricultural value chains or humanitarian relief operations, its potential is consequential enough to merit a closer look. It starts with the underlying principle that users of a so-called blockchain can transfer value, or assets, between each other without the need for a trusted intermediary. The running history of those transactions is the blockchain; transactions are grouped into blocks, and all transactions are recorded in a ledger that is shared by all users of the blockchain. A minimal sketch of that data structure follows below.
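
Purely to illustrate the ledger idea described above – this is a toy sketch, not how any production blockchain works, and all names and values are invented – a hash-chained ledger can be expressed in a few lines:

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents (everything except its own stored hash)."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, previous_hash):
    """Bundle transactions with the hash of the previous block, chaining the ledger."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,       # e.g. a payment on delivery of produce
        "previous_hash": previous_hash,
    }
    block["hash"] = block_hash(block)
    return block

def ledger_is_consistent(chain):
    """Recompute every hash and check that each block points at the one before it."""
    for i, block in enumerate(chain):
        if block_hash(block) != block["hash"]:
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# A toy shared ledger: any participant holding a copy can verify it without an intermediary.
genesis = make_block([{"from": "buyer", "to": "farmer", "amount": 120}], previous_hash="0" * 64)
ledger = [genesis, make_block([{"from": "exporter", "to": "buyer", "amount": 500}], genesis["hash"])]
print(ledger_is_consistent(ledger))  # True unless an earlier block has been altered
```

Real systems add consensus rules, digital signatures and distribution across many nodes, but the chained hashes are what make tampering with earlier records detectable by every user of the ledger.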

The upside of blockchain applications is the considerable time and money they can save. Users rely on this shared ledger to provide a transparent view into the details of the assets or values, including who owns them, as well as descriptive information such as quality or location. Smallholder farmers could benefit (e.g. real-time payment on delivery, access to credit), as could international sourcing companies (e.g. traceability of produce without certification), banks (e.g. cost reductions, risk reduction), and refugees and the landless (e.g. registration, identification). Although we haven’t yet seen large-scale adoption of blockchain technology in the development sector, investors like the Bill and Melinda Gates Foundation and various venture capitalists are paying attention to this space.

But one of the main downsides or challenges for blockchain, as with agricultural technology at large, is connecting the technology to viable business models and compelling use cases. With or without tested technology, this is hard enough as it is and requires innovation, perseverance and a focus on real value for the end-user; ICCO’s G4AW projects are gaining experience with blockchain.

5)  Start thinking about data-use incentives

Over the years, ICCO has made significant investments in monitoring & evaluation and data skills training. Yet, as in many other organizations, limited measurable results of increased data use can be seen. US-based development consultancy Cooper&Smith shared revealing insights into data-use incentives. Working with three INGOs across five regions globally, it tested the hypothesis that better alignment of data-use training incentives leads to increased data use later on. The study looked at both financial and non-financial rewards that motivate individuals to behave in a particular way. Incentives included different training formats (e.g. individual, blended), different hardware (e.g. desktop, laptop, mobile phone), recognition (e.g. certificate, presentation at a conference), forms of feedback & support (e.g. one-on-one, peer group) and leisure time during the training (e.g. 2 hours/week, 12 hours/week). Data use was defined as the practice of collecting, managing, analyzing and interpreting data for making program policy and management decisions.

They found considerable differences in appreciation of the attributes. For instance, respondents overwhelmingly prefer a certificate in data management, but currently receive mostly no recognition, or recognition only from their supervisor. One region prefers a certificate while another prefers attending an international conference as a reward. Respondents prefer one-on-one feedback but instead receive only peer-to-peer support. The lesson here is that while most organizations apply a ‘one-size-fits-all’ reward system (or have no reward system at all), this study points to the need to develop a culturally sensitive and geographically smart reward system to see a real increase in data usage.

For many NGOs the data revolution has just begun, but we are underway!

What Are Your ICT4D Challenges? Take a DIAL Survey to Learn What Helps and Hurts Us All

By Laura Walker McDonald, founder of BetterLab.io. Originally posted on ICT Works on March 26, 2018.

DIAL ICT4D Survey

When it comes to the impact and practice of our ICT4D work, we’re long on stories and short on evidence. My previous organization, SIMLab, developed Frameworks on Context Analysis and Monitoring and Evaluation of technology projects to try to tackle the challenge at the micro level.

But we also have little aggregated data about the macro trends and challenges of our growing sector. That’s led the Digital Impact Alliance (DIAL) to conduct an entirely new kind of data-gathering exercise, and one that would add real quantitative data to what we know about what it’s like to implement projects and develop platforms.

Please help us gather new insights from more voices

Please take our survey on the reality of delivering services to vulnerable populations in emerging markets using digital tools. We’re looking for experiences from all of DIAL’s major stakeholder groups:

  • NGO leaders from the project site to the boardroom;
  • Technology experts;
  • Platform providers and mobile network operators;
  • Governments and donors.

We’re supplementing this survey with findings from in-depth interviews with 50 people from across those groups.

Please forward this survey!

We want to hear from those whose voices aren’t usually heard by global consultation and research processes. We know that the most innovative work in our space happens in projects and collaborations in the Global South – closest to the underserved communities who are our highest priority.

Please forward this survey so we can hear from those innovators, from the NGOs, government ministries, service providers and field offices who are doing the important work of delivering digitally enabled services to communities every day.

It’s particularly important that we hear from colleagues in government, who may be supporting digital development projects in ways far removed from the usual digital development conversation.

Why should I take and share the survey?

We’ll use the data to help measure the impact of what we do – this will be a baseline for indicators of interest to DIAL. But it will also provide an opportunity for you to help us build a unique snapshot of the challenges and opportunities you face in your work, whether in funding, designing, or delivering these services.

You’ll be answering questions we don’t believe are asked enough – about your partnerships, about how you cover your costs, and about the technical choices you’re making, specific to the work you do – whether you’re a businessperson, NGO worker, technologist, donor, or government employee.

How do I participate?

Please take the survey here. It will take 15-20 minutes to complete, and you’ll be answering questions, among others, about how you design and procure digital projects; how easy and how cost-effective they are to undertake; and what you see as key barriers. Your response can be anonymous.

To thank you for your time, if you leave us your email, we’ll share our findings with you and invite you into the conversation about the results. We’ll also be sharing our summary findings with the community.

We hope you’ll help us – and share this link with others.

Please help us get the word out about our survey, and help us gather more and better data about how our ecosystem really works.

Feedback Report from MERL Tech London 2018

MERL Tech London happened on March 19-20, 2018. Here are some highlights from session level feedback and the post-conference survey on the overall MERL Tech London experience.

If you attended MERL Tech London, please get in touch if you have any further questions about the feedback report or if you would like us to send you detailed (anonymized) feedback about a session you led. Please also be sure to send us your blog posts & session summaries so that we can post them on MERL Tech News!

Background on the data

  • 54 participants (~27%) filled out the post-conference survey via Google Forms.
  • 59 (~30%) rated and/or commented on individual sessions via the Sched app. Participants chose from three ‘emoji’ options: a happy face 🙂, a neutral face 😐, and a sad face 🙁. Participants could also leave their comments on individual sessions.
  • We received 616 session ratings/comments via Sched. Some people rated the majority of sessions they attended; others only rated 1-2 sessions.
  • Some reported that they did not feel comfortable rating sessions in the Sched app because they were unclear about whether session leads and the public could see the rating. In future, we will let participants know that only Sched administrators can see the identity of commenters and the ratings given to sessions.
  • We do not know if there is an overlap between those who filled out Sched and those that fed back via Google Forms because the Google Forms survey is anonymous.

Overall feedback

Here’s how survey participants rated the overall experience:

Breakout sessions – 137 ratings: 69% 🙂 30% 😐 and 13% 🙁

Responses were fairly consistent across both Sched ratings and Google Forms (the form asked people to identify their favorite session). Big data and data science sessions stand out with the highest number of favorable ratings and comments. General Data Protection Regulation (GDPR) and responsible data made an important showing, as did the session on participatory video in evaluation.

Sessions with favorable comments tended to include or combine elements of:

  • an engaging format
  • good planning and facilitation
  • strong levels of expertise
  • clear and understandable language and examples
  • and strategic use of case studies to point at a bigger picture that is replicable to other situations.

Below are the breakout sessions that received the most favorable ratings and comments overall. (Plenty of other sessions were also rated well but did not make the “top-top.”)

  • Be it resolved: In the near future, conventional evaluation and big data will be successfully integrated – “Brilliant session! Loved the format! Fantastic to have such experts taking part. Really appreciated the facilitation and that there was a time at the end for more open questions/discussion.”
  • Innovative Use of Theory-Based and Data Science Evaluation Approaches – “Most interesting talk of the day (maybe more for the dedicated evaluation practitioners), very practical and easy to understand and I’m really looking forward to hearing more about the results as the work progresses!”
  • Unpacking How Change Happened (or Didn’t): Participatory Video and Most Significant Change – “Right amount of explanation and using case studies to illustrate points and respond to questions rather than just stand-alone case studies.”
  • GDPR – What Is It and What Do We Do About It? – “Great presentation starting off with some historical background, explaining with clarity how this new legislation is a rights-based approach and concluding on how for Oxfam this is not a compliance project but a modification in data culture. Amazing, innovative and the speaker knew his area very well.”
  • The Best of Both Worlds? Combining Data Science and Traditional M&E to Understand Impact – “I learned so much from this session and was completely inspired by the presenter and the content. Clear – well paced – honest – open – collaborative and packed with really good insight. Amazing.”
  • Big Data, Adaptive Management, and the Future of MERL – “Quite a mixed bag of presenters, with focus on different pieces of the overall topic. Speakers from Novometrics were particularly engaging and stimulated some good discussion.”
  • Blockchain: Getting Past the Hype and Considering its Value for MERL – “Great group with good facilitation. Open-ended question left lots of room for discussion without bias towards a particular outcome. Learned lots and not just about blockchain.”
  • LEAP, and How to Bring Data to Life in Your Organization – “Really great session, highly interactive, rich in concepts clearly and convincingly explained. No questions were left unanswered. Very insightful suggestions shared between the presenters/facilitators and the audience. Should be on the agenda of the next MERL Tech conference as well.”
  • Who Watches the Watchers? Good Practice for Ethical MERL(Tech) – “I came out with some really helpful material. Collaborative session and good workshop participants willing to share and mind map. Perhaps the lead facilitator could have been a bit more contextual. Not always clear. But our table session was really helpful and output useful.”
  • The GDPR is coming! Now what?! Practical Steps to Help You Get Ready – “Good session. Appreciated the handouts….”

What could session leads improve on?

We also had a few sessions that were ranked closer to 😐 (somewhere around a 6 or 6.5 on a scale of 1-10). Why did participants rate some sessions lower?

  • “Felt like a product pitch”
  • “Title was misleading”
  • Participatory activity was unclear
  • Poor time management
  • “Case studies did not expand to learning for the sector – too much ‘this is what we did’ and not enough ‘this is what it means.’”
  • Poor facilitation/moderation
  • “Too unstructured, meandering”
  • Low energy
  • “Only a chat among panelists, very little time for Q&A. No space to engage”

Additionally, some sessions had participants with very diverse levels of expertise and varied backgrounds and expectations, which seemed to affect session ratings.

Lightning Talks – 182 ratings: 74% 🙂 22% 😐 4% 🙁

Lightning talks consistently get the highest ratings at MERL Tech, and this year was no exception. As one participant said, “my favorite sessions were the lightning talks because they gave a really quick overview of really concrete uses of technology in M&E work. This really helped in getting an overview of the type of activities and projects that were going on.”  Participants rated almost all the lightning talks positively.

Plenary sessions – 192 ratings: 77% 🙂 21% 😐 and 2% 🙁

Here we include: the welcome, discussions on MERL Tech priorities on Day 1, opening talks on both days, the Day 1 summary, the panel with donors, the closing ‘fishbowl’, and the Fail Fest.

Opening Talks:

  • People appreciated André Clarke’s stage-setting talk on Day 1. “Clear, accessible and thoughtful.” “Nice deck!”
  • Anahi Ayala Iacucci’s opening talk on Day 2 was a hit: “Great keynote! Anahi is very engaging but also her content was really rich. Useful that she used a lot of examples and provided a lot of history.” And “Like Anahi says ‘The question to ask is what does technology do _to_ development, rather than what can technology do _for_ development.'”

Deep Dive into Priorities for the Sector:

  • Most respondents enjoyed the self-directed conversations around the various topics.
  • “Great way to set the tone for the following sessions….” “Some really valuable and practical insights shared.” “Great group, very interesting discussion, good way to get to know a few people.”

Fail Fest:

  • The Fail Fest was enjoyed by virtually everyone. “Brilliantly honest! Well done for having created the space and thank you to those who shared so openly.” “Awesome! Anahi definitely stole the show. What an amazing way to share learning, so memorable. Again, one to steal….” “I thought this was a fun way to end the first day. All the presenters were really good and the session was well framed and facilitated by Wayan.”

Fishbowl:

  • There were mixed reactions to the “Fish Bowl”.
  • “Great session and way to close the event!” “Fascinating – especially insights from Michael and Veronica.” “Not enough people volunteered to speak.” “Some speakers went on too long.”

Lunchtime Demos – 23 ratings: 52% 🙂 34% 😐 and 13% 🙁

We know that many MERL Tech participants are wary of being “sold” to. Feedback from past conferences has been that participants don’t like sales pitches disguised as breakout sessions and lightning talks. So, this year we experimented with the idea of lunchtime demo sessions. The idea was that these optional sessions would allow people with a specific interest in a tool or product to have dedicated time with the tool creators for a demo or Q&A. We hoped that doing demo sessions separately from breakout sessions would make the nature of the sessions clear. Judging from the feedback, we didn’t hit the mark. We’ll try to do better next time!

What went wrong?

  • Timing: “The schedule was too tight.” “Give more time to the lunch time demo sessions or change the format. I missed the Impact Mapper session on day 1 as there was insufficient time to eat, go for a comfort break and network. This is really disappointing. I suggest a dedicated hour in the programme on the first day to visit all the ICT provider stalls.”
  • Content: “Demo sessions were more like advertising sessions by respective companies, while nicely titled as if they were to explore topical issues. Demo sessions were all promising the world to us while we know how much challenge technology application faces in real-world. Overall so many demo sessions within a short 2-day conference compromised the agenda”
  • Framing and intent: “I don’t know that framing the lunch sessions as ‘product demos’ makes a ton of sense. Maybe force people to have real case studies or practical (hands-on) sessions, and make them regular sessions? Not sure.” “I think more care needs to be taken to frame the sessions run by the software companies with a proper declaration of interests…. Sessions led by software reps were a little less transparent in that they pitched their product, but through some other topic that people would be interested in. I think that it would be wise to make it a DOI [declaration of interest] that is scripted when people who have an interest declare their interest up front for every panel discussion at the beginning, even if they did a previous one. I think that way the rules would be a little clearer.”

General Comments

Because we bring such a diverse group together in terms of field, experience, focus and interest, expectations are varied, and we often see conflicting suggestions. Whereas some would like more MERL content, others want more Tech content. Whereas some learn a lot, others feel they have heard much of this before. Here are a few of the overall comments from Sched and the Google Form.

Who else should be at MERL Tech?

  • More donors “EU, DFID, MCC, ADB, WB, SIDA, DANIDA, MFA Finland”
  • “People working with governments in developing countries”
  • “People from the ‘field’. It was mentioned in one of the closing comments that the term ‘field’ is outdated and we are not sure what we mean anymore. Wrong. There couldn’t be a more striking difference in discussions during those two days between those with solid field experience and those lacking in it.”
  • “More Brits? There were a lot of Americans that came in from DC…”

Content that participants would like to see in the future

  • More framing: “An opening session that explains what MERL Tech is and all the different ways it’s being or can be used”
  • More specifics on how to integrate technology for specific purposes and for new purposes: “besides just allowing quicker and faster data collection and analysis”
  • More big data/data science: “Anything which combines data science, stats and qualitative research is really interesting for me and seems to be the direction a lot of organisations are going in.”
  • Less big data/data science: “The big data stuff was less relevant to me”
  • More MERL-related sessions: “I have a tech background, so I would personally have liked to have seen more MERL-related sessions.”
  • More tech-related sessions: “It was my first MERLTech, so enjoyed it. I felt that many of the presentations could have been more on-point with respect to the Tech side, rather than the MERL end (or better focus on the integration of the two).”
  • More “R” (Research): Institutional learning and research (evaluations as a subset of research).
  • More “L”: Learning treated as a topic of its own. “By this I mean the capture of tacit knowledge and good practice, and use of this learning for adaptive management. Compared to my last MERL Tech, I felt this meeting better featured evaluation, or at least spoke of ‘E’ as its own independent letter. I would like to see this for ‘L.’”
  • More opportunities for smaller organisations to get best practice lessons.
  • More ethics discussions: “Consequences/whether or not we should be using personal data held by privately owned companies (like call details records from telecomms companies)” “The conceptual issues around the power dynamics and biases in data and tech ownership, collection, analysis and use and what it means for the development sector.”
  • Hands-on tutorials “for applying some of the methods people have used would be amazing, although may be beyond the remit of this conference.”
  • Coaching sessions: “one-on-ones to discuss challenges in setting up good M&E systems in smaller organisations – the questions we had, and the challenges we face did not feel like they would have been of relevance to the INGOs in the room.”

Some “Ah ha! Moments”

  • “The whole tech discussion needs to be framed around evaluation practice and theory – it seems to me that people come at this being somewhat data obsessed and driven but not starting from what we want to know and why that might be useful.”
  • “There is still quite a large gap between the data scientist and the M&E world – we really need to think more on how to bridge that gap. Despite the fact that it is recognized I do feel that much of the tech stuff was ‘because we can’ and not because it is useful and answers to a concrete problem. On the other hand some of the tech was so complex that I also couldn’t assess whether it was really useful and what possible risks could be.”
  • “I was surprised to see the scale of the impact GDPR is apparently making. Before the conference, I usually felt that most people didn’t have much of an interest in data privacy and responsible data.”
  • “That people were being honest and critical about tech!”
  • “That the tech world remains data hungry and data obsessed!”
  • “That this group is seriously confused about how tech and MERL can be used effectively as a general business practice.”
  • “This community is learning fast!”
  • “Hot topics like Big Data and Block Chain are only new tools, not silver bullets. Like RCTs a few years ago, we are starting to understand their best use and specific added value.”

Kudos

  • “A v useful conference for a growing sector. Well done!”
  • “Great opportunity for bringing together different sectors – sometimes it felt we were talking across tech, merl, and programming without much clarity of focus or common language but I suppose that shows the value of this space to discuss and work towards a common understanding and debate.”
  • “Small but meaningful to me – watching session leads attend other sessions and actively participating was great. We have such an overlap in purpose and in some cases almost no overlap in skillsets. Really felt like MERLTech was a community taking turns to learn from each other, which is pretty different from the other conferences I’ve been to, where the same people often present the same idea to a slightly different audience each year.”
  • “I loved the vibe. I’m coming to this late in my career but was made to feel welcome. I did not feel like an idiot. I found it so informative and some sessions were really inspiring. It will probably become an annual must go to event for me.”
  • “I was fully blown away. I haven’t learnt so much in a long time during a conference. The mix of types of sessions helps massively make the most of the knowledge in room, so keep up that format in the future.”
  • “I absolutely loved it. It felt so good to be with like minded people who have similar concerns and values….”

Thanks again to everyone who filled out the feedback forms and rated their sessions. This really does help us to adapt and improve. We take your ideas and opinions seriously!

If you’d like to experience MERL Tech, please join us in Johannesburg August 1-2, 2018, or Washington, DC, September 6-7, 2018!  The call for session ideas for MERL Tech DC is open through April 30th – please submit yours now!

Digital Data Collection and the Maturing of a MERL Technology

by Christopher Robert, CEO of Dobility (SurveyCTO). This post was originally published on March 15, 2018, on the SurveyCTO blog.

Digital data collection: stakeholders and complex relationships

Needs, markets, and innovation combine to produce technological change. This is as true in the international development sector as it is anywhere else. And within that sector, it’s as true in the broad category of MERL (monitoring and evaluation, research, and learning) technologies as it is in the narrower sub-category of digital data collection technologies. Here, I’ll consider the recent history of digital data collection technology as an example of MERL technology maturation – and as an example, more broadly, of the importance of market structure in shaping the evolution of a technology.

My basic observation is that, as digital data collection technology has matured, the same stakeholders have been involved – but the market structure has changed their relative power and influence over time. And it has been these very changes in power and influence that have changed the cost and nature of the technology itself.

First, when it comes to digital data collection in the development context, who are the stakeholders?

  • Donors. These are the primary actors who fund development work, evaluation of development policies and programs, and related research. There are mega-actors like USAID, Gates, and the UN agencies, but also many other charities, philanthropies, and public or nonprofit actors, from Catholic Charities to the U.S. Centers for Disease Control and Prevention.
  • Developers. These are the designers and software engineers involved in producing technology in the space. Some are students or university faculty, some are consultants, many work full-time for nonprofits or businesses in the space. (While some work on open-source initiatives in a voluntary capacity, that seems quite uncommon in practice. The vast majority of developers working on open-source projects in the space get paid for that work.)
  • Consultants and consulting agencies. These are the technologists and other specialists who help research and program teams use technology in the space. For example, they might help to set up servers and program digital survey instruments.
  • Researchers. These are the folks who do the more rigorous research or impact evaluations, generally applying social-science training in public health, economics, agriculture, or other related fields.
  • M&E professionals. These are the people responsible for program monitoring and evaluation. They are most often part of an implementing program team, but it’s also not uncommon to share more centralized (and specialized) M&E teams across programs or conduct outside evaluations that more fully separate some M&E activities from the implementing program team.
  • IT professionals. These are the people responsible for information technology within those organizations implementing international development programs and/or carrying out MERL activities.
  • Program beneficiaries. These are the end beneficiaries meant to be aided by international development policies and programs. The vast majority of MERL activities are ultimately concerned with learning about these beneficiaries.

Digital data collection stakeholders

These different stakeholders have different needs and preferences, and the market for digital data collection technologies has changed over time – privileging different stakeholders in different ways. Two distinct stages seem clear, and a third is coming into focus:

  1. The early days of donor-driven pilots and open source. These were the days of one-offs, building-your-own, and “pilotitis,” where donors and developers were effectively in charge and there was a costly additional layer of technical consultants between the donors/developers and the researchers and M&E professionals who had actual needs in the field. Costs were high, and some combination of donor and developer preferences reigned supreme.
  2. Intensifying competition in program-adopted professional products. Over time, professional products emerged that began to directly market to – and serve – researchers and M&E professionals. Costs fell with economies of scale, and the preferences of actual users in the field suddenly started to matter in a more direct, tangible, and meaningful way.
  3. Intensifying competition in IT-adopted professional products. Now that use of affordable, accessible, and effective data-collection technology has become ubiquitous, it’s natural for IT organizations to begin viewing it as a kind of core organizational infrastructure, to be adopted, supported, and managed by IT. This means that IT’s particular preferences and needs – like scale, standardization, integration, and compliance – start to become more central, and costs unfortunately rise.

While I still consider us to be in the glory days of the middle stage, where costs are low and end-users matter most, there are still plenty of projects and organizations living in that first stage of more costly pilots, open source projects, and one-offs. And I think that the writing’s very much on the wall when it comes to our progression toward the third stage, where IT comes to drive the space, innovation slows, and end-user needs are no longer dominant.

Full disclosure: I myself have long been a proponent of the middle phase, and I am proud that my social enterprise has been able to help graduate thousands of users from that costly first phase. So my enthusiasm for the middle phase began many years ago and in fact helped to launch Dobility.

THE EARLY DAYS OF DONOR-DRIVEN PILOTS AND OPEN SOURCE

Digital data collection stage 1 (the early days)

In the beginning, there were pioneering developers, patient donors, and program or research teams all willing to take risks and invest in a better way to collect data from the field. They took cutting-edge technologies and found ways to fit them into some of the world’s most difficult, least-cutting-edge settings.

In these early days, it mattered a lot what could excite donors enough to open their checkbooks – and what would keep them excited enough to keep the checks coming. So the vital need for large and ongoing capital injections gave donors a lot of influence over what got done.

Developers also had a lot of sway. Donors couldn’t do anything without them, and they also didn’t really know how to actively manage them. If a developer said “no, that would be too hard or expensive” or even “that wouldn’t work,” what could the donor really say or do? They could cut off funding, but that kind of leverage only worked for the big stuff, the major milestones and the primary objectives. For that stuff, donors were definitely in charge. But for the hundreds or thousands of day-to-day decisions that go into any technology solution, it was the developers effectively in charge.

Actual end-users in the field – the researchers and M&E professionals who were piloting or even trying to use these solutions – might have had some solid ideas about how to guide the technology development, but they had essentially no levers of control. In practice, the solutions being built by the developers were often so technically-complex to configure and use that there was an additional layer of consultants (technical specialists) sitting between the developers and the end-users. But even if there wasn’t, the developers’ inevitable “no, sorry, that’s not feasible,” “we can’t realistically fit that into this release,” or simple silence was typically the end of the story for users in the field. What could they do?

Unfortunately, without meaning any harm, most developers react by pushing back on whatever is contrary to their own preferences (I say this as a lifelong developer myself). Something might seem like a hassle, or architecturally unclean, and so a developer will push back, say it’s a bad idea, drag their heels, even play out the clock. In the past five years of Dobility, there have been hundreds of cases where a developer has said something to the effect of “no, that’s too hard” or “that’s a bad idea” to things that have turned out to (a) take as little as an hour to actually complete and (b) provide massive amounts of benefit to end-users. There’s absolutely no malice involved, it’s just the way most of them/us are.

This stage lasted a long time – too long, in my view! – and an entire industry of technical consultants and paid open-source contributors grew up around an approach to digital data collection that didn’t quite embrace economies of scale and never quite privileged the needs or preferences of actual users in the field. Costs were high and complaints about “pilotitis” grew louder.

INTENSIFYING COMPETITION IN PROGRAM-ADOPTED PROFESSIONAL PRODUCTS

Digital data collection stage 2 (the glory days)

But ultimately, the protagonists of the early days succeeded in establishing and honing the core technologies, and in the process they helped to reveal just how much was common across projects of different kinds, even across sectors. Some of those protagonists also had the foresight and courage to release their technologies with the kinds of permissive open-source licenses that would allow professionalization and experimentation in service and support models. A new breed of professional products directly serving research, program, and M&E teams was born – in no small part out of a single, tremendously-successful open-source project, Open Data Kit (ODK).

These products tended to be sold directly to end-users, and were increasingly intended for those end-users to be able to use themselves, without the help of technical staff or consultants. For traditionalists of the first stage, this was a kind of heresy: it was considered gauche at best and morally wrong at worst to charge money for technology, and it was seen as some combination of impossible and naive to think that end-users could effectively deploy and manage these technologies without technical assistance.

In fact, the new class of professional products was not designed to be used entirely without assistance. But they were designed to require as little assistance as possible, and the assistance came with the product instead of being provided by a separate (and separately-compensated) internal or external team.

A particularly successful breed of products (like SurveyCTO) came to use a “Software as a Service” (SaaS) model that streamlined both product delivery and support, ramping up economies of scale and driving down costs in the process. When such products offered technical support free-of-charge as part of the purchase or subscription price, there was a built-in incentive to improve the product: since tech support was so costly to deliver, improving the product such that it required less support became one of the strongest incentives driving product development. Those who adopted the SaaS model not only had to earn every dollar of revenue from end-users, but they had to keep earning that revenue month in, month out, year in, year out, in order to retain business and therefore the revenue needed to pay the bills. (Read about other SaaS benefits for M&E in this recent DevResults post.)

It would be difficult to overstate the importance of these incentives to improve the product and earn revenue from end-users. They are nothing short of transformative. Particularly once there is active competition among vendors, users are squarely in charge. They control the money, their decisions make or break vendors, and so their preferences and needs are finally at the center.

Now, in addition to the “it’s heresy to charge money or think that end-users can wield this kind of technology” complaints that used to be more common, a different kind of complaint has emerged: there are too many solutions! It’s overwhelming how many digital data collection solutions there are now. Some go so far as to decry the duplication of effort and to claim that the free market is inefficient or failing; they suggest that donors, consultants, or experts be put back in charge of resource allocation, to re-impose some semblance of sanity on the space.

But meanwhile, we’ve experienced a kind of golden age in terms of who can afford digital data collection technology, who can wield it effectively, and in what kinds of settings. There are a dizzying number of solutions – but most of them cater to a particular type of need, or have optimized their business model in a particular sort of way. Some, like us, rely nearly 100% on subscription revenues; others fund themselves primarily from service provision; others are trying interesting ways to cross-subsidize from bigger, richer users so that they can offer free or low-cost options to smaller, poorer ones. We’ve overcome pilotitis, economies of scale are finally kicking in, and I think that the social benefits have been tremendous.

INTENSIFYING COMPETITION IN IT-ADOPTED PROFESSIONAL PRODUCTS

Digital data collection stage 3 (the coming days)

It was the success of the first stage that laid the foundation for the second stage, and so too it has been the success of the second stage that has laid the foundation for the third: precisely because digital data collection technology has become so affordable, accessible, and ubiquitous, organizations are increasingly thinking that it should be IT departments that procure and manage that technology.

Part of the motivation is the very proliferation of options that I mentioned above. While economics and the historical success of capitalism have taught us that a marketplace thriving with competition is most often a very good thing, it’s less clear that a wide variety of options is good within any single organization. At the very least, there are very good reasons to want to standardize some software and processes, so that different people and teams can more effortlessly share knowledge and collaborate, and so that there can be some economies of scale in training, support, and compliance.

Imagine if every team used its own product and file format for writing documents, for example. It would be a total disaster! The frictions across and between teams would be enormous. And as data becomes more and more core to the operations of more organizations – the way that digital documents became core many years ago – it makes sense to want to standardize and scale data systems, to streamline integrations, just for efficiency purposes.

Growing compliance needs only up the ante. The arrival of the EU’s General Data Protection Regulation (GDPR) this year, for example, raises the stakes for EU-based (or even EU-touching) organizations considerably, imposing stiff new data privacy requirements and steep penalties for violations. Coming into compliance with GDPR and other data-security regulations will be effectively impossible if IT can’t play a more active role in the procurement, configuration, and ongoing management of data systems; and it will be impractical for IT to play such a role for a vast array of constantly-shifting technologies. After all, IT will require some degree of stability and scale.

But if IT takes over digital data collection technology, what changes? Does the golden age come to an end?

Potentially. And there are certainly very good reasons to worry.

First, changing who controls the dollars – who’s in charge of procurement – threatens to entirely up-end the current regime, where end-users are directly in charge and their needs and preferences are catered to by a growing body of vendors eager to earn their business.

It starts with the procurement process itself. When IT is in charge, procurement processes are long, intensive, and tend to result in a “winner take all” contract. After all, it makes sense that IT departments would want to take their time and choose carefully; they tend to be choosing solutions for the organization as a whole (or at least for some large class of users within the organization), and they most often intend to choose a solution, invest heavily in it, and have it work for as long as possible.

This very natural and appropriate method that IT uses to procure is radically different from the method used by research, program, and M&E teams. And it creates a radically different dynamic for vendors.

Vendors first have to buy into the idea of investing heavily in these procurement processes – which some may simply choose not to do. Then they have to ask themselves, “what do these IT folks care most about?” In order to win these procurements, they need to understand the core concerns driving the purchasing decision. As in the old saying “nobody ever got fired for choosing IBM,” safety, stability, and reputation are likely to be very important. Compliance issues are likely to matter a lot too, including the vendor’s established ability to meet new and evolving standards. Integrations with corporate systems are likely to count for a lot too (e.g., integrating with internal data and identity-management systems).

Does it still matter how well the vendor meets the needs of end-users within the organization? Of course. But note the very important shift in the dynamic: vendors now have to get the IT folks to “yes” and so would be quite right to prioritize meeting their particular needs. Nobody will disagree that end-users ultimately matter, but meanwhile the focus will be on the decision-makers. The vendors that meet the decision-makers’ needs will live, the others will die. That’s simply one aspect of how a free market works.

Note also the subtle change in dynamic once a vendor wins a contract: the SaaS model where vendors had to re-earn every customer’s revenue month in, month out, is largely gone now. Even if the contract is formally structured as a subscription or has lots of exit options, the IT model for technology adoption is inherently stickier. There is a lot more lock-in in practice. Solutions are adopted, they’re invested in at large scale, and nobody wants to walk away from that investment. Innovation can easily slow, and nobody wants to repeat the pain of procurement and adoption in order to switch solutions.

And speaking of the pain of the procurement process: costs have been rising. After all, the procurement process itself is extremely costly to the vendor – especially when it loses, but even when it wins. So that’s got to get priced in somewhere. And then all of the compliance requirements, all of the integrations with corporate systems, all of that stuff’s really expensive too. What had been an inexpensive, flexible, off-the-shelf product can easily become far more expensive and far less flexible as it works itself through IT and compliance processes.

What had started out on a very positive note (“let’s standardize and scale, and comply with evolving data regulations”) has turned in a decidedly dystopian direction. It’s sounding pretty bad now, and you wouldn’t be wrong to think “wait, is this why a bunch of the products I use for work are so much more frustrating than the products I use as a consumer?” or “if Microsoft had to re-earn every user’s revenue for Excel, every month, how much better would it be?”

While I don’t think there’s anything wrong with the instinct for IT to take increasing control over digital data collection technologies, I do think that there’s plenty of reason to worry. There’s considerable risk that we lose the deep user orientation that has just been picking up momentum in the space.

WHERE WE’RE HEADED: STRIKING A BALANCE

Digital data collection stage 4 (finding a balance?)

If we don’t want to lose the benefits of a deep user orientation in this particular technology space, we will need to work pretty hard – and be awfully clever – to avoid losing them. People will say “oh, but IT just needs to consult research, program, and M&E teams, include them in the process,” but that’s hogwash. Or rather, it’s woefully inadequate. The natural power of those controlling resources to bend the world to their preferences and needs is just too powerful for mere consultation or inclusion to overcome.

And the thing is: what IT wants and needs is good. So the solution isn’t just “let’s not let them anywhere near this, let’s keep the end-users in charge.” No, that approach collapses under its own weight eventually, and certainly it can’t meet rising compliance requirements. It has its own weaknesses and inefficiencies.

What we need is an approach – a market structure – that allows the needs of IT and the needs of end-users both to matter to appropriate degrees.

With SurveyCTO, we’re currently in an interesting place: we’re becoming split between serving end-users and serving IT organizations. And I suppose as long as we’re split, with large parts of our revenue coming from each type of decision-maker, we remain incentivized to keep meeting everybody’s needs. But I see trouble on the horizon: the IT organizations can pay more, and more organizations are shifting in that direction… so once a large-enough proportion of our revenue starts coming from big, winner-take-all IT contracts, I fear that our incentives will be forever changed. In the language of economics, I think that we’re currently living in an unstable equilibrium. And I really want the next equilibrium to serve end-users as well as the last one!

Present or lead a session at MERL Tech DC!

Please sign up to present, register to attend, or reserve a demo table for MERL Tech DC 2018 on September 6-7, 2018 at FHI 360 in Washington, DC.

We will engage 300 practitioners from across the development ecosystem for a two-day conference seeking to turn the theories of MERL technology into effective practice that delivers real insight and learning in our sector.

MERL Tech DC 2018, September 6-7, 2018

Digital data and new media and information technologies are changing monitoring, evaluation, research and learning (MERL). The past five years have seen technology-enabled MERL growing by leaps and bounds. We’re also seeing greater awareness and concern for digital data privacy and security coming into our work.

The field is in constant flux with emerging methods, tools and approaches, such as:

  • Adaptive management and developmental evaluation
  • Faster, higher quality data collection
  • Remote data gathering through sensors and self-reporting by mobile
  • Big data, data science, and social media analytics
  • Story-triggered methodologies

Alongside these new initiatives, we are seeing increasing documentation and assessment of technology-enabled MERL initiatives. Good practice guidelines are emerging and agency-level efforts are making new initiatives easier to start, build on and improve.

The swarm of ethical questions related to these new methods and approaches has spurred greater attention to areas such as responsible data practice and the development of policies, guidelines and minimum ethical standards for digital data.

Championing the above is a growing and diversifying community of MERL practitioners, assembling from a variety of fields; hailing from a range of starting points; espousing different core frameworks and methodological approaches; and representing innovative field implementers, independent evaluators, and those at HQ who drive and promote institutional policy and practice.

Please sign up to present, register to attend, or reserve a demo table for MERL Tech DC to experience 2 days of in-depth sharing and exploration of what’s been happening across this cross-disciplinary field, what we’ve been learning, complex barriers that still need resolving, and debate around the possibilities and the challenges that our field needs to address as we move ahead.

Submit Your Session Ideas Now

Like previous conferences, MERL Tech DC will be a highly participatory, community-driven event and we’re actively seeking practitioners in monitoring, evaluation, research, learning, data science and technology to facilitate every session.

Please submit your session ideas now. We are looking for a range of topics, including:

  • Experiences and learning at the intersection of MERL and tech
  • Ethics, inclusion, safeguarding, and data privacy
  • Data (big data, data science, data analysis)
  • Evaluation of ICT-enabled efforts
  • The future of MERL
  • Tech-enabled MERL Failures

Visit the session submission page for more detail on each of these areas.

Submission Deadline: Monday, April 30, 2018 (at midnight EST)

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. You will hear back from us in early June and, if selected, you will be asked to submit the final session title, summary and outline by June 30.

Register Now

Please sign up to present or register to attend MERL Tech DC 2018 to examine these trends with an exciting mix of educational keynotes, lightning talks, and group breakouts, including an evening reception and Fail Fest to foster needed networking across sectors and an exploration of how we can learn from our mistakes.

We are charging a modest fee to better allocate seats and we expect to sell out quickly again this year, so buy your tickets or demo tables now. Event proceeds will be used to cover event costs and to offer travel stipends for select participants implementing MERL Tech activities in developing countries.

You can also submit session ideas for MERL Tech Jozi, coming up on August 1-2, 2018! Those are due on March 31st, 2018!

What’s the Deal with Data — Bridging the Data Divide in Development

Written by Ambika Samarthya-Howard, Head of Communications, Praekelt.org. This post was originally published on March 26, 2018, on Medium.

Working on communications at Praekelt.org, I have had the opportunity to see first-hand the power of sharing stories in driving impact and changing attitudes. Over the past month I’ve attended several unrelated events all touching on data, evaluation, and digital development which have reaffirmed the importance of finding common ground to share and communicate data we value.

Storytelling and Data

I recently presented a poster on “Storytelling for Organisational Change” at the University of London’s Behavior Change Conference. Our current evaluations at Praekelt draw on work by the center, which is a game-changer in the field. But I didn’t submit an abstract on our agile, experimental investigations: I was sharing information about how I was using films and our storytelling to create change within the organisation.

After my abstract was accepted, I realized I had to present my findings as a poster. Many practitioners (like myself) have no idea what a poster entails. Thankfully I got advice from academics and support from design colleagues to translate my videos, photos, and storytelling deck into a visual form I could pin up. When the printers in New York told me “this is a really great poster,” I started picking up the hint that it was atypical.

Once I arrived at the poster hall at UCL, I could see why. Nearly all, if not all, of the posters in the room had charts and numbers and graphs — lots and lots of data points. My poster, on the other hand, had almost no “data”. It was colorful, showed a few engaging images, told the story of our human-centered design process, and was accompanied by videos playing on my laptop alongside the booth. It was definitely a departure from the “research” around the room.

This divide between research and practice showed up many times throughout the conference. For starters, this year attendees were asked to choose a sticker label based on whether they were researchers/academics or programme staff/practitioners. Many of the sessions talked about how to bridge the divide: how to make research more accessible to practitioners, and how to bring learnings from programme creators into academia.

Thankfully for me, the tight-knit group of practitioners felt solace and connection in my chart-less poster, and perhaps the academics felt a bit of relief at the visuals as well: we went home with one of the best poster awards at the conference.

Data Parties and Cliques

The London conference was only the beginning of my awareness of the conversations around the data divide in digital development. “Why are we even using the word data? Does anyone else value it? Does anyone else know what it means?” Anthony Waddell, Chief Innovation Officer of IBI, asked provocatively at a breakout session at USAID’s Digital Development Forum in Washington. The conference gathered organisations from around the United States working in digital development, asking them to consider key points around the evolution of digital development in the next decade — access, inclusivity, AI, and, of course, the role of data.

This specific break-out session was about sharing best practices for using and understanding data within organisations, especially amongst programme teams and country office colleagues. It also expanded to sharing with beneficiaries, governments, and donors. We questioned whose data mattered, why we were valuing data, and how to get other people to care.

Samhir Vasdev, the advisor for Digital Development at IREX, spoke on the panel about MIT’s initiatives and their Data Culture Lab, which shares exercises to help people understand data. He talked about throwing data parties, where teams could learn and understand that what they were creating was data, too. The gatherings allow people to explore the data they produce but perhaps have not had a chance to interrogate. The real purpose is to understand what new knowledge their own data tells them, or what further questions the data challenges them to explore. “Data parties are a great way to encourage teams to explore their data and transform it into insights or questions that they can use directly in their programs.”

Understanding data can be empowering. But being shown the road forward doesn’t necessarily mean that’s the road participants can or will take. As Vasdev noted, “Exercises like this come with their own risks. In some cases, when working with data together with beneficiaries who themselves produced that information, they might begin demanding results or action from their data. You have to be prepared to manage these expectations or connect them with resources to enable meaningful action.” One can imagine the frustration if participants saw their data pointing to the need for a new clinic, yet a clinic never got built.

Big Data, Bias, and M&E

Opening the MERL (Monitoring, Evaluation, Research, and Learning) Tech Conference in London, Andre Clark, Effectiveness and Learning Adviser at Bond, spoke in his keynote about the increasing importance of data in development. Many of the voices in the room resonated with the trends and concerns I’ve observed over the last month. Is data the answer? How is it the answer?

“The tool is not going to solve your problem,” one speaker said during the infamous off-the-record Fail Fest, where attendees present their failures so that everyone can learn from each other’s mistakes. The speaker shared the example of a new reporting initiative that hadn’t panned out as expected. She noted that “we initially thought tech would help us work faster and more efficiently, but now we are clearly seeing the importance of quality data over timely data”. Although digital data may be better and faster, that does not mean it’s solving the original problem.

In using data to evaluate problems, we have to make sure we are under no illusion that we are actually dealing with the core issues at hand. For example, during my talk on Social Network Analysis we discussed both the opportunities and challenges of using this quantitative method in M&E. The conference consistently emphasized the importance of slower, deeper processes as opposed to the faster, shorter ones driven by technology.

This holds true for how data is used in M&E practice. For example, I attended a heated debate on the role of “big data” in M&E and whether the convergence was inevitable. As one speaker put it, “if you close your eyes and forget the issue at hand is big data, you could feel like it was about any other tool used in M&E”. The problems around data collection, bias, inaccessibility, language, and tools exist in M&E regardless of whether the data are big or small.

Other core issues raised were power dynamics, inclusivity, and the fact that technology is made by people and therefore is not neutral. As Anahi Ayala Iacucci, Senior Director of Humanitarian Programs at Internews, put it explicitly: “we are biased, and so we are building biased tools.” In her presentation, she talked about how technology mediates and alters human relationships. If we take the slower and deeper approach, we will be better able to explore our biases and understand the value and complications of data.

“Evaluators don’t understand data, and then managers and the public don’t understand evaluation talk,” said Maliha Khan of Daira, bringing it back to my original concerns about translation and bridging gaps in the space. Many of the sessions sought to address this problem; a nice example was Cooper Smith’s Kuunika project in Malawi, which used local visual illustrations to accompany their survey questions on tablets. Another speaker pushed for us to move into the measurement space, as opposed to monitoring, which has the potential to be a page we can all agree on.

As someone who feels responsible not only for communicating our work externally, but also for sharing knowledge amongst our programmes internally, where did all this leave me? I think I’ll take my direction from Anna Maria Petruccelli, Data Analyst at Comic Relief, who suggested that rather than committing to being data-driven, organisations could commit to being data-informed.

To go even further with this advice, at Praekelt we make a distinction between being data-driven and being evidence-driven, where the latter acknowledges the need to attend to research design and to emphasize quality, not just quantity. Evidence encompasses the use of data but includes the idea that not all data are equal, and that when interpreting data we attend to both the source of the data and the research design.

I feel confident that turning our data into knowledge – and being aware of how bias informs the way we do so – can be the first step forward on a unified journey. I also think this new path forward will leverage the power of storytelling to make data accessible and organisations better informed. It’s a road less traveled, yes, but hopefully that will make all the difference.

If you are interested in joining this conversation, we encourage you to submit to the first ever MERL Tech Jozi. Abstracts due March 31st.

MERL Tech London: What’s Your Organisation’s Take on Data Literacy, Privacy and Ethics?

This post first appeared here on March 26, 2018.

ICTs and data are increasingly being used for monitoring, evaluation, research and learning (MERL). MERL Tech London was an open space for practitioners, techies, researchers and decision makers to discuss their good and not so good experiences. This blogpost is a reflection of the debates that took place during the conference.

Is data literacy still a thing?

Data literacy is “the ability to consume for knowledge, produce coherently and think critically about data.” The perception of data literacy varies depending on the stakeholder’s needs. For an M&E team, being data literate means possessing statistical skills, including the ability to collect and combine large data sets. A programme team requires a different kind of data literacy: the competence to carefully interpret processed data (or information) and communicate meaningful stories that reach the target audiences.

Data literacy is – and will remain – a priority in development. The current debate is no longer about whether an organisation should use data. It is about how well the organisation can use data to achieve its objectives. Yet organisations’ efforts are often concentrated in just one part of the information value chain: data collection. Data collection in itself is not the end goal. Data has to be processed into information and knowledge to inform decisions and actions.

This doesn’t necessarily imply that decision making is based purely on data, nor that data can replace the role of decision makers. Quite the opposite: data-informed decision making strikes a balance between expertise and information. It also takes data limitations into account. Nevertheless, one can’t become a data-informed organisation without being data literate.

What’s your organisation’s data strategy?

The journey towards becoming a data-informed organisation can take some time. Poor data quality, duplication of efforts and underinvestment are classic obstacles requiring a systematic solution (see Tweet below). Commitment from the senior management team has to be secured. A data team has to be established. Staff members need access to relevant data platforms and training. Most importantly, the organisation has to embrace a cultural change towards valuing evidence and acting on both positive and negative findings.

Marten Schoonman (@mato74): “Responsible data handling workgroup: mindmapping the relevant subjects” @MERLTech

Organisations seek to balance (data) demands and priorities. Some invest hundreds of thousands of dollars to set up a data team that articulates the organisation’s needs and priorities and mobilises technical support. A 3-5 year strategic plan is created to coordinate efforts between country offices.

Others take a more modest approach. They recruit a few data scientists to support MERL activities, particularly the analysis of large amounts of project data. The data scientist role then evolves as the project grows. In both cases, leadership is the key driver for shifting the culture towards becoming a data-informed organisation.

Should an organisation use certain data because it can?

An organisation working with data usually faces challenges around privacy, legality, ethics and grey areas such as bias and power dynamics between data collectors and their target groups. The use of biometric data in humanitarian settings is an example where all these tensions collide. Biometric data – e.g. fingerprints, iris scans, facial recognition – is powerful, yet invasive. While proven beneficial, biometric data is vulnerable to data breaches and misuse, e.g. profiling and tracking. The practice raises critical questions: does the target group, e.g. refugees, have the option to refuse to hand over their sensitive personal data? If so, will they still be entitled to receive aid assistance? To what extent is the target group aware of how their sensitive personal data will be used and shared, including in unforeseen circumstances?

People’s privacy, safety and security are the main priorities in any data work. The organisation should uphold the highest standards and set an example. In countries where regulatory frameworks are lagging behind data and technology, organisations shouldn’t abuse their power. When the risk of using certain data outweighs the benefits, or when in doubt, the organisation should pause and ask itself the necessary questions from the perspective of its target groups. Oxfam, which dismissed – following two years of internal discussions and intensive research – the idea of using biometric data in any of its projects, should be seen as a positive example.

To conclude, the benefits of data can only be realised when an organisation enjoys visionary leadership, has sufficient capacity and upholds its principles. No doubt, this is easier said than done; it requires time and patience. All these efforts, however, are necessary for a high-achieving organisation.


Save the date for MERL Tech Jozi, coming up on Aug 1-2! Session ideas are due this Friday (March 31st).

Please Submit Session Ideas for MERL Tech Jozi

We’re thrilled to announce that we’re organizing MERL Tech Jozi for August 2018!

Please submit your session ideas or reserve your demo table now, to explore what’s happening with innovation, digital data, and new technologies across the monitoring, evaluation, research, and learning (MERL) fields.

MERL Tech Jozi will be in Johannesburg, South Africa, August 1-2, 2018!

At MERL Tech Jozi, we’ll build on earlier MERL Tech conferences in DC and London, engaging 100 practitioners from across the development and technology ecosystems for a two-day conference seeking to turn theories of MERL technology into effective practices that deliver real insight and learning in our sector.

MERL Tech is a lively, interactive, community-driven conference.  We’re actively seeking a diverse set of practitioners in monitoring, evaluation, research, learning, program implementation, management, data science, and technology to lead every session.

Submit your session ideas now.

We’re looking for sessions that focus on:

  • Discussions around good practice and evidence-based review
  • Innovative MERL approaches that incorporate technology
  • Future-focused thought provoking ideas and examples
  • Conversations about ethics, inclusion, and responsible policy and practice in MERL Tech
  • Exploration of complex MERL Tech challenges and emerging good practice
  • Workshop sessions with practical, hands-on exercises and approaches
  • Lightning Talks to showcase new ideas or to share focused results and learning

Submission Deadline: Saturday, March 31, 2018.

Session submissions are reviewed and selected by our steering committee. Presenters and session leads will have priority access to MERL Tech tickets. We will notify you in late April whether your session idea was selected, and if selected, you will be asked to submit the final session title, summary and detailed session outline by June 1st, 2018.

If you’d prefer to showcase your technology tool or platform to MERL Tech participants, you can reserve your demo table here.

MERL Tech is dedicated to creating a safe, inclusive, welcoming and harassment-free experience for everyone through our Code of Conduct.

MERL Tech Jozi is organized by Kurante and supported by the following sponsors. Contact Linda Raftree if you’d like to be a sponsor of MERL Tech Jozi too.

MERL Tech London 2018 Agenda is out!

We’ve been working hard over the past several weeks to finish up the agenda for MERL Tech London 2018, and it’s now ready!

We’ve got workshops, panels, discussions, case studies, lightning talks, demos, community building, socializing, and an evening reception with a Fail Fest!

Topics range from mobile data collection, to organizational capacity, to learning and good practice for information systems, to data science approaches, to qualitative methods using mobile ethnography and video, to biometrics and blockchain, to data ethics and privacy and more.

You can search the agenda to find the topics, themes and tools that are most interesting, identify sessions that are most relevant to your organization’s size and approach, pick the session methodologies that you prefer (some of us like participatory and some of us like listening), and learn more about the different speakers and facilitators and their work.

Tickets are going fast, so be sure to snap yours up before it’s too late! (Register here!)

View the MERL Tech London schedule & directory.