All posts by Guest Post

Improve Data Literacy at All Levels within Your Humanitarian Programme

This post is by Janna Rous at Humanitarian Data. The original was published here on April 29, 2018

Imagine this picture of data literacy at all levels of a programme:

You’ve got a “donor visit” to your programme. The country director and a project officer accompany the donor on a field trip, and they all visit a household within one of the project communities.  Sitting around a cup of tea, they start a discussion about data.  In this discussion, the household members explain what data had been collected and why. The country director explains what surprised him or her in the data.  And the donor discusses how they made the decision to fund the programme based on the data.  What if no one was surprised by the discussion, or by how the data was used, because they’d ALL seen and understood the data process?

Data literacy can mean lots of different things depending on who you are.  It could mean knowing how to:

  • collect, analyze, and use data;
  • make sense of data and use it for management;
  • validate data and be critical of it;
  • tell good data from bad and know how credible it is;
  • ensure everyone is confident talking about data.

IS “IMPROVING DATA LITERACY FOR ALL LEVELS” A TOP PRIORITY FOR THE HUMANITARIAN SECTOR?

“YES” data literacy is a priority!  Poor data literacy is still a huge stumbling block for many people in the sector and needs to be improved at ALL levels – from community households to field workers to senior management to donors.  However, there are a few challenges in how this priority is worded.

IS “LITERACY” THE RIGHT WORD?

Suggesting someone is “illiterate” when it comes to data – that doesn’t sit well with most people.  Many people in the sector – from senior HQ staff right down to the beneficiaries of a humanitarian programme – are well-educated and successful. Not only are they literate, but most speak two or more languages!  So to insinuate “illiteracy” doesn’t feel right.

Illiteracy is insulting…

Many of these same people are not super-comfortable with “data”,  but to ask them if they “struggle” with data, or to suggest they “don’t understand” by claiming they are “data illiterate” is insulting (even if you think it’s true!).

Leadership is enticing…

The language you use is extremely important here.  Instead of “literacy”, should you be talking about “leadership”?  What if you framed it as:  Improving data leadership.  Could you harness the desirability of that skill – leadership – so that workshop and training titles played into people’s egos, instead of attacking their egos?

WHAT CAN YOU DO TO IMPROVE DATA LITERACY (LEADERSHIP) WITHIN YOUR OWN ORGANIZATION?

You might be directly involved with helping to improve data literacy within your own organization.  Here are a few ideas on how to improve general data literacy/leadership:

  • Training and courses around data literacy.

While courses that focus on data analysis using programming languages such as R or Python exist, it may be better to focus skills development on more widely used software (such as Excel), which is more sustainable. Due to the high turnover of staff in the sector, complex data analysis cannot normally be sustained once an advanced analyst leaves the field.

  • Donor funding to promote data use and the use of technology.

While the sector should not only rely on donors for pushing the agenda of data literacy forward, money is powerful.  If NGOs and agencies are required to show data literacy in order to receive funding, this will drive a paradigm shift in becoming more data-driven as a sector.  There are still big questions on how to fund interoperable tech systems in the sector to maximize the value of that funding in collaboration between multiple agencies.  However, donors who can provide structures and settings for collaboration will be able to promote data literacy across the sector.

  • Capitalize on “trendy” knowledge – what do people want to know about because it makes them look intelligent?

In 2015/16, everyone wanted to know “how to collect digital data”.  A couple of years later, most people had shifted – they wanted to know “how to analyze data” and “make a dashboard”.  Now in 2018, GDPR, “Responsible Data” and “Blockchain” are trending – people want to know about them so they can talk about them.  While “trends” aren’t all we should be focusing on, they can often be the hook that gets people at all levels of our sector interested in taking their first steps forward in data literacy.

DATA LITERACY MEANS SOMETHING DIFFERENT FOR EACH PERSON

Data literacy means something completely different depending on who you are, your perspective within a programme, and what you use data for.

To the beneficiary of a programme…

data literacy might just mean understanding why data is being collected and what it is being used for.  It means having the knowledge and power to give and withhold consent appropriately.

To a project manager…

data literacy might mean understanding indicator targets, progress, and the calculations behind those numbers, in addition to how different datasets relate to one another in a complex setting.  Managers need to understand how data is coming together so that they can ask intelligent questions about their programme dashboards.

To an M&E officer…

data literacy might mean an understanding of statistical methods, random selection methodologies, how significant a result may be, and how to interpret results of indicator calculations.  They may need to understand uncertainty within their data and be able to explain this easily to others.

To the Information Management team…

data literacy might mean understanding how to translate programme calculations into computer code.  They may need to create data collection or data analysis or data visualization tools with an easy-to-understand user-interface.  They may ultimately be relied upon to ensure the correctness of the final “number” or the final “product”.

To the data scientist…

data literacy might mean understanding some very complex statistical calculations, using computer languages and statistical packages to find trends, insights, and predictive capabilities within datasets.

To the management team…

data literacy might mean being able to use data results (graphs, charts, dashboards) to explain needs, results, and impact in order to convince and persuade. It might mean using data in proposals to give a good basis for why a programme should exist, using data to explain progress to the board of directors, or even using data as a basis for why a new programme should start up… or close down.

To the donor…

data literacy might mean an understanding of a “good” needs assessment vs. a “poor” one when evaluating a project proposal, how to prioritize areas and amounts of funding, how to ask tough questions of an individual partner, how to be suspicious of numbers that may be too good to be true, how to weigh quality against quantity, or how to see areas of collaboration between multiple partners.  Donors need to use data to communicate international priorities to their own wider government, board, or citizens.

Use more precise wording

Data literacy means something different to everyone.  So this priority can be interpreted in many different ways depending on who you are.  Within your organization, frame this priority with a more precise wording.  Here are some examples:

  • Improve everyone’s ability to raise important questions based on data.
  • Let’s get better at discussing our data results.
  • Improve our leadership in communicating the meaning behind data.
  • Develop our skills in analyzing and using data to create an impact.
  • Improve our use of data to inform our decisions.

This blog article was based on a recent session at MERL Tech UK 2018.  Thanks to the many voices who contributed ideas.  I’ve put my own spin on them to create this article – so if you disagree, the ideas are mine.  And if you agree – kudos to the brilliant people at the conference!

****

Register now for MERL Tech Jozi, August 1-2 or MERL Tech DC, September 6-7, 2018 if you’d like to join the discussions in person!

 

Reinventing the flat tire… don’t build what is already built!

by Ricardo Santana, MERL Practitioner

One typical factor that delays many projects in international development is the design and creation from scratch of hardware and software to provide a certain feature or accomplish a task. And, while it is true that in some cases a specific design is required, in most cases the outputs can be achieved through solutions already available in the market.

Why is this important? Because we witness over and over again how budgets are wasted on mismanaged projects and programs, delaying solutions, generating skepticism among funders, beneficiaries and other stakeholders, and finally delivering poor results. It is sad to realize that some of these issues could have been avoided simply by using solutions and products that are already available, proven, and reasonably priced.

Then, what do we do? It is hard to find solutions aimed at international development just by browsing the Internet. During MERL Tech London 2018, the NGO Engineering for Change presented their Solutions Library. (Disclaimer: I have contributed to the library by analysing products, software and tools in different application spaces.) In this database it is possible to explore and consult many available solutions that may help tackle a specific challenge or need in order to deliver a good result.

It doesn’t mean that this is the only place to rely on for everything, or that projects absolutely need to adapt their processes to what is available. But as a professional responsible for evaluating and optimizing projects and programs in government and international development, I know that it is always a good place to consult on different technologies designed to help accelerate the overcoming of social inequalities, increase access to services, or automate and simplify monitoring, evaluation, research and learning processes.

Through my collaboration with this platform I came to know many different solutions for performing and effectively managing MERL processes. Some of these include Magpi, Ushahidi, Epicollect5, RapidPro, mWater, SurveyCTO and VOTO Mobile. Some of these are proprietary and some are open source. Some are for managing disaster scenarios, others for running polls, for health, or for other services. What is impressive is the variety of solutions.

This was a sweet and sour discovery for me. Like many other professionals, I have wasted significant resources and time developing software that already existed in robust, previously tested forms that were in many cases more cost-effective and faster. However, knowledge is power: many solutions are now on my radar, and I have developed a clear sense of the need to explore before implementing.

And that is my humble advice to anyone responsible for deploying a Monitoring, Evaluation, Research and Learning process within their projects. Before you start working like crazy, as we all do out of strong commitment to our responsibilities, take some time to carry out proper research on what platforms and software are already available in the market that may suit your needs, and evaluate whether any of them are feasible or useful before rebuilding every single thing from scratch. That will certainly improve your effectiveness and optimize your delivery cost and time.

As Mariela said in her MERL Tech Lightning Talk: Don’t reinvent the flat tire! You can submit ideas for the Solutions Library or participate as a solutions reviewer too. You can also find more information on the library and how solutions are vetted here at the Library website.

Register now for MERL Tech Jozi, August 1-2 or MERL Tech DC, September 6-7, 2018 if you’d like to join the discussions in person!

Big data or big hype: a MERL Tech debate

by Shawna Hoffman, Specialist, Measurement, Evaluation and Organizational Performance at the Rockefeller Foundation.

Both the volume of data available at our fingertips and the speed with which it can be accessed and processed have increased exponentially over the past decade.  The potential applications of this to support monitoring and evaluation (M&E) of complex development programs have generated great excitement.  But is all the enthusiasm warranted?  Will big data integrate with evaluation — or is this all just hype?

A recent debate that I chaired at MERL Tech London explored these very questions. Alongside two skilled debaters (who also happen to be seasoned evaluators!) – Michael Bamberger and Rick Davies – we sought to unpack whether integration of big data and evaluation is beneficial – or even possible.

Before we began, we used Mentimeter to see where the audience  stood on the topic:

Once the votes were in, we started.

Both Michael and Rick have fairly balanced and pragmatic viewpoints; however, for the sake of a good debate, and to help unearth the nuances and complexity surrounding the topic, they embraced the challenge of representing divergent and polarized perspectives – with Michael arguing in favor of integration, and Rick arguing against.

“Evaluation is in a state of crisis,” Michael argued, “but help is on the way.” Arguments in favor of the integration of big data and evaluation centered on a few key ideas:

  • There are strong use cases for integration. Data science tools and techniques can complement conventional evaluation methodology, providing cheap, quick, complexity-sensitive, longitudinal, and easily analyzable data.
  • Integration is possible. Incentives for cross-collaboration are strong, and barriers to working together are reducing. Traditionally these fields have been siloed, and their relationship has been characterized by a mutual lack of understanding of the other (or even questioning of the other’s motivations or professional rigor).  However, data scientists are increasingly recognizing the benefits of mixed methods, and evaluators are seeing the potential to use big data to increase the number of types of evaluation that can be conducted within real-world budget, time and data constraints. There are some compelling examples (explored in this UN Global Pulse Report) of where integration has been successful.
  • Integration is the right thing to do.  New approaches that leverage the strengths of data science and evaluation are potentially powerful instruments for giving voice to vulnerable groups and promoting participatory development and social justice.   Without big data, evaluation could miss opportunities to reach the most rural and remote people.  Without evaluation (which emphasizes transparency of arguments and evidence), big data algorithms can be opaque “black boxes.”

While this may paint a hopeful picture, Rick cautioned the audience to temper its enthusiasm. He warned of the risk of domination of evaluation by data science discourse, and surfaced some significant practical, technical, and ethical considerations that would make integration challenging.

First, big data are often non-representative, and the algorithms underpinning them are non-transparent. Second, “the mechanistic approaches offered by data science, are antithetical to the very notion of evaluation being about people’s values and necessarily involving their participation and consent,” he argued. It is – and will always be – critical to pay attention to the human element that evaluation brings to bear. Finally, big data are helpful for pattern recognition, but the ability to identify a pattern should not be confused with true explanation or understanding (correlation ≠ causation). Overall, there are many problems that integration would not solve for, and some that it could create or exacerbate.

The debate confirmed that this question is complex, nuanced, and multi-faceted. It helped remind us that there is cause for enthusiasm and optimism, alongside a healthy dose of skepticism. What was made very clear is that the future should leverage the respective strengths of these two fields in order to maximize good and minimize potential risks.

In the end, the side in favor of integration of big data and evaluation won the debate by a considerable margin.

The future of integration looks promising, but it’ll be interesting to see how this conversation unfolds as the number of examples of integration continues to grow.

Interested in learning more and exploring this further? Stay tuned for a follow-up post from Michael and Rick. You can also attend MERL Tech DC in September 2018 if you’d like to join in the discussions in person!

Blockchain: the ultimate solution?

by Ricardo Santana, MERL Practitioner

I had the opportunity during MERL Tech London 2018 to attend a very interesting session discussing blockchains and how they can be applied in the MERL space. This session was led by Valentine Gandhi, Founder of The Development CAFÉ, Zara Rahman, Research and Team Lead at The Engine Room, and Wayan Vota, Co-founder of Kurante.

The first part of the session was an introduction to blockchain, which is basically a distributed ledger system. Why is it an interesting solution? Because the geographically distributed traces left on multiple devices make for a very robust and secure system. It is not possible to take a unilateral decision to scrap or eliminate data, because doing so would be reflected in the distributed constitution of the data chain. Is it possible to corrupt the system? Well, yes, but what makes it robust and secure is that, for that to happen, every single person participating in the blockchain system must agree to do so.

That is the powerful innovation of the technology. It is somewhat similar to the torrent technology used to share files: it is very hard to control the content when the file storage is not on a single server but rather on an enormous number of end-user terminals.

What I want to share from this session, however, is not how the technology works! That information is readily available on the Internet and other sources.

What I really found interesting was the part of the session where professionals interested in blockchain shared their doubts and the questions that would need to be clarified in order to decide whether blockchain technology is required or not.

Some of the most interesting shared doubts and concerns around this technology were:

What sources of training and other useful resources are available if you want to implement blockchain?

  • Say the organization or leadership team decides that a blockchain is required for the solution. I am pretty sure it is not hard to find information about blockchain on the Internet, but we all face the same problem — the enormous amount of information available makes it tricky to reach the holy grail that provides just enough information without losing hours to desktop research. It would be incredibly beneficial to have a suggested place where this information can be found, even more so if it were a specialized guide aimed at the MERL space.

What are the data space constraints?

  • I found this question very important. It is a key aspect of the design and scalability of the solution. I assume that it will not be a significant amount of data, but I really don’t know. And maybe it is not a significant amount of information for a desktop or a laptop, but what if we are using cell phones as end terminals too? This needs to be addressed so the design is based on facts and not assumptions.

Use cases.

  • Again, there are probably a lot of them to be found all over the Internet, but they are hardly going to be insightful for a specific MERL approach. Is it possible to have a repository of relevant cases for the MERL space?

When is blockchain really required?

  • It would be really helpful to have a simple guide that helps any professional clarify whether the volume or importance of the information is worth the implementation of a Blockchain system or not.

Is there a right to be forgotten in Blockchain?

  • Recent events give special relevance to this question. Blockchains are very powerful for achieving traceability, but what if I want my information to be eliminated because that is simply my right? This is an important aspect of technologies that have a distributed logic. How can we use the powerful advantages of blockchain while preserving each individual’s right to take unilateral decisions about their private or personal information?

I am not an expert in the matter, but I do recognize the importance of these questions, and my hope is that people who are able to address them will pick them up and provide useful answers and guidance to clarify some or all of them.

If you have answers to these questions, or more questions about blockchain and MERL, please add them in the comments!

If you’d like to be a part of discussions like this one, register to attend the next MERL Tech conference! MERL Tech Jozi is happening August 1-2, 2018 and we just opened up registration today! MERL Tech DC is coming up September 6-7. Today’s the last day to submit your session ideas, so hurry up and fill out the form if you have an idea to present or share!

 

 

Takeaways from MERL Tech London

Written by Vera Solutions and originally published here on 16th April 2018.

In March, Zak Kaufman and Aditi Patel attended the second annual MERL Tech London conference to connect with leading thinkers and innovators in the technology for monitoring and evaluation space. In addition to running an Amp Impact demo session, we joined forces with Joanne Trotter of the Aga Khan Foundation as well as Eric Barela and Brian Komar from Salesforce.org to share lessons learned in using Salesforce as a MERL Tech solution. The panel included representatives from Pencils of Promise, the International Youth Foundation, and Economic Change, and was an inspiring showcase of different approaches to and successes with using Salesforce for M&E.

The event packed two days of introspection, demo sessions, debates, and sharing of how technology can drive more efficient program monitoring, stronger evaluation, and a more data-driven social sector. The first day concluded with a (hilarious!) Fail Fest–an open and honest session focused on sharing mistakes in order to learn from them.

At MERL Tech London in 2017, participants identified seven priority areas that the MERL Tech community should focus on:

  1. Responsible data policy and practice
  2. Improving data literacy
  3. Interoperability of data and systems
  4. User-driven, accessible technologies
  5. Participatory MERL/user-centered design
  6. Lean MERL/User-focused MERL
  7. Overcoming “extractive” data approaches

These priorities were revisited this year, and it seemed to us that almost all revolve around a recurrent theme of the two days: focusing on the end user of any MERL technology. The term “end user” was not itself without controversy–after all, most of our MERL tech tools involve more than one kind of user.

When trying to dive into the fourth, fifth, and sixth priorities, we often came back to the issue of who is the proverbial “user” for whom we should be optimizing our technologies. One participant mentioned that regardless of who it is, the key is to maintain a lens of “Do No Harm” when attempting to build user-centered tools.

The discussion around the first and seventh priorities naturally veered into a discussion of the General Data Protection Regulation (GDPR), and how we can do better as a sector by using it as a guideline for data protection beyond Europe.

A heated session with Oxfam, Simprints, and the Engine Room dove into the pros, cons, and considerations of biometrics in international development. The overall sense was that biometrics can offer tremendous value for issues like fraud prevention and healthcare, but they also heighten the sector’s challenges and risks around data protection. This is clearly a topic where much movement can be expected in the coming years.

In addition to meeting dozens of NGOs, we connected with numerous tech providers working in the space, including SimPrints, SurveyCTO, Dharma, Social Cops, and DevResults. We’re always energized to learn about others’ tools and to explore integration and collaboration opportunities.

We wrapped up the conference at a happy hour event co-hosted by ICT4D London and Salesforce.org, with three speakers focused on ‘ICT as a catalyst for gender equality’. A highlight from the evening was a passionate talk by Seyi Akiwowo, Founder of Glitch UK, a young organization working to reduce online violence against women and girls. Seyi shared her experience as a victim of online violence and how Glitch is turning the tables to fight back.

We’re looking forward to the first MERL Tech Johannesburg taking place August 1-2, 2018.

 

Evaluating ICT4D projects against the Digital Principles

By Laura Walker McDonald. This post was originally published on the Digital Impact Alliance’s Blog on March 29, 2018.

As I have written about elsewhere, we need more evidence of what works and what doesn’t in the ICT4D and tech for social change spaces – and we need to hold ourselves to account more thoroughly and share what we know so that all of our work improves. We should be examining how well a particular channel, tool or platform works in a given scenario or domain; how it contributes to development goals in combination with other channels and tools; how the team selected and deployed it; whether it is a better choice than not using technology or using a different sort of technology; and whether or not it is sustainable.

At SIMLab, we developed our Framework for Monitoring and Evaluation of Technology in Social Change projects to help implementers to better measure the impact of their work. It offers resources towards a minimum standard of best practice which implementers can use or work toward, including on how to design and conduct evaluations. With the support of the Digital Impact Alliance (DIAL), the resource is now finalized and we have added new evaluation criteria based on the Principles for Digital Development.

Last week at MERL Tech London, DIAL was able to formally launch this product by sharing a 2-page summary available at the event and engaging attendees in a conversation about how it could be used. At the event, we joined over 100 organizations to discuss Monitoring, Evaluation, Research and Learning related to technology used for social good.

Why evaluate?

Evaluations provide snapshots of the ongoing activity and the progress of a project at a specific point in time, based on systematic and objective review against certain criteria. They may inform future funding and program design, adjust current program design, or gather evidence to establish whether a particular approach is useful. They can be used to examine how, and how far, technology contributes to wider programmatic goals. If set up well, your program should already have evaluation criteria and research questions defined, well before it’s time to commission the evaluation.

Evaluation criteria provide a useful frame for an evaluation, bringing in an external logic that might go beyond the questions that implementers and their management have about the project (such as ‘did our partnerships on the ground work effectively?’ or ‘how did this specific event in the host country affect operations?’) to incorporate policy and best practice questions about, for example, protection of target populations, risk management, and sustainability. The criteria for an evaluation could be any set of questions that draw on an organization’s mission, values, principles for action; industry standards or other best practice guidance; or other thoughtful ideas of what ‘good’ looks like for that project or organization. Efforts like the Principles for Digital Development can set useful standards for good practice, and could be used as evaluation criteria.

Evaluating our work, and sharing learning, is radical – and critically important

While the potential for technology to improve the lives of vulnerable people around the world is clear, it is also evident that these improvements are not keeping pace with the advances in the sector. Understanding why requires looking critically at our work and holding ourselves to account. There is still insufficient evidence of the contribution technology makes to social change work. What evidence there is often is not shared or the analysis doesn’t get to the core issues. Even more important, the learnings from what has not worked and why have not been documented and absorbed.

Technology-enabled interventions succeed or fail based on their sustainability, business models, data practices, choice of communications channel and technology platform; organizational change, risk models, and user support – among many other factors. We need to build and examine evidence that considers these issues and that tells us what has been successful, what has failed, and why. Holding ourselves to account against standards like the Principles is a great way to improve our practice, and honor our commitment to the people we seek to help through our work.

Using the Digital Principles as evaluation criteria

The Principles for Digital Development are a set of living guidance intended to help practitioners succeed in applying technology to development programs. They were developed, based on some pre-existing frameworks, by a working group of practitioners and are now hosted by the Digital Impact Alliance.

These nine principles could also form a useful set of evaluation criteria, not unlike the OECD evaluation criteria or the Sphere standards. The Principles overlap, so data can be used to examine more than one criterion, and not every evaluation would need to consider all of the Digital Principles.

Below are some examples of Digital Principles and sample questions that could initiate, or contribute to, an evaluation.

Design with the User: Great projects are designed with input from the stakeholders and users who are central to the intended change. How far did the team design the project with its users, based on their current tools, workflows, needs and habits, and work from clear theories of change and adaptive processes?

Understand the Existing Ecosystem: Great projects and programs are built, managed, and owned with consideration given to the local ecosystem. How far did the project work to understand the local, technology and broader global ecosystem in which the project is situated? Did it build on existing projects and platforms rather than duplicating effort? Did the project work sensitively within its ecosystem, being conscious of its potential influence and sharing information and learning?

Build for Sustainability: Great projects factor in the physical, human, and financial resources that will be necessary for long-term sustainability. How far did the project: 1) think through the business model, ensuring that the value for money and incentives are in place not only during the funded period but afterwards, and 2) ensure that long-term financial investments in critical elements like system maintenance and support, capacity building, and monitoring and evaluation are in place? Did the team consider whether there was an appropriate local partner to work through, hand over to, or support the development of, such as a local business or government department?

Be Data Driven: Great projects fully leverage data, where appropriate, to support project planning and decision-making. How far did the project use real-time data to make decisions, use open data standards wherever possible, and collect and use data responsibly according to international norms and standards?

Use Open Standards, Open Data, Open Source, and Open Innovation: Great projects make appropriate choices, based on the circumstances and the sensitivity of their project and its data, about how far to use open standards, open the project’s data, use open source tools and share new innovations openly. How far did the project: 1) take an informed and thoughtful approach to openness, thinking it through in the context of the theory of change and considering risk and reward, 2) communicate about what being open means for the project, and 3) use and manage data responsibly according to international norms and standards?

For a more complete set of guidance, see the complete Framework for Monitoring and Evaluating Technology, and the more nuanced and in-depth guidance on the Principles, available on the Digital Principles website.

Technologies in monitoring and evaluation | 5 takeaways

Bloggers: Martijn Marijnis and Leonard Zijlstra. This post originally appeared on the ICCO blog on April 3, 2018.

On March 19 and 20, ICCO participated in MERL Tech London 2018. The conference explores the possibilities of technology in monitoring, evaluation, learning and research in development. About 200 like-minded participants from various countries took part. Key issues on the agenda were data privacy, data literacy within and beyond your organization, human-centred monitoring design and user-driven technologies. Interesting practices were shared, among others in the use of blockchain technologies and machine learning. Here are our most important takeaways:

1)  In many NGOs data gathering still takes place in silos

Oxfam UK shared some valuable insights and practical tips on putting in place an infrastructure that combines data: start small and test, e.g. by building up a strong country use case; discuss with and learn from others; ensure privacy by design; and make sure senior leadership is involved. ICCO Cooperation currently faces a similar challenge, in particular in combining our household data with our global result indicators.

2)  Machine learning has potential for NGOs

While ICCO recently started to test machine learning in the food security field (see this blog), other organisations showcased interesting examples. The Wellcome Trust shared a case where they tried to answer the following question: is the organization informing and influencing policy and, if so, how? Wellcome teamed up its data lab with its insight & analysis team and started to use open APIs to pull data, in combination with natural language processing, to identify relevant cases of research supported by the organization. With their 8,000 publications a year, this would be a daunting task for a human team. First, publications linked to Wellcome funding were extracted from a European database (EPMC) in combination with end-of-grant reports. Then WHO’s reference sections were scraped to see whether and to what extent WHO policy was influenced, and to identify potentially interesting cases for Wellcome’s policy team.
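The post does not describe Wellcome’s actual code, but a minimal sketch of this kind of pipeline might look like the following. It assumes the public Europe PMC (EPMC) REST search endpoint and uses a crude keyword match as a stand-in for the natural language processing the Wellcome team actually applied; the query string and keyword list are purely illustrative.

```python
# Rough sketch (not Wellcome's pipeline): pull publication records from the
# public Europe PMC REST search API, then flag candidate policy-relevant items
# with a simple keyword match standing in for real NLP.
import requests

EPMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def fetch_publications(query, page_size=100):
    """Return publication records matching a free-text Europe PMC query."""
    params = {"query": query, "format": "json", "pageSize": page_size}
    response = requests.get(EPMC_SEARCH, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("resultList", {}).get("result", [])

def looks_policy_relevant(record, keywords=("policy", "guideline", "WHO")):
    """Crude stand-in for NLP: keyword match against the publication title."""
    title = (record.get("title") or "").lower()
    return any(keyword.lower() in title for keyword in keywords)

if __name__ == "__main__":
    # Free-text query used purely for illustration; the real funder filter and
    # end-of-grant report matching are not described in the post.
    publications = fetch_publications('"Wellcome Trust"', page_size=50)
    candidates = [p for p in publications if looks_policy_relevant(p)]
    print(f"{len(candidates)} of {len(publications)} publications look policy-relevant")
```

In practice the interesting work is in the language processing step, not the retrieval; the point of the sketch is simply that open APIs make the retrieval side of such a pipeline very cheap.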

3)  Use a standardized framework for digital development

See digitalprinciples.org. It gives – among other things – practical guidelines on how to use open standards and open data, how data can be reused, how privacy and security can be addressed, and how users can and should be involved in using technologies in development projects. It is a useful framework for evaluating your design.

4)  Many INGOs get nervous these days about blockchain technology

What is it: a new hype or a real game changer? For many it is just untested technology with high risks and little upside for the developing world. But for INGOs working, for example, in agricultural value chains or in humanitarian relief operations, its potential is definitely consequential enough to merit a closer look. It starts with the underlying principle that users of a so-called blockchain can transfer value, or assets, between each other without the need for a trusted intermediary. Transactions are grouped into blocks, the running history of those blocks is called the blockchain, and all transactions are recorded in a ledger that is shared by all users of a blockchain.
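To make the shared-ledger idea concrete, here is a minimal, illustrative sketch (nothing like a production blockchain network): each block commits to the hash of the previous block, so a unilateral change to an earlier transaction breaks the chain and is immediately visible to everyone holding a copy of the ledger. The transactions are invented for illustration.

```python
# Minimal illustrative hash-chained ledger (not a real blockchain network):
# each block commits to the previous block's hash, so tampering with any
# earlier record breaks the chain for everyone holding a copy.
import hashlib
import json

def make_block(transactions, prev_hash):
    """Bundle transactions with the previous block's hash and fingerprint the result."""
    body = {"transactions": transactions, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def is_valid(chain):
    """Recompute every block's hash and check each link to the previous block."""
    for i, block in enumerate(chain):
        body = {"transactions": block["transactions"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Invented example transactions (e.g. payments along an agricultural value chain).
ledger = [make_block([{"from": "buyer", "to": "farmer", "amount": 120}], prev_hash="0")]
ledger.append(make_block([{"from": "coop", "to": "farmer", "amount": 45}], ledger[-1]["hash"]))

print(is_valid(ledger))                        # True
ledger[0]["transactions"][0]["amount"] = 999   # a unilateral change to the history...
print(is_valid(ledger))                        # False: the tampering is detectable
```

In a real blockchain many independent parties hold copies of the ledger and consensus rules decide which version of the chain is authoritative; the tamper-evidence shown here is the core property the paragraph above describes.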

The upside of blockchain applications is the considerable time and money they can save. Users rely on this shared ledger to provide a transparent view into the details of the assets or values, including who owns them, as well as descriptive information such as quality or location. Smallholder farmers could benefit (e.g. real-time payment on delivery, access to credit), as could international sourcing companies (e.g. traceability of produce without certification), banks (e.g. cost reduction, risk reduction), and equally refugees and the landless (e.g. registration, identification). Although we haven’t yet seen large-scale adoption of blockchain technology in the development sector, investors like the Bill and Melinda Gates Foundation and various venture capitalists are paying attention to this space.

But one of the main downsides or challenges for blockchain, as with agricultural technology at large, is connecting the technology to viable business models and compelling use cases. With or without tested technology, this is hard enough as it is, and it requires innovation, perseverance and a focus on real value for the end-user; ICCO’s G4AW projects are gaining experience with blockchain.

5)  Start thinking about data-use incentives

Over the years, ICCO has made significant investments in monitoring & evaluation and data skills training. Yet, as in many other organizations, limited measurable results in terms of increased data use can be seen. US-based development consultant Cooper&Smith shared revealing insights into data-use incentives. Their study covered three INGOs working across five regions globally. The hypothesis was that better alignment of data-use training incentives leads to increased data use later on. They looked at both financial and non-financial rewards that motivate individuals to behave in a particular way. Incentives included different training formats (e.g. individual, blended), different hardware (e.g. desktop, laptop, mobile phone), recognition (e.g. certificate, presentation at a conference), forms of feedback & support (e.g. one-on-one, peer group) and leisure time during the training (e.g. 2 hours/week, 12 hours/week). Data use was defined as the practice of collecting, managing, analyzing and interpreting data for making program policy and management decisions.

They found considerable differences in how the attributes were valued. For instance, respondents overwhelmingly prefer a certificate in data management, but currently most receive no recognition, or recognition only from their supervisor. One region prefers a certificate while another prefers attending an international conference as a reward. Respondents prefer one-on-one feedback but instead receive only peer-to-peer support. The lesson here is that while most organizations apply a ‘one-size-fits-all’ reward system (or have no reward system at all), this study points to the need to develop a culturally sensitive and geographically smart reward system in order to see a real increase in data usage.

For many NGOs the data revolution has just begun, but we are underway!

What Are Your ICT4D Challenges? Take a DIAL Survey to Learn What Helps and Hurts Us All

By Laura Walker McDonald, founder of BetterLab.io. Originally posted on ICT Works on March 26, 2018.

DIAL ICT4D Survey

When it comes to the impact and practice of our ICT4D work, we’re long on stories and short on evidence. My previous organization, SIMLab, developed Frameworks on Context Analysis and Monitoring and Evaluation of technology projects to try to tackle the challenge at that micro level.

But we also have little aggregated data about the macro trends and challenges of our growing sector. That’s led the Digital Impact Alliance (DIAL) to conduct an entirely new kind of data-gathering exercise, and one that would add real quantitative data to what we know about what it’s like to implement projects and develop platforms.

Please help us gather new insights from more voices

Please take our survey on the reality of delivering services to vulnerable populations in emerging markets using digital tools. We’re looking for experiences from all of DIAL’s major stakeholder groups:

  • NGO leaders from the project site to the boardroom;
  • Technology experts;
  • Platform providers and mobile network operators;
  • Governments and donors.

We’re supplementing this survey with findings from in-depth interviews with 50 people from across those groups.

Please forward this survey!

We want to hear from those whose voices aren’t usually heard by global consultation and research processes. We know that the most innovative work in our space happens in projects and collaborations in the Global South – closest to the underserved communities who are our highest priority.

Please forward this survey so we can hear from those innovators, from the NGOs, government ministries, service providers and field offices who are doing the important work of delivering digitally-enabled services to communities, every day.

It’s particularly important that we hear from colleagues in government, who may be supporting digital development projects in ways far removed from the usual digital development conversation.

Why should I take and share the survey?

We’ll use the data to help measure the impact of what we do – this will be a baseline for indicators of interest to DIAL. But it will also provide an opportunity for you to help us build a unique snapshot of the challenges and opportunities you face in your work, whether in funding, designing, or delivering these services.

You’ll be answering questions we don’t believe are asked enough – about your partnerships, about how you cover your costs, and about the technical choices you’re making, specific to the work you do – whether you’re a businessperson, NGO worker, technologist, donor, or government employee.

How do I participate?

Please take the survey here. It will take 15-20 minutes to complete, and you’ll be answering questions, among others, about how you design and procure digital projects; how easy and how cost-effective they are to undertake; and what you see as key barriers. Your response can be anonymous.

To thank you for your time, if you leave us your email, we’ll share our findings with you and invite you into the conversation about the results. We’ll also be sharing our summary findings with the community.

We hope you’ll help us – and share this link with others.

Please help us get the word out about our survey, and help us gather more and better data about how our ecosystem really works.

Digital Data Collection and the Maturing of a MERL Technology

by Christopher Robert, CEO of Dobility (Survey CTO). This post was originally published on March 15, 2018, on the Survey CTO blog.

Digital data collection: stakeholders and complex relationships

Needs, markets, and innovation combine to produce technological change. This is as true in the international development sector as it is anywhere else. And within that sector, it’s as true in the broad category of MERL (monitoring and evaluation, research, and learning) technologies as it is in the narrower sub-category of digital data collection technologies. Here, I’ll consider the recent history of digital data collection technology as an example of MERL technology maturation – and as an example, more broadly, of the importance of market structure in shaping the evolution of a technology.

My basic observation is that, as digital data collection technology has matured, the same stakeholders have been involved – but the market structure has changed their relative power and influence over time. And it has been these very changes in power and influence that have changed the cost and nature of the technology itself.

First, when it comes to digital data collection in the development context, who are the stakeholders?

  • Donors. These are the primary actors who fund development work, evaluation of development policies and programs, and related research. There are mega-actors like USAID, Gates, and the UN agencies, but also many other charities, philanthropies, and public or nonprofit actors, from Catholic Charities to the U.S. Centers for Disease Control and Prevention.
  • Developers. These are the designers and software engineers involved in producing technology in the space. Some are students or university faculty, some are consultants, many work full-time for nonprofits or businesses in the space. (While some work on open-source initiatives in a voluntary capacity, that seems quite uncommon in practice. The vast majority of developers working on open-source projects in the space get paid for that work.)
  • Consultants and consulting agencies. These are the technologists and other specialists who help research and program teams use technology in the space. For example, they might help to set up servers and program digital survey instruments.
  • Researchers. These are the folks who do the more rigorous research or impact evaluations, generally applying social-science training in public health, economics, agriculture, or other related fields.
  • M&E professionals. These are the people responsible for program monitoring and evaluation. They are most often part of an implementing program team, but it’s also not uncommon to share more centralized (and specialized) M&E teams across programs or conduct outside evaluations that more fully separate some M&E activities from the implementing program team.
  • IT professionals. These are the people responsible for information technology within those organizations implementing international development programs and/or carrying out MERL activities.
  • Program beneficiaries. These are the end beneficiaries meant to be aided by international development policies and programs. The vast majority of MERL activities are ultimately concerned with learning about these beneficiaries.

Digital data collection stakeholders

These different stakeholders have different needs and preferences, and the market for digital data collection technologies has changed over time – privileging different stakeholders in different ways. Two distinct stages seem clear, and a third is coming into focus:

  1. The early days of donor-driven pilots and open source. These were the days of one-offs, building-your-own, and “pilotitis,” where donors and developers were effectively in charge and there was a costly additional layer of technical consultants between the donors/developers and the researchers and M&E professionals who had actual needs in the field. Costs were high, and some combination of donor and developer preferences reigned supreme.
  2. Intensifying competition in program-adopted professional products. Over time, professional products emerged that began to directly market to – and serve – researchers and M&E professionals. Costs fell with economies of scale, and the preferences of actual users in the field suddenly started to matter in a more direct, tangible, and meaningful way.
  3. Intensifying competition in IT-adopted professional products. Now that use of affordable, accessible, and effective data-collection technology has become ubiquitous, it’s natural for IT organizations to begin viewing it as a kind of core organizational infrastructure, to be adopted, supported, and managed by IT. This means that IT’s particular preferences and needs – like scale, standardization, integration, and compliance – start to become more central, and costs unfortunately rise.

While I still consider us to be in the glory days of the middle stage, where costs are low and end-users matter most, there are still plenty of projects and organizations living in that first stage of more costly pilots, open source projects, and one-offs. And I think that the writing’s very much on the wall when it comes to our progression toward the third stage, where IT comes to drive the space, innovation slows, and end-user needs are no longer dominant.

Full disclosure: I myself have long been a proponent of the middle phase, and I am proud that my social enterprise has been able to help graduate thousands of users from that costly first phase. So my enthusiasm for the middle phase began many years ago and in fact helped to launch Dobility.

THE EARLY DAYS OF DONOR-DRIVEN PILOTS AND OPEN SOURCE

Digital data collection stage 1 (the early days)

In the beginning, there were pioneering developers, patient donors, and program or research teams all willing to take risks and invest in a better way to collect data from the field. They took cutting-edge technologies and found ways to fit them into some of the world’s most difficult, least-cutting-edge settings.

In these early days, it mattered a lot what could excite donors enough to open their checkbooks – and what would keep them excited enough to keep the checks coming. So the vital need for large and ongoing capital injections gave donors a lot of influence over what got done.

Developers also had a lot of sway. Donors couldn’t do anything without them, and they also didn’t really know how to actively manage them. If a developer said “no, that would be too hard or expensive” or even “that wouldn’t work,” what could the donor really say or do? They could cut off funding, but that kind of leverage only worked for the big stuff, the major milestones and the primary objectives. For that stuff, donors were definitely in charge. But for the hundreds or thousands of day-to-day decisions that go into any technology solution, it was the developers effectively in charge.

Actual end-users in the field – the researchers and M&E professionals who were piloting or even trying to use these solutions – might have had some solid ideas about how to guide the technology development, but they had essentially no levers of control. In practice, the solutions being built by the developers were often so technically-complex to configure and use that there was an additional layer of consultants (technical specialists) sitting between the developers and the end-users. But even if there wasn’t, the developers’ inevitable “no, sorry, that’s not feasible,” “we can’t realistically fit that into this release,” or simple silence was typically the end of the story for users in the field. What could they do?

Unfortunately, without meaning any harm, most developers react by pushing back on whatever is contrary to their own preferences (I say this as a lifelong developer myself). Something might seem like a hassle, or architecturally unclean, and so a developer will push back, say it’s a bad idea, drag their heels, even play out the clock. In the past five years of Dobility, there have been hundreds of cases where a developer has said something to the effect of “no, that’s too hard” or “that’s a bad idea” to things that have turned out to (a) take as little as an hour to actually complete and (b) provide massive amounts of benefit to end-users. There’s absolutely no malice involved, it’s just the way most of them/us are.

This stage lasted a long time – too long, in my view! – and an entire industry of technical consultants and paid open-source contributors grew up around an approach to digital data collection that didn’t quite embrace economies of scale and never quite privileged the needs or preferences of actual users in the field. Costs were high and complaints about “pilotitis” grew louder.

INTENSIFYING COMPETITION IN PROGRAM-ADOPTED PROFESSIONAL PRODUCTS

Digital data collection stage 2 (the glory days)

But ultimately, the protagonists of the early days succeeded in establishing and honing the core technologies, and in the process they helped to reveal just how much was common across projects of different kinds, even across sectors. Some of those protagonists also had the foresight and courage to release their technologies with the kinds of permissive open-source licenses that would allow professionalization and experimentation in service and support models. A new breed of professional products directly serving research, program, and M&E teams was born – in no small part out of a single, tremendously-successful open-source project, Open Data Kit (ODK).

These products tended to be sold directly to end-users, and were increasingly intended for those end-users to be able to use themselves, without the help of technical staff or consultants. For traditionalists of the first stage, this was a kind of heresy: it was considered gauche at best and morally wrong at worst to charge money for technology, and it was seen as some combination of impossible and naive to think that end-users could effectively deploy and manage these technologies without technical assistance.

In fact, the new class of professional products were not designed to be used entirely without assistance. But they were designed to require as little assistance as possible, and the assistance came with the product instead of being provided by a separate (and separately-compensated) internal or external team.

A particularly successful breed of products came to use a “Software as a Service” (SaaS) model that streamlined both product delivery and support, ramping up economies of scale and driving down costs in the process (like SurveyCTO). When such products offered technical support free-of-charge as part of the purchase or subscription price, there was a built-in incentive to improve the product: since tech support was so costly to deliver, improving the product such that it required less support became one of the strongest incentives driving product development. Those who adopted the SaaS model not only had to earn every dollar of revenue from end-users, but they had to keep earning that revenue month in, month out, year in, year out, in order to retain business and therefore the revenue needed to pay the bills. (Read about other SaaS benefits for M&E in this recent DevResults post.)

It would be difficult to overstate the importance of these incentives to improve the product and earn revenue from end-users. They are nothing short of transformative. Particularly once there is active competition among vendors, users are squarely in charge. They control the money, their decisions make or break vendors, and so their preferences and needs are finally at the center.

Now, in addition to the “it’s heresy to charge money or think that end-users can wield this kind of technology” complaints that used to be more common, there started to be a different kind of complaint: there are too many solutions! It’s too overwhelming, how many digital data collection solutions there are now. Some go so far as to decry the duplication of effort, to claim that the free market is inefficient or failing; they suggest that donors, consultants, or experts be put back in charge of resource allocation, to re-impose some semblance of sanity to the space.

But meanwhile, we’ve experienced a kind of golden age in terms of who can afford digital data collection technology, who can wield it effectively, and in what kinds of settings. There are a dizzying number of solutions – but most of them cater to a particular type of need, or have optimized their business model in a particular sort of way. Some, like us, rely nearly 100% on subscription revenues, others fund themselves more primarily from service provision, others are trying interesting ways to cross-subsidize from bigger, richer users so that they can offer free or low-cost options to smaller, poorer ones. We’ve overcome pilotitis, economies of scale are finally kicking in, and I think that the social benefits have been tremendous.

INTENSIFYING COMPETITION IN IT-ADOPTED PROFESSIONAL PRODUCTS

Digital data collection stage 3 (the coming days)

It was the success of the first stage that laid the foundation for the second stage, and so too it has been the success of the second stage that has laid the foundation for the third: precisely because digital data collection technology has become so affordable, accessible, and ubiquitous, organizations are increasingly thinking that it should be IT departments that procure and manage that technology.

Part of the motivation is the very proliferation of options that I mentioned above. While economics and the historical success of capitalism have taught us that a marketplace thriving with competition is most often a very good thing, it’s less clear that a wide variety of options is good within any single organization. At the very least, there are very good reasons to want to standardize some software and processes, so that different people and teams can more effortlessly share knowledge and collaborate, and so that there can be some economies of scale in training, support, and compliance.

Imagine if every team used its own product and file format for writing documents, for example. It would be a total disaster! The frictions across and between teams would be enormous. And as data becomes more and more core to the operations of more organizations – the way that digital documents became core many years ago – it makes sense to want to standardize and scale data systems, to streamline integrations, just for efficiency purposes.

Growing compliance needs only up the ante. The arrival of the EU’s General Data Protection Regulation (GDPR) this year, for example, raises the stakes for EU-based (or even EU-touching) organizations considerably, imposing stiff new data privacy requirements and steep penalties for violations. Coming into compliance with GDPR and other data-security regulations will be effectively impossible if IT can’t play a more active role in the procurement, configuration, and ongoing management of data systems; and it will be impractical for IT to play such a role for a vast array of constantly-shifting technologies. After all, IT will require some degree of stability and scale.

But if IT takes over digital data collection technology, what changes? Does the golden age come to an end?

Potentially. And there are certainly very good reasons to worry.

First, changing who controls the dollars – who’s in charge of procurement – threatens to entirely up-end the current regime, where end-users are directly in charge and their needs and preferences are catered to by a growing body of vendors eager to earn their business.

It starts with the procurement process itself. When IT is in charge, procurement processes are long, intensive, and tend to result in a “winner take all” contract. After all, it makes sense that IT departments would want to take their time and choose carefully; they tend to be choosing solutions for the organization as a whole (or at least for some large class of users within the organization), and they most often intend to choose a solution, invest heavily in it, and have it work for as long as possible.

This very natural and appropriate method that IT uses to procure is radically different from the method used by research, program, and M&E teams. And it creates a radically different dynamic for vendors.

Vendors first have to buy into the idea of investing heavily in these procurement processes – which some may simply choose not to do. Then they have to ask themselves, “what do these IT folks care most about?” In order to win these procurements, they need to understand the core concerns driving the purchasing decision. As in the old saying “nobody ever got fired for choosing IBM,” safety, stability, and reputation are likely to be very important. Compliance issues are likely to matter a lot too, including the vendor’s established ability to meet new and evolving standards. Integrations with corporate systems (e.g., internal data and identity-management systems) will also count for a lot.

Does it still matter how well the vendor meets the needs of end-users within the organization? Of course. But note the very important shift in the dynamic: vendors now have to get the IT folks to “yes” and so would be quite right to prioritize meeting their particular needs. Nobody will disagree that end-users ultimately matter, but meanwhile the focus will be on the decision-makers. The vendors that meet the decision-makers’ needs will live, the others will die. That’s simply one aspect of how a free market works.

Note also the subtle change in dynamic once a vendor wins a contract: the SaaS model where vendors had to re-earn every customer’s revenue month in, month out, is largely gone now. Even if the contract is formally structured as a subscription or has lots of exit options, the IT model for technology adoption is inherently stickier. There is a lot more lock-in in practice. Solutions are adopted, they’re invested in at large scale, and nobody wants to walk away from that investment. Innovation can easily slow, and nobody wants to repeat the pain of procurement and adoption in order to switch solutions.

And speaking of the pain of the procurement process: costs have been rising. After all, the procurement process itself is extremely costly to the vendor – especially when it loses, but even when it wins. So that’s got to get priced in somewhere. And then all of the compliance requirements, all of the integrations with corporate systems, all of that stuff’s really expensive too. What had been an inexpensive, flexible, off-the-shelf product can easily become far more expensive and far less flexible as it works itself through IT and compliance processes.

What had started out on a very positive note (“let’s standardize and scale, and comply with evolving data regulations”) has turned in a decidedly dystopian direction. It’s sounding pretty bad now, and you wouldn’t be wrong to think “wait, is this why a bunch of the products I use for work are so much more frustrating than the products I use as a consumer?” or “if Microsoft had to re-earn every user’s revenue for Excel, every month, how much better would it be?”

While I don’t think there’s anything wrong with the instinct for IT to take increasing control over digital data collection technologies, I do think that there’s plenty of reason to worry. There’s considerable risk that we lose the deep user orientation that has just been picking up momentum in the space.

WHERE WE’RE HEADED: STRIKING A BALANCE

Digital data collection stage 4 (finding a balance?)

If we don’t want to lose the benefits of a deep user orientation in this particular technology space, we will need to work pretty hard – and be awfully clever – to avoid that loss. People will say “oh, but IT just needs to consult research, program, and M&E teams, include them in the process,” but that’s hogwash. Or rather, it’s woefully inadequate. The natural power of those controlling resources to bend the world to their preferences and needs is just too strong for mere consultation or inclusion to overcome.

And the thing is: what IT wants and needs is good. So the solution isn’t just “let’s not let them anywhere near this, let’s keep the end-users in charge.” No, that approach collapses under its own weight eventually, and certainly it can’t meet rising compliance requirements. It has its own weaknesses and inefficiencies.

What we need is an approach – a market structure – that allows the needs of IT and the needs of end-users both to matter to appropriate degrees.

With SurveyCTO, we’re currently in an interesting place: we’re becoming split between serving end-users and serving IT organizations. And I suppose as long as we’re split, with large parts of our revenue coming from each type of decision-maker, we remain incentivized to keep meeting everybody’s needs. But I see trouble on the horizon: the IT organizations can pay more, and more organizations are shifting in that direction… so once a large-enough proportion of our revenue starts coming from big, winner-take-all IT contracts, I fear that our incentives will be forever changed. In the language of economics, I think that we’re currently living in an unstable equilibrium. And I really want the next equilibrium to serve end-users as well as the last one!

What’s the Deal with Data — Bridging the Data Divide in Development

Written by Ambika Samarthya-Howard, Head of Communications, Praekelt.org. This post was originally published on March 26, 2018, on Medium.

Working on communications at Praekelt.org, I have had the opportunity to see first-hand the power of sharing stories in driving impact and changing attitudes. Over the past month I’ve attended several unrelated events, all touching on data, evaluation, and digital development, which have reaffirmed the importance of finding common ground to share and communicate the data we value.

Storytelling and Data

I recently presented a poster on “Storytelling for Organisational Change” at the University of London’s Behavior Change Conference. Our current evaluations at Praekelt draw on work by the center, which is a game-changer in the field. But I didn’t submit an abstract on our agile, experimental investigations: I was sharing information about how I was using films and our storytelling to create change within the organisation.

After my abstract was accepted, I realized I had to present my findings as a poster. Many practitioners (like myself) really have no idea what a poster entails. Thankfully I got advice from academics and support from design colleagues to translate my videos, photos, and storytelling deck into a visual form I could pin up. When the printers in New York told me “this is a really great poster”, I started picking up the hint that it was atypical.

Once I arrived at the poster hall at UCL, I could see why. Nearly all, if not all, of the posters in the room had charts and numbers and graphs — lots and lots of data points. My poster, on the other hand, had almost no “data”. It was colorful, showed a few engaging images and the story of our human-centered design process, and was accompanied by videos playing on my laptop alongside the booth. It was definitely a departure from the “research” around the room.

This divide between research and practice showed up many times throughout the conference. For starters, this year attendees were asked to choose a sticker label based on whether they were researchers/academics or programme staff/practitioners. Many of the sessions talked about how to bridge the divide: how to make research more accessible to practitioners, and how to carry learnings from programme creators back to academia.

Thankfully for me, the tight-knit group of practitioners found solace in and connection to my chart-less poster, and perhaps the academics felt a bit of relief at the visuals as well: we went home with one of the best poster awards at the conference.

Data Parties and Cliques

The London conference was only the beginning of my awareness of the conversations around the data divide in digital development. “Why are we even using the word data? Does anyone else value it? Does anyone else know what it means?” Anthony Waddell, Chief Innovation Officer of IBI, asked provocatively at a breakout session at USAID’s Digital Development Forum in Washington. The conference gathered organisations from around the United States working in digital development, asking them to consider key points around the evolution of digital development in the next decade — access, inclusivity, AI, and, of course, the role of data.

This specific break-out session focused on sharing best practices for using and understanding data within organisations, especially amongst programme teams and country office colleagues. It also expanded to sharing with beneficiaries, governments, and donors. We questioned whose data mattered, why we were valuing data, and how to get other people to care.

Samhir Vasdev, the advisor for Digital Development at IREX, spoke on the panel about MIT’s initiatives and their Data Culture Lab, which shared exercises to help people understand data. He talked about throwing data parties where teams could learn and understand that what they were creating was data, too. The gatherings allow people to explore the data they produce but perhaps have not had a chance to interrogate. The real purpose is to understand what new knowledge their own data tells them, or what further questions the data challenges them to explore. “Data parties are a great way to encourage teams to explore their data and transform it into insights or questions that they can use directly in their programs.”

Understanding data can be empowering. But being shown the road forward doesn’t necessarily mean that’s the road participants can or will take. As Vasdev noted, “Exercises like this come with their own risks. In some cases, when working with data together with beneficiaries who themselves produced that information, they might begin demanding results or action from their data. You have to be prepared to manage these expectations or connect them with resources to enable meaningful action.” One can imagine the frustration if participants saw their data leading to the need for a new clinic, yet a clinic never got built.

Big Data, Bias, and M&E

Opening the MERL (Monitoring, Evaluation, Research, and Learning) Tech Conference in London, André Clarke, Effectiveness and Learning Adviser at Bond, spoke in his keynote about the increasing importance of data in development. Many of the voices in the room echoed the trends and concerns I’ve observed over the last month. Is data the answer? How is it the answer?

“The tool is not going to solve your problem,” one speaker said during the infamous off-the-record Fail Fest, where attendees present their failures to learn from each other’s mistakes. The speaker shared the example of a new reporting initiative which hadn’t panned out as expected. She noted that “we initially thought tech would help us work faster and more efficiently, but now we are clearly seeing the importance of quality data over timely data”. Although digital data may be better and faster, that does not mean it’s solving the original problem.

In using data to evaluate problems, we have to make sure we are not under the illusion that we are dealing with the core issues at hand. For example, during my talk on Social Network Analysis we discussed both the opportunities and challenges of using this quantitative method in M&E. The conference consistently emphasized the importance of slower, deeper processes as opposed to the faster, shorter ones driven by technology.
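(For readers who haven’t encountered Social Network Analysis, here is a minimal sketch of the kind of quantitative step such an exercise might involve. It assumes Python with the networkx library, and the data-sharing network below is invented purely for illustration; it does not come from the talk or from Praekelt’s work.)

```python
# A toy Social Network Analysis step (hypothetical data, for illustration only).
# Requires: pip install networkx
import networkx as nx

# Invented edges: who shares programme data with whom.
edges = [
    ("Field team A", "M&E officer"),
    ("Field team B", "M&E officer"),
    ("M&E officer", "Country director"),
    ("Country director", "Donor"),
    ("Field team A", "Field team B"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Degree centrality highlights who sits at the centre of the data-sharing network.
centrality = nx.degree_centrality(G)
for name, score in sorted(centrality.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

Even a toy example like this shows both sides of the discussion: the quantitative output is quick and concrete, but it is only as good as the relationships you managed to capture in the first place.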

This holds true for how data is used in M&E practices. For example, I attended a heated debate on the role of “big data” in M&E and whether the convergence was inevitable. As one speaker mentioned, “if you close your eyes and forget the issue at hand is big data, you could feel like it was about any other tool used in M&E”. The problems around data collection, bias, inaccessibility, language, and tools exist in M&E regardless of whether the data is big or small.

Other core issues raised were power dynamics, inclusivity, and the fact that technology is made by people and therefore is not neutral. As Anahi Ayala Iacucci, Senior Director of Humanitarian Programs at Internews, said explicitly, “we are biased, and so we are building biased tools.” In her presentation, she talked about how technology mediates and alters human relationships. If we take the slower, deeper approach, we will be able to really explore our biases and understand the value and complications of data.

“Evaluators don’t understand data, and then managers and public don’t understand evaluation talk,” Maliha Khan of Daira said, bringing it back to my original concerns about translation and bridging gaps in the space. Many of the sessions sought to address this problem, a nice example being Cooper Smith’s Kuunika project in Malawi, which used local visual illustrations to accompany its survey questions on tablets. Another speaker pushed for us to move into the measurement space, as opposed to monitoring, which has the potential to be common ground we can all agree on.

As someone who feels responsible not only for communicating our work externally but also for sharing knowledge amongst our programmes internally, where did all this leave me? I think I’ll take my direction from Anna Maria Petruccelli, Data Analyst at Comic Relief, who suggested that rather than committing to being data-driven, organisations could commit to being data-informed.

To go even further with this advice, at Praekelt we make the distinction between data-driven and evidence-driven, where the latter acknowledges the need to attend to research design and to emphasize quality, not just quantity. Evidence encompasses the use of data, but adds the idea that not all data are equal and that, when interpreting data, we attend both to its source and to the research design.

I feel confident that turning our data into knowledge, while staying aware of how bias informs the way we use it, can be the first step forward on a unified journey. I also think this new path forward will leverage the power of storytelling to make data accessible and organisations better informed. It’s a road less traveled, yes, but hopefully that will make all the difference.

If you are interested in joining this conversation, we encourage you to submit to the first ever MERL Tech Jozi. Abstracts due March 31st.