
Living Our Vision: Applying the Principles of Digital Development as an Evaluative Methodology

by: Sylvia Otieno, MPA candidate at George Washington University and Consultant at the World Bank’s IEG; and Allana Nelson, Senior Manager for the Digital Principles at DIAL

For nearly a decade, the Principles of Digital Development (Digital Principles) have served to guide practitioners in developing and implementing digital tools in their programming. The plenary session at MERL Tech DC 2019, titled "Living Our Vision: Applying the Principles of Digital Development as an Evaluative Methodology," introduced attendees to four evaluation tools that have been developed to help organizations incorporate the Digital Principles into their design, planning, and assessments.

Laura Walker McDonald explaining the Monitoring and Evaluation Framework. (Photo by Christopher Neu)

This panel – organized and moderated by Allana Nelson, Senior Manager for Digital Principles stewardship at the Digital Impact Alliance (DIAL) – highlighted digital development frameworks and tools developed by SIMLab, USAID in collaboration with John Snow Inc., DIAL in collaboration with TechChange, and the Response Innovation Lab. These frameworks and toolkits build on the good-practice guidance provided by the Principles for Digital Development. They are intended to help development practitioners be more thoughtful about how they use technology and digital innovations in their programs and organizations, and to help organizations build evidence to inform program development.

Laura Walker McDonald, Senior Director for Insights and Impact at DIAL, presented the Monitoring and Evaluation Framework (developed during her time at SIMLab), which helps practitioners measure the impact of their work and the contribution that inclusive technologies make to their outcomes. The Framework grew out of the need for more evidence of the successes and failures of technology for social change. "We have almost no evidence of how innovation is brought to scale. This work is trying to reflect publicly the practice of sharing learnings and evaluations. Technology and development isn't as good as it could be because of this lack of evidence," McDonald said. The Principles for Digital Development provide the Framework's benchmarks. McDonald continues to refine the Framework based on feedback from community experts, and she welcomes input, which can be shared through this document.

Christopher Neu, COO of TechChange, introduced the new, cross-sector Digital Principles Maturity Matrix Tool for Proposal Evaluation that his team developed on behalf of DIAL. The Maturity Matrix helps donors and implementers assess how a proposal plans to apply the Digital Principles. Donors may use the tool to evaluate responses to their funding opportunities, and implementers may use it as they write their proposals. "This is a tool to give donors and implementers a way to talk about the Digital Principles in their work. This is the beginning of the process, not the end," Neu said during the session. Users of the Maturity Matrix score themselves from one to three against metrics that span each of the nine Digital Principles and the four stages of the Digital Principles project lifecycle. A score of one indicates that the proposal only loosely incorporates the identified activity or action into its plans for implementation. A score of two indicates that the program is clearly in line with best practice, or that the proposal's writers have at least given it considerable thought. Proposals that incorporate the Digital Principles at a deeper level and provide an action plan for increasing engagement earn a score of three. Not every project requires the same level of maturity, and not every Digital Principle will apply to every program. The scores are intended to give donors and organizations evidence that they are making the best and most responsible investment in technology.
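To make the matrix structure concrete, here is a minimal sketch, in Python, of how a reviewer's scores might be recorded and summarized: each of the nine Digital Principles can be scored from one to three at each lifecycle stage. The stage labels, the summarize helper, and the averaging logic are illustrative assumptions for this post, not a reproduction of the actual Maturity Matrix Tool.

```python
# Illustrative sketch only: a maturity-matrix score sheet in which each of the
# nine Digital Principles can be scored 1-3 at each project lifecycle stage.
# The stage labels and the averaging below are assumptions, not DIAL's tool.

PRINCIPLES = [
    "Design with the User",
    "Understand the Existing Ecosystem",
    "Design for Scale",
    "Build for Sustainability",
    "Be Data Driven",
    "Use Open Standards, Open Data, Open Source, and Open Innovation",
    "Reuse and Improve",
    "Address Privacy & Security",
    "Be Collaborative",
]
STAGES = ["Analyze & Plan", "Design & Develop", "Deploy & Implement", "Evaluate & Adapt"]  # hypothetical labels

def summarize(scores):
    """Average each principle's 1-3 scores across whichever stages were scored."""
    summary = {}
    for principle in PRINCIPLES:
        cells = [score for (p, _stage), score in scores.items() if p == principle]
        if cells:
            summary[principle] = sum(cells) / len(cells)
    return summary

# Example: user-centered design is clearly in line with best practice (2) at the
# planning stage, while sustainability is only loosely addressed (1) at deployment.
scores = {
    ("Design with the User", "Analyze & Plan"): 2,
    ("Build for Sustainability", "Deploy & Implement"): 1,
}
print(summarize(scores))
# {'Design with the User': 2.0, 'Build for Sustainability': 1.0}
```

In a sketch like this, a low average on a given principle would simply flag where a proposal's plan may need strengthening before funding; the real tool's metrics and scoring guidance are on the Digital Principles website.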

Steve Ollis, Senior Digital Health Advisor at John Snow Inc., presented the Digital Health Investment Review Tool (DHIRT), which helps donors investing in digital health programs make informed funding decisions. The tool asks donors to adhere to the Digital Principles and to the Principles of Donor Alignment for Digital Health (Digital Investment Principles), which are themselves based on the Digital Principles. Using the tool, practitioners can assess implementer proposals across 12 criteria. After receiving a score from one to five on each criterion (one being nascent and five being optimized), organizations can better assess how effectively they incorporate the Digital Principles and other best practices (including change management) into their project proposals.

Max Vielle, Global Director of Response Innovation Lab, introduced the Innovation Evidence Toolkit, which helps technology innovators in the humanitarian sector build evidence as they develop, pilot, and scale their prototypes. "We wanted to build a range of tools for implementors to assess their ability to scale the project," Vielle said of the toolkit. It helps humanitarian innovators and social entrepreneurs think through how they use technology when developing, piloting, and scaling their projects, and assess the scalability of their technologies. "We want to remove the barriers for non-humanitarian actors to act in humanitarian responses to get services to people who need them," Vielle said. The toolkit can be used by organizations with varying levels of capacity and is available offline for those working in low-connectivity environments.

Participants discuss the use of different tools for evaluating the Principles. (Photo by Christopher Neu)

Evidence-based decision making is key to improving the use of technologies in the development industry. The coupling of the Principles of Digital Development and evaluation methodologies will assist development practitioners, donors, and innovators not only in building evidence, but also in effectively implementing programs that align with the Digital Principles.

Evaluating ICT4D projects against the Digital Principles

By Laura Walker McDonald. This post was originally published on the Digital Impact Alliance's blog on March 29, 2018.

As I have written about elsewhere, we need more evidence of what works and what doesn’t in the ICT4D and tech for social change spaces – and we need to hold ourselves to account more thoroughly and share what we know so that all of our work improves. We should be examining how well a particular channel, tool or platform works in a given scenario or domain; how it contributes to development goals in combination with other channels and tools; how the team selected and deployed it; whether it is a better choice than not using technology or using a different sort of technology; and whether or not it is sustainable.

At SIMLab, we developed our Framework for Monitoring and Evaluation of Technology in Social Change projects to help implementers better measure the impact of their work. It offers resources toward a minimum standard of good practice that implementers can adopt or work toward, including guidance on how to design and conduct evaluations. With the support of the Digital Impact Alliance (DIAL), the resource is now finalized, and we have added new evaluation criteria based on the Principles for Digital Development.

Last week at MERL Tech London, DIAL was able to formally launch this product by sharing a 2-page summary available at the event and engaging attendees in a conversation about how it could be used. At the event, we joined over 100 organizations to discuss Monitoring, Evaluation, Research and Learning related to technology used for social good.

Why evaluate?

Evaluations provide snapshots of ongoing activity and the progress of a project at a specific point in time, based on a systematic and objective review against certain criteria. They may inform future funding and program design, adjust current program design, or gather evidence to establish whether a particular approach is useful. They can be used to examine how, and how far, technology contributes to wider programmatic goals. If set up well, your program should already have evaluation criteria and research questions defined well before it's time to commission the evaluation.

Evaluation criteria provide a useful frame for an evaluation, bringing in an external logic that might go beyond the questions that implementers and their management have about the project (such as ‘did our partnerships on the ground work effectively?’ or ‘how did this specific event in the host country affect operations?’) to incorporate policy and best practice questions about, for example, protection of target populations, risk management, and sustainability. The criteria for an evaluation could be any set of questions that draw on an organization’s mission, values, principles for action; industry standards or other best practice guidance; or other thoughtful ideas of what ‘good’ looks like for that project or organization. Efforts like the Principles for Digital Development can set useful standards for good practice, and could be used as evaluation criteria.

Evaluating our work, and sharing learning, is radical – and critically important

While the potential for technology to improve the lives of vulnerable people around the world is clear, it is also evident that these improvements are not keeping pace with advances in the sector. Understanding why requires looking critically at our work and holding ourselves to account. There is still insufficient evidence of the contribution technology makes to social change work. The evidence that does exist is often not shared, or the analysis does not get to the core issues. Even more important, the learnings from what has not worked, and why, have not been documented and absorbed.

Technology-enabled interventions succeed or fail based on their sustainability, business models, data practices, choice of communications channel and technology platform, organizational change, risk models, and user support – among many other factors. We need to build and examine evidence that considers these issues and that tells us what has been successful, what has failed, and why. Holding ourselves to account against standards like the Principles is a great way to improve our practice and honor our commitment to the people we seek to help through our work.

Using the Digital Principles as evaluation criteria

The Principles for Digital Development are a set of living guidance intended to help practitioners succeed in applying technology to development programs. They were developed, based on some pre-existing frameworks, by a working group of practitioners and are now hosted by the Digital Impact Alliance.

These nine principles could also form a useful set of evaluation criteria, not unlike the OECD evaluation criteria or the Sphere standards. The Principles overlap, so data can be used to examine more than one criterion, and not every evaluation would need to consider all of the Digital Principles.

Below are some examples of Digital Principles and sample questions that could initiate, or contribute to, an evaluation.

Design with the User: Great projects are designed with input from the stakeholders and users who are central to the intended change. How far did the team design the project with its users, based on their current tools, workflows, needs and habits, and work from clear theories of change and adaptive processes?

Understand the Existing Ecosystem: Great projects and programs are built, managed, and owned with consideration given to the local ecosystem. How far did the project work to understand the local, technology and broader global ecosystem in which the project is situated? Did it build on existing projects and platforms rather than duplicating effort? Did the project work sensitively within its ecosystem, being conscious of its potential influence and sharing information and learning?

Build for Sustainability: Great projects factor in the physical, human, and financial resources that will be necessary for long-term sustainability. How far did the project: 1) think through the business model, ensuring that the value for money and incentives are in place not only during the funded period but afterwards, and 2) ensure that long-term financial investments in critical elements like system maintenance and support, capacity building, and monitoring and evaluation are in place? Did the team consider whether there was an appropriate local partner to work through, hand over to, or support the development of, such as a local business or government department?

Be Data Driven: Great projects fully leverage data, where appropriate, to support project planning and decision-making. How far did the project use real-time data to make decisions, use open data standards wherever possible, and collect and use data responsibly according to international norms and standards?

Use Open Standards, Open Data, Open Source, and Open Innovation: Great projects make appropriate choices, based on the circumstances and the sensitivity of their project and its data, about how far to use open standards, open the project’s data, use open source tools and share new innovations openly. How far did the project: 1) take an informed and thoughtful approach to openness, thinking it through in the context of the theory of change and considering risk and reward, 2) communicate about what being open means for the project, and 3) use and manage data responsibly according to international norms and standards?

For a more complete set of guidance, see the full Framework for Monitoring and Evaluating Technology, and the more nuanced and in-depth guidance on the Principles, available on the Digital Principles website.

What Are Your ICT4D Challenges? Take a DIAL Survey to Learn What Helps and Hurts Us All

By Laura Walker McDonald, founder of BetterLab.io. Originally posted on ICT Works on March 26, 2018.


When it comes to the impact and practice of our ICT4D work, we're long on stories and short on evidence. My previous organization, SIMLab, developed Frameworks on Context Analysis and Monitoring and Evaluation of technology projects to try to tackle the challenge at the micro level.

But we also have little aggregated data about the macro trends and challenges of our growing sector. That’s led the Digital Impact Alliance (DIAL) to conduct an entirely new kind of data-gathering exercise, and one that would add real quantitative data to what we know about what it’s like to implement projects and develop platforms.

Please help us gather new insights from more voices

Please take our survey on the reality of delivering services to vulnerable populations in emerging markets using digital tools. We’re looking for experiences from all of DIAL’s major stakeholder groups:

  • NGO leaders from the project site to the boardroom;
  • Technology experts;
  • Platform providers and mobile network operators;
  • Governments and donors.

We’re adding to this survey with findings with in-depth interviews with 50 people from across those groups.

Please forward this survey!

We want to hear from those whose voices aren’t usually heard by global consultation and research processes. We know that the most innovative work in our space happens in projects and collaborations in the Global South – closest to the underserved communities who are our highest priority.

Please forward this survey so we can hear from those innovators: from the NGOs, government ministries, service providers, and field offices who are doing the important work of delivering digitally enabled services to communities every day.

It’s particularly important that we hear from colleagues in government, who may be supporting digital development projects in ways far removed from the usual digital development conversation.

Why should I take and share the survey?

We’ll use the data to help measure the impact of what we do – this will be a baseline for indicators of interest to DIAL. But it will provide a unique opportunity for you to help us build a unique snapshot of the challenges and opportunities you face in your work, in funding, designing, or delivering these services.

You’ll be answering questions we don’t believe are asked enough – about your partnerships, about how you cover your costs, and about the technical choices you’re making, specific to the work you do – whether you’re a businessperson, NGO worker, technologist, donor, or government employee.

How do I participate?

Please take the survey here. It will take 15-20 minutes to complete, and you’ll be answering questions, among others, about how you design and procure digital projects; how easy and how cost-effective they are to undertake; and what you see as key barriers. Your response can be anonymous.

To thank you for your time, if you leave us your email, we’ll share our findings with you and invite you into the conversation about the results. We’ll also be sharing our summary findings with the community.

We hope you’ll help us – and share this link with others.

Please help us get the word out about our survey, and help us gather more and better data about how our ecosystem really works.