Tag Archives: MERL Tech

MERL Tech DC Session Ideas are due Monday, Apr 22!

MERL Tech is coming up in September 2019, and there are only a few days left to get your session ideas in for consideration! We’re actively seeking practitioners in monitoring, evaluation, research, learning, data science, technology (and other related areas) to facilitate every session.

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. Submit your session ideas by midnight ET on April 22, 2019. You will hear back from us by May 20 and, if selected, you will be asked to submit the final session title, summary and outline by June 17.

Submit your session ideas here by April 22, midnight ET

This year’s conference theme is MERL Tech: Taking Stock

Conference strands include:

Tech and traditional MERL: How is digital technology enabling us to do what we’ve always done, but better (consultation, design, community engagement, data collection and analysis, databases, feedback, knowledge management)? What case studies can be shared to help the wider sector learn and grow? What kinks do we still need to work out? What evidence base exists that can support us to identify good practices? What lessons have we learned? How can we share these lessons and/or skills with the wider community?

Data, data, and more data: How are new forms and sources of data allowing MERL practitioners to enhance their work? How are MERL practitioners using online platforms, big data, digitized administrative data, artificial intelligence, machine learning, sensors, and drones? What does that mean for the ways that we conduct MERL and for who conducts MERL? What concerns are there about how these new forms and sources of data are being used, and how can we address them? What evidence shows that these new forms and sources of data are improving MERL (or not improving MERL)? What good practices can inform how we use new forms and sources of data? What skills can be strengthened and shared with the wider MERL community to achieve more with data?

Emerging tools and approaches: What can we do now that we’ve never done before? What new tools and approaches are enabling MERL practitioners to go the extra mile? Is there a use case for blockchain? What about facial recognition and sentiment analysis in MERL? What are the capabilities of these tools and approaches? What early cases or evidence is there to indicate their promise? What ideas are taking shape that should be tried and tested in the sector? What skills can be shared to enable others to explore these tools and approaches? What are the ethical implications of some of these emerging technological capabilities?

The Future of MERL: Where should we be going and what should the future of MERL look like? What does the state of the sector, of digital data, of technology, and of the world in which we live mean for an ideal future for the MERL sector? Where do we need to build stronger bridges for improved MERL? How should we partner and with whom? Where should investments be taking place to enhance MERL practices, skills and capacities? How will we continue to improve local ownership, diversity, inclusion and ethics in technology-enabled MERL? What wider changes need to happen in the sector to enable responsible, effective, inclusive and modern MERL?

Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines. Please consider these in your session proposals and in how you are choosing your speakers and facilitators.

Submit your session ideas now!

MERL Tech is dedicated to creating a safe, inclusive, welcoming and harassment-free experience for everyone. Please review our Code of Conduct. Session submissions are reviewed and selected by our steering committee.

Join us for MERL Tech DC, Sept 5-6th!

MERL Tech DC: Taking Stock

September 5-6, 2019

FHI 360 Academy Hall, 8th Floor
1825 Connecticut Avenue NW
Washington, DC 20009

We gathered at the first MERL Tech Conference in 2014 to discuss how technology was enabling the field of monitoring, evaluation, research and learning (MERL). Since then, rapid advances in technology and data have altered how most MERL practitioners conceive of and carry out their work. New media and ICTs have permeated the field to the point where most of us can’t imagine conducting MERL without the aid of digital devices and digital data.

The rosy picture of the digital data revolution and an expanded capacity for decision-making based on digital data and ICTs has been clouded, however, with legitimate questions about how new technologies, devices, and platforms — and the data they generate — can lead to unintended negative consequences or be used to harm individuals, groups and societies.

Join us in Washington, DC, on September 5-6 for this year’s MERL Tech Conference where we’ll be taking stock of changes in the space since 2014; showcasing promising technologies, ideas and case studies; sharing learning and challenges; debating ideas and approaches; and sketching out a vision for an ideal MERL future and the steps we need to take to get there.

Conference strands are the same four described above: Tech and traditional MERL; Data, data, and more data; Emerging tools and approaches; and The Future of MERL.

Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines.

Submit your session ideas, register to attend the conference, or reserve a demo table for MERL Tech DC now!

You’ll join some of the brightest minds working on MERL across a wide range of disciplines – evaluators, development and humanitarian MERL practitioners, small and large non-profit organizations, government and foundations, data scientists and analysts, consulting firms and contractors, technology developers, and data ethicists – for 2 days of in-depth sharing and exploration of what’s been happening across this multidisciplinary field and where we should be heading.

MERL for Blockchain Interventions: Integrating MERL into Token Design

Guest post by Michael Cooper. Mike is a Senior Social Scientist at Emergence who advises foreign assistance funders, service providers and evaluators on blockchain applications. He can be reached at emergence.cooper@gmail.com.

Tokens Could be Our Focus

There is no real evidence base about what does and does not work when applying blockchain technology to interventions seeking social impact. Most current blockchain interventions are driven by developers (programmers) and visionary entrepreneurs. Little thinking in current blockchain interventions goes into designing for “social” impact: there is an overabundant trust in the technology to achieve the outcomes, little focus on the humans interacting with that technology, and little integration of relevant evidence from behavioral economics, behavior change design, human-centered design, and related fields.

To build the needed evidence base, Monitoring, Evaluation, Research and Learning (MERL) practitioners will have to get to know not only the broad strokes of blockchain technology but also the specifics of token design and tokenomics (the political economics of tokenized ecosystems). Token design could become the focal point for MERL on blockchain interventions since:

  • Most, if not all, blockchain interventions will involve some type of desired behavior change
  • The token provides the link between the ledger (which is the blockchain) and the social ecosystem created by the token in which the behavior change is meant to happen
  • Hence the token is the “nudge” meant to leverage behavior change in the social ecosystem while governing the transactions on the blockchain ledger. 

(While this post focuses on these points, it will not go into a full discussion of what tokens are and how they create ecosystems. There are some very good resources that do, which you can review at your leisure: the Complexity Institute has published a book exploring the various attributes of complexity and the main themes involved in tokenomics, while Outlier Ventures has published what I consider to be the best guidance on token design. The Outlier Ventures guidance contains many of the tools that will be familiar to MERL practitioners (problem analysis, stakeholder mapping, etc.) and should be consulted.)

Hence, by understanding token design and its requirements and mapping them against our current MERL thinking, tools and practices, we can develop new thinking and tools that could be the starting point for building our much-needed evidence base.

What is a “blockchain intervention”? 

As MERL practitioners, we roughly define an “intervention” as a group of inputs and activities meant to leverage outcomes within a given ecosystem. “Interventions” are what we are usually mandated to assess, evaluate and help improve.

When thinking about MERL and blockchain, it is useful to think of two categories of “blockchain interventions”. 

1) Integrating the blockchain into MERL data collection, entry, management, analysis or dissemination practices and

2) MERL strategies for interventions using the blockchain in some way, shape or form.

Here we will focus on #2, and in so doing demonstrate that while the blockchain is an innovative, potentially disruptive technology, evaluating its application to social outcomes is still a matter of assessing behavior change against the dimensions of intervention design.

Designing for Behavior Change

We generally design interventions (programs, projects, activities) to “nudge” a certain type of behavior (stated as outcomes in a theory of change) amongst a certain population (beneficiaries, stakeholders, etc.). We attempt to integrate mechanisms of change into our intervention designs, but often fail to for a variety of reasons (lack of understanding, lack of resources, lack of political will, etc.). This lack of due diligence in design is partly responsible for the lack of evidence around what works and what does not work in our current universe of interventions.

Enter blockchain technology, which, as MERL practitioners, we will be responsible for assessing in the foreseeable future. We will need to determine how interventions using the blockchain attempt to nudge behavior, what behaviors they seek to nudge, amongst whom, when, and how well the design of the intervention accomplishes these functions. To do that, we will need to better understand how blockchains use tokens to nudge behavior.

The Centrality of the Token

We have all used tokens before. Stores issue coupons that can only be redeemed at those stores; we get receipts for groceries as soon as we pay; arcades make you buy tokens instead of just using quarters. The coupons and arcade tokens can be considered utility tokens, meaning that they can only be used in a specific “ecosystem” (in this case, the store and the arcade, respectively). The grocery store receipt is a token because it demonstrates ownership: if you are stopped on the way out of the store, showing your receipt demonstrates that you now have rights of ownership over the foodstuffs in your bag.

Whether you realize it or not at the time, these tokens are trying to nudge your behavior. The store gives you the coupon because the more time you spend in the store redeeming coupons, the greater the likelihood that you will spend additional money there. The grocery store wants you to pay for all your groceries, while the arcade wants you to buy more tokens than you end up using.

If needed, we could design MERL strategies to assess how well these different tokens nudged the desired behaviors. We would do this, in part, by thinking about how each token is designed relative to the behavior it wants (i.e. the value, frequency and duration of coupons, etc.).
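To make that concrete, here is a toy Python sketch of how a MERL strategy might record each token’s design dimensions alongside an observed behavioral outcome such as the redemption rate. All names and numbers are hypothetical, chosen only to illustrate the comparison logic:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CouponDesign:
    """Design dimensions of a token that a MERL strategy might record."""
    value: float        # face value of the discount
    frequency: int      # coupons issued per customer per month
    duration_days: int  # validity window

def redemption_rate(issued: int, redeemed: int) -> float:
    """A simple behavioral outcome: the share of tokens acted upon."""
    return redeemed / issued if issued else 0.0

# Hypothetical observations: (coupons issued, coupons redeemed) per design.
observations = {
    CouponDesign(value=5.0, frequency=2, duration_days=30): (1000, 180),
    CouponDesign(value=2.0, frequency=8, duration_days=7): (1000, 95),
}

for design, (issued, redeemed) in observations.items():
    print(design, "->", redemption_rate(issued, redeemed))
```

Comparing outcomes across designs like this is the same logic an evaluator applies to any incentive: vary the design dimensions, observe the behavior.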

Thinking about these ecosystems and their respective tokens helps us understand the interdependence between 1) the blockchain as a ledger that records transactions, 2) the token, which captures both the governance structure for how transactions are stored on the blockchain ledger and the incentive models for 3) the mechanisms of change in the social ecosystem created by the token.

Figure #1:  The inter-relationship between the blockchain (ledger), token and social eco-system

Token Design as Intervention Design  

Just as we assess theories of change and their mechanisms against intervention design, we will assess blockchain-based interventions against their token design in much the same way. This is because blockchain tokens capture all the design dimensions of an intervention: the problem to be solved, the stakeholders and how they influence the problem (and thus the solution), stakeholder attributes (as mapped out in something like a stakeholder analysis), the beneficiary population, assumptions and risks, and so on.

Outlier Ventures uses what it calls a Token Utility Canvas as a milestone in its token design process. The canvas captures many of the problem diagnostics, stakeholder assessments, and other due diligence tools that will be familiar to MERL practitioners from intervention design, and its fields can be correlated with the dimensions of an evaluability assessment tool (used here as a proxy for the necessary dimensions of an intervention design, since an evaluability assessment examines the health of every component of that design). Hence token design could largely be thought of as intervention design and evaluated as such.

Table #1: Comparing Token Design with Dimensions of Program Design (as represented in an Evaluability Assessment)

This table is not meant to be exhaustive, and not all of its fields are explained here, but it could be a useful starting point for developing our own thinking and tools for this emerging space.

The Token as a Tool for Behavior Change

Coming up with a taxonomy of blockchain interventions and their relevant tokens is a necessary task, but any blockchain intervention that needs to nudge behavior will have to have a token.

Consider supply chain management, where blockchains are increasingly being used as the ledger system. Supply chains typically comprise numerous actors packaging, shipping, receiving, and applying quality control protocols to various goods, each with their own ledger of the relevant goods as they snake their way through the supply chain. This creates ample opportunities for fraud and theft, and high costs associated with reconciling the different ledgers of the different actors at different points in the supply chain. With the blockchain as a common ledger holding trusted data, many of these costs diminish: transactions (shipping, receiving, repackaging, etc.) can happen more seamlessly and reconciliation costs drop.

However, even in “simple” applications such as this, there are behavior change implications. We still want the supply chain actors to perform their functions in a manner that adds value to the supply chain ecosystem as a whole, rewarding them for good behavior within the ecosystem and punishing them for bad.

What if shippers trying to pass on a faulty product had already deposited a certain value of currency in an escrow account (housed in a smart contract on the blockchain)? If they were found to be attempting a prohibited behavior (passing on faulty products), they would automatically surrender a certain amount from the escrow account in the blockchain smart contract. How much should be deposited in the escrow account? What should the ratio be between the degree of punishment and the undesired action? These are behavioral questions around a mechanism of change; they are dimensions of current intervention designs and will be increasingly relevant in token design.
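The escrow mechanism above can be sketched as a toy simulation. This is plain Python rather than an actual smart-contract language, and the class name, deposit size, and penalty ratio are all hypothetical, but it shows exactly which values those behavioral questions target:

```python
class EscrowContract:
    """Toy stand-in for a smart contract holding shippers' deposits.

    required_deposit and penalty_ratio are precisely the design
    parameters the behavioral questions ask about: how much must be
    staked, and how much is surrendered per violation.
    """

    def __init__(self, required_deposit: float, penalty_ratio: float):
        self.required_deposit = required_deposit
        self.penalty_ratio = penalty_ratio
        self.deposits: dict[str, float] = {}

    def join(self, actor: str, amount: float) -> None:
        """Stake a deposit; joining requires meeting the minimum."""
        if amount < self.required_deposit:
            raise ValueError("deposit below required stake")
        self.deposits[actor] = amount

    def report_violation(self, actor: str) -> float:
        """Invoked by the validation function (automated or human)."""
        penalty = self.deposits[actor] * self.penalty_ratio
        self.deposits[actor] -= penalty
        return penalty

contract = EscrowContract(required_deposit=100, penalty_ratio=0.25)
contract.join("shipper_a", 100)
penalty = contract.report_violation("shipper_a")  # faulty product flagged
print(penalty, contract.deposits["shipper_a"])    # 25.0 75.0
```

Tuning `required_deposit` and `penalty_ratio` against observed behavior is exactly the kind of intervention design question a MERL strategy would be asked to answer.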

The point of this is to demonstrate that even “benign” applications of the blockchain, like supply chain management, have behavior change implications and thus require good due diligence in token design.

There is a lot that could be said about the validation function in this process: who validates that bad behavior has taken place and should be punished, or that good behavior should be rewarded? There are lessons to be learned from results-based contracting and the role of the validator in such a contracting vehicle. This “validating” function will need to be thought through in terms of what can be automated and what needs a “human touch” (and who is responsible, what methods they should use, etc.).

Implications for MERL

If tokens are fundamental to MERL strategies for blockchain interventions, there are several critical implications:

  • MERL practitioners will need to be heavily integrated into the due diligence processes and tools for token design
  • MERL strategies will need to be highly formative, if not developmental, in facilitating the timeliness and overall effectiveness of the feedback loops informing token design
  • New thinking and tools will need to be developed to assess the relationships between blockchain governance, token design and mechanisms of change in the resulting social ecosystem. 

The opportunity cost for impact and “learning” rises the less MERL practitioners are integrated into the due diligence of token design. This is because the costs of adapting a token design are relatively low compared to those of current social interventions, partly due to the ability to integrate automated feedback.

Blockchain-based interventions present us with significant learning opportunities because we can use the technology itself as a data collection/management tool in learning about what does and does not work. Feedback from an appropriate MERL strategy could inform decision-making around token design that is then coded into the token on an iterative basis. For example, as stakeholders’ incentives shift (e.g., supply chain shippers incur new costs and their value proposition changes), token adaptation can respond in a timely fashion, so long as the MERL feedback that informs the token design is accurate.

There is a need to determine which components of these feedback loops can be completed by automated functions and which require a “human touch”. For example, which dimensions of token design can be informed by smart infrastructure (e.g., temperature gauges on shipping containers in the supply chain) versus household surveys completed by enumerators? This will be a task to complete and iteratively improve, starting with initial token design and lasting through the lifecycle of the intervention. Token design dimensions, outlined in the Token Utility Canvas, and decision-making needs will have to be translated into MERL questions matched to the best strategy to answer them, automated or human, much as we do in current interventions.
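Such a feedback loop can be sketched as a simple rule that proposes an adjustment to a token design parameter from an observed violation rate, whichever source (container sensors or enumerator surveys) produced it. All names, thresholds, and step sizes here are hypothetical:

```python
def adjust_penalty_pct(current_pct: int, violation_rate: float,
                       target_rate: float = 0.05, step_pct: int = 5,
                       floor_pct: int = 5, ceiling_pct: int = 90) -> int:
    """Propose a new slashing percentage from observed behavior.

    The violation rate might come from automated smart infrastructure
    (e.g., temperature gauges on shipping containers) or from household
    surveys; the rule does not care about the source, only its accuracy.
    """
    if violation_rate > target_rate:
        return min(ceiling_pct, current_pct + step_pct)
    if violation_rate < target_rate:
        return max(floor_pct, current_pct - step_pct)
    return current_pct

pct = 25
pct = adjust_penalty_pct(pct, violation_rate=0.12)  # too many violations
print(pct)  # 30
pct = adjust_penalty_pct(pct, violation_rate=0.01)  # compliance improved
print(pct)  # 25
```

Deciding which inputs to this rule can be automated and which need human judgment is the design task described above.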

Many of our current due diligence tools used in both intervention and evaluation design (stakeholder mapping, problem analysis, cost-benefit analysis, value propositions, etc.) will need to be adapted to the types of relationships found within a tokenized ecosystem. These include the relationships of influence between the social ecosystem and the blockchain ledger itself (or, more specifically, the governance of that ledger), as demonstrated in Figure #1.

This could be our biggest priority as MERL practitioners. While blockchain interventions could create incredible opportunities for social experimentation, the need for human-centered due diligence (incentivizing humans toward positive behavior change) in token design is critical. Overreliance on the technology to drive social outcomes is an already well-evidenced risk, one that could be avoided with blockchain-based solutions if the gap between technologists, social scientists and practitioners can be bridged.

Blockchain for International Development: Using a Learning Agenda to Address Knowledge Gaps

Guest post by John Burg, Christine Murphy, and Jean Paul Pétraud, international development professionals who presented a one-hour session at the MERL Tech DC 2018 conference on Sept. 7, 2018. Their presentation focused on creating a learning agenda to help MERL practitioners gauge the value of blockchain technology for development programming. Opinions and work expressed here are their own.


As a trio of monitoring, evaluation, research, and learning (MERL) practitioners in international development, we are keenly aware of the quickly growing interest in blockchain technology. Blockchain is a type of distributed database that creates a nearly unalterable record of cryptographically secure peer-to-peer transactions without a central, trusted administrator. While it was originally designed for digital financial transactions, it is also being applied to a wide variety of interventions, including land registries, humanitarian aid disbursement in refugee camps, and evidence-driven education subsidies. International development actors, including government agencies, multilateral organizations, and think tanks, are looking at blockchain to improve effectiveness or efficiency in their work.
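The “nearly unalterable record” property can be illustrated with a minimal hash-chain sketch in Python. This is an illustration only, not any production blockchain’s format: it omits distribution, consensus, and signatures, and shows just why earlier records are tamper-evident, since each block commits to the hash of its predecessor:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's full contents, including its link to the past."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Append a new block linked to the hash of the previous one."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def is_valid(chain):
    """Verify every block still points at the hash of its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, ["Alice pays Bob 5"])
append_block(chain, ["Bob pays Carol 2"])
assert is_valid(chain)

# Tampering with an earlier record breaks every later link.
chain[0]["transactions"] = ["Alice pays Bob 500"]
assert not is_valid(chain)
```

Altering an old transaction changes that block’s hash, so the next block’s stored `prev_hash` no longer matches; in a real system, rewriting every later block across a distributed network is what makes the record practically unalterable.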

Naturally, as MERL practitioners, we wanted to learn more. Could this radically transparent, shared database managed by its users have important benefits for data collection, management, and use? As MERL practice evolves to better suit adaptive management, what role might blockchain play? For example, one inherent feature of blockchain is the unbreakable and traceable linkage between blocks of data. How might such a feature improve the efficiency or effectiveness of data collection, management, and use? What are the advantages of blockchain over other, more commonly used technologies? To guide our learning, we started with an inquiry designed to help us determine if, and to what degree, the various features of blockchain add value to the practice of MERL. With our agenda established, we set out eagerly to find a blockchain case study to examine, with the goal of presenting our findings at the September 2018 MERL Tech DC conference.

What we did

We documented 43 blockchain use cases through internet searches, most of which were described with glowing claims like “operational costs… reduced up to 90%,” or with the assurance of “accurate and secure data capture and storage.” We found a proliferation of press releases, white papers, and persuasively written articles. However, we found no documentation or evidence of the results blockchain was purported to have achieved in these claims. We also did not find lessons learned or practical insights, as are available for other technologies in development.

We fared no better when we reached out directly to several blockchain firms, via email, phone, and in person. Not one was willing to share data on program results, MERL processes, or adaptive management for potential scale-up. Despite all the hype about how blockchain will bring unprecedented transparency to processes and operations in low-trust environments, the industry is itself opaque. From this, we determined that the lack of evidence supporting the value claims of blockchain in the international development space is a critical gap for potential adopters.

What we learned

Blockchain firms supporting development pilots are not practicing what they preach — improving transparency — by sharing data and lessons learned about what is working, what isn’t working, and why. There are many generic decision trees and sales pitches available to convince development practitioners of the value blockchain will add to their work. But, there is a lack of detailed data about what happens when development interventions use blockchain technology.

Since the function of MERL is to bridge knowledge gaps and help decision-makers take action informed by evidence, we decided to explore the crucial questions MERL practitioners may ask before determining whether blockchain will add value to data collection, management, and use. More specifically, rather than a go/no-go decision tool, we propose using a learning agenda to probe the role of blockchain in data collection, data management and data use at each stage of project implementation.

“Before you embark on that shiny blockchain project, you need to have a very clear idea of why you are using a blockchain.”

Avoiding the Pointless Blockchain Project, Gideon Greenspan (2015)

Typically, “A learning agenda is a set of questions, assembled by an organization or team, that identifies what needs to be learned before a project can be planned and implemented.” The process of developing and finding answers to learning questions is most useful when it’s employed continuously throughout the duration of project implementation, so that changes can be made based on what is learned about changes in the project’s context, and to support the process of applying evidence to decision-making in adaptive management.
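A learning agenda defined this way is also something a team can represent explicitly: a set of questions, each tied to a stage of implementation and revisited as evidence accumulates. A minimal sketch (all question text and field names are illustrative, not from any specific project):

```python
from dataclasses import dataclass, field

@dataclass
class LearningQuestion:
    question: str
    stage: str                          # e.g. "design", "implementation"
    findings: list[str] = field(default_factory=list)

agenda = [
    LearningQuestion("Can blockchain resolve trust issues between "
                     "stakeholder groups?", stage="design"),
    LearningQuestion("Does blockchain improve data quality over the "
                     "existing system?", stage="implementation"),
]

# Employed continuously: record evidence as it emerges...
agenda[0].findings.append(
    "Pilot interviews suggest the trust gap is procedural, not technical.")

# ...and keep surfacing the questions still awaiting answers.
open_questions = [q for q in agenda if not q.findings]
print([q.question for q in open_questions])
```

The point is not the code but the discipline: the agenda stays live through implementation, with answers feeding back into adaptive-management decisions.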

We explored various learning agenda questions for data collection, management and use that should continue to be developed and answered throughout the project cycle. However, because the content of a learning agenda is highly context-dependent, we focused on general themes. Examples of questions that might be asked by beneficiaries, implementing partners, donors, and host-country governments, include:

  • What could each of a project’s stakeholder groups gain from the use of blockchain across the stages of design and implementation, and, would the benefits of blockchain incentivize them to participate?
  • Can blockchain resolve trust or transparency issues between disparate stakeholder groups, e.g. to ensure that data reported represent reality, or that they are of sufficient quality for decision-making?
  • Are there less expensive, more appropriate, or easier-to-execute existing technologies that already meet each group’s MERL needs?
  • Are there unaddressed MERL management needs blockchain could help address, or capabilities blockchain offers that might inspire new and innovative thinking about what is done, and how it gets done?

This approach resonated with other MERL for development practitioners

We presented this approach to a diverse group of professionals at MERL Tech DC, including other MERL practitioners and IT support professionals, representing organizations from multilateral development banks to US-based NGOs. Facilitated as a participatory roundtable, the session participants discussed how MERL professionals could use learning agendas to help their organizations both decide whether blockchain is appropriate for intervention design, as well as guide learning during implementation to strengthen adaptive management.

Questions and issues raised by the session participants ranged widely, from how blockchain works, to expressing doubt that organizational leaders would have the risk appetite required to pilot blockchain when time and costs (financial and human resource) were unknown. Session participants demonstrated an intense interest in this topic and our approach. Our session ran over time and side conversations continued into the corridors long after the session had ended.

Next Steps

Our approach, as it turns out, echoes others in the field who question whether the benefits of blockchain add value above and beyond existing technologies, or accrue to stakeholders beyond the donors that fund them. This trio of practitioners will continue to explore ways MERL professionals can help their teams learn about the benefits of blockchain technology for international development. But, in the end, it may turn out that the real value of blockchain is not the application of the technology itself, but rather its role as an impetus to question what we do, why we do it, and how we could do it better.

Creative Commons License
Blockchain for International Development: Using a Learning Agenda to Address Knowledge Gaps by John Burg, Christine Murphy, and Jean-Paul Petraud is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

MERL and the 4th Industrial Revolution: Submit your AfrEA abstract now!

by Dhashni Naidoo, Genesis Analytics

Digitization is everywhere! Digital technologies and data have changed the way we engage with each other and how we work. We cannot escape the effects of digitization, whether in our personal capacity (how our own data is being used) or in our professional capacity (understanding how to use data and technology). These changes are exciting! But we also need to consider the challenges they present to the MERL community and their impact on development.

The advent and proliferation of big data has the potential to change how evaluations are conducted. New skills are needed to process and analyse big data. Mathematics, statistics and analytical skills will be ever more important. As evaluators, we need to be discerning about the data we use. In a world of copious amounts of data, we need to ensure we have the ability to select the right data to answer our evaluation questions.

We also have an ethical and moral duty to manage data responsibly. We need new strategies and tools to guide the ways in which we collect, store, use and report data. Evaluators need to improve our skills as related to processing and analysing data. Evaluative thinking in the digital age is evolving and we need to consider the technical and soft skills required to maintain integrity of the data and interpretation thereof.

Though technology can make data collection faster and cheaper, two important considerations are access to technology by vulnerable groups and data integrity. Women, girls and people in rural areas normally do not have the same levels of access to technology as men and boys. This limits our ability to rely solely on technology to collect data from these population groups, because we need to be aware of inclusion, bias and representativity. Equally, we need to consider how to maintain the quality of data collected through new technologies such as mobile phones, and to understand how the use of new devices might change or alter how people respond.

In a rapidly changing world where technologies such as AI, Blockchain, Internet of Things, drones and machine learning are on the horizon, evaluators need to be robust and agile in how we change and adapt.

For this reason, a new strand has been introduced at the African Evaluation Association (AfrEA) conference, taking place from 11 to 15 March 2019 in Abidjan, Côte d'Ivoire. This strand, The Fourth Industrial Revolution and its Impact on Development: Implications for Evaluation, will focus on five sub-themes:

  • Guide to Industry 4.0 and Next Generation Tech
  • Talent and Skills in Industry 4.0
  • Changing World of Work
  • Evaluating youth programmes in Industry 4.0
  • MERLTech

Genesis Analytics will be curating this strand.  We are excited to invite experts working in digital development and practitioners at the forefront of technological innovation for development and evaluation to submit abstracts for this strand.

The deadline for abstract submissions is 16 November 2018. For more information please visit the AfrEA Conference site!

Does your MERL Tech effort need innovation or maintenance?

by Stacey Berlow, Managing Partner at Project Balance and Jana Melpolder, MERL Tech DC Volunteer and Communications Manager at Inveneo. Find Jana on Twitter:  @JanaMelpolder

At MERL Tech DC 2018, Project Balance’s Stacey Berlow led a session titled “Application Maintenance Isn’t Sexy, But Critical to Success.” In her session and presentation, she outlined several reasons why software maintenance planning and funding is essential to the sustainability of an M&E software solution.

The problems that arise with software or applications go well beyond day-to-day care and management. A foundational study on software maintenance by B. P. Lientz and E. B. Swanson [1] looked at the activities of 487 IT organizations and found that maintenance activities can be broken down into four types:

  • Corrective (bug fixing),
  • Adaptive (impacts due to changes outside the system),
  • Perfective (enhancements), and
  • Preventive (monitoring and optimization)

The breakdown below outlines the percentage of time IT departments spend on the different types of maintenance. Note that most of the time dedicated to maintenance is not defect fixing (corrective), but enhancing (perfecting) the tool or system.

  • Corrective (21.7% of effort): emergency fixes 12.4%; routine debugging 9.3%
  • Adaptive (23.6%): changes to data inputs and files 17.4%; changes to hardware and system software 6.2%
  • Perfective (51.3%): customer enhancements 41.8%; improvements to documentation 5.5%; optimization 4.0%
  • Other (3.4%): various 3.4%
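As a rough illustration (ours, not the study's), these effort shares can be used to apportion an annual maintenance budget across the four types. The budget figure below is invented:

```python
# Illustrative sketch: splitting an annual maintenance budget using the
# effort shares reported by Lientz and Swanson (figures above).
EFFORT_SHARE = {
    "corrective": 0.217,
    "adaptive": 0.236,
    "perfective": 0.513,
    "other": 0.034,
}

def allocate_budget(annual_budget):
    """Apportion a maintenance budget across the four maintenance types."""
    return {kind: round(annual_budget * share, 2)
            for kind, share in EFFORT_SHARE.items()}

print(allocate_budget(50_000))
```

On this breakdown, over half of a typical maintenance budget goes to enhancements rather than bug fixes.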

The study also pointed out some of the most common maintenance problems:

  • Poor quality application system documentation
  • Excessive demand from customers
  • Competing demands for maintenance personnel time
  • Inadequate training of user personnel
  • Turnover in the user organizations

Does Your Project Need Innovations or Just Maintenance?

Organizations often prioritize innovation over maintenance. They have a list of enhancements or improvements they want to make, and they start new projects when what they should really be focusing on is maintenance. International development organizations often want to develop new software with the latest technology; they want NEW software for their projects. In reality, what is usually needed is maintenance and enhancement of an existing product.

Moreover, when an organization is considering adopting a new piece of software, it's absolutely vital that it think about the cost of maintenance in addition to the cost of development. Experts estimate that the cost of maintenance can range from 40% to 90% of the original build cost [2]. Maintenance costs a lot more than many organizations realize.
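As a back-of-the-envelope illustration (our sketch, not from the session), applying that 40-90% range to a hypothetical $100,000 build shows how quickly total cost of ownership grows:

```python
# Hypothetical sketch: lifetime cost of an M&E tool, assuming maintenance
# runs 40-90% of the original build cost (the range cited above).
# All figures are illustrative.

def total_cost_of_ownership(build_cost, low_ratio=0.4, high_ratio=0.9):
    """Return (low, high) estimates of build cost plus lifetime maintenance."""
    return build_cost * (1 + low_ratio), build_cost * (1 + high_ratio)

low, high = total_cost_of_ownership(100_000)
print(f"A $100,000 build may cost ${low:,.0f} to ${high:,.0f} over its lifetime")
```

Budgeting only for the build, in other words, may account for barely half of what the software will actually cost.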

It’s also not easy to know beforehand or to estimate what the actual cost of maintenance will be. Creating a Service Level Agreement (SLA), which specifies the time required to respond to issues or deploy enhancements as part of a maintenance contract, is vital to having a handle on the human resources, price levels and estimated costs of maintenance.

As Stacey emphasizes, “Open Source does not mean ‘free’. Updates to DHIS2 versions, Open MRS, Open HIE, Drupal, WordPress, and more WILL require maintenance to custom code.”

It’s All About the Teamwork

Another point to consider when it comes to the cost of maintaining your app or software is the time and money spent on staff. Members of your team will not always be well-versed in a given piece of software. Also, when transferring a software asset to a funder or a ministry/government entity, consider the skill level of the receiving team as well as the time availability of its members. Many software products cannot be well maintained by teams that were not involved in developing them; as a result, they often fall into disrepair and become unusable. A software vendor may be better equipped to monitor and respond to issues than the receiving team.

What Can You Do?

So what are effective ways to ensure the sustainability of software tools? There are a few strategies you can use. First, ensure that your IT staff members are involved in the planning of your project or organization's RFP process. They will give you valuable metrics on effort and cost up front, so that you can secure funding. Second, scale the size of your project so that your tool budget matches your funds: consider the minimum software functionality you need, and enhance the tools later. Third, invite the right stakeholders and IT staff members to meetings and conference calls as soon as the project begins. Having the right people on board early on will make a huge difference in how you manage and transition software to country stakeholders at the end of the project!

The session at MERL Tech ended with a discussion of the need for, and value of, involving local skills and IT experts as part of the programming team. Local knowledge and IT expertise is one of the most important, if not the most important, pieces of the application maintenance puzzle. One key idea I took away was that application maintenance should start at the local level and grow from there. Local IT personnel will be able to answer many technical questions and address many maintenance issues. Furthermore, IT staff members from international development agencies will be able to learn from local IT experts as well, boosting the capacity of all staff members across the board.

Application maintenance may not be the most interesting part of an international development project, but it is certainly one of the most vital to help ensure the project’s success and ongoing sustainability.

Check out this great Software Maintenance/Monitoring Checklist to ensure you’ve considered everything you need when planning your next MERL Tech (or other) effort!

[1] B. P. Lientz and E. B. Swanson, Software Maintenance Management: A Study of the Maintenance of Computer Application Software in 487 Data Processing Organizations, Addison-Wesley, 1980

[2] Jeff Hanby, Software Maintenance: Understanding and Estimating Costs, https://bit.ly/2Ob3iOn

How to Create a MERL Culture within Your Organization

Written by Jana Melpolder, MERL Tech DC Volunteer and former ICT Works Editor. Find Jana on Twitter:  @JanaMelpolder

As organizations grow, they become increasingly aware of how important MERL (Monitoring, Evaluation, Research, and Learning) is to their international development programs. To meet this challenge, new hires need to be brought on board, but more importantly, changes need to happen in the organization’s culture.

How can nonprofits and organizations change to include more MERL? Friday afternoon’s MERL Tech DC  session “Creating a MERL Culture at Your Nonprofit” set out to answer that question. Representatives from Salesforce.org and Samaschool.org were part of the discussion.

Salesforce.org staff members Eric Barela and Morgan Buras-Finlay emphasized that their organization has set aside resources (financial and otherwise) for international and external M&E. “A MERL culture is the foundation for the effective use of technology!” shared Eric Barela.

Data is a vital part of MERL, but those providing it to organizations often need to “hold the hands” of those on the receiving end. What is especially vital is helping people understand this data and gain deeper insight from it. It’s not just about the numbers – it’s about what is meant by those numbers and how people can learn and improve using the data.

According to Salesforce.org, an organization's MERL culture comprises its understanding of the benefits of defining, measuring, understanding, and learning for social impact with rigor. Building or maintaining a MERL culture doesn't mean letting the data team do whatever it likes or putting it in charge; instead, it's vital to focus on outcomes. Salesforce.org's MERL staff prioritize keeping a foot in the door in many places and meeting often with people from different departments.

Where does technology fit into all of this? According to Salesforce.org, the push is on to keep the technology ethical. Morgan Buras-Finlay described it well, saying "technology goes from building a useful tool to a tool that will actually be used."

Another participant on Friday’s panel was Samaschool’s Director of Impact, Kosar Jahani. Samaschool describes itself as a San Francisco-based nonprofit focused on preparing low-income populations to succeed as independent workers. The organization has “brought together a passionate group of social entrepreneurs and educators who are reimagining workforce development for the 21st century.”

Samaschool creates a MERL culture through Learning Calls for their different audiences and funders. These Learning Calls are done regularly, they have a clear agenda, and sometimes they even happen openly on Facebook LIVE.

By ensuring a high level of transparency, Samaschool is also aiming to create a culture of accountability in which it can learn from failures as well as successes. By using social media, doors are opened and people gain easier access to information that would otherwise have been difficult to obtain.

Kosar also noted a downside of this kind of transparency: putting information in such a public place carries risk and can cost the organization future investment. However, Samaschool feels the practice has helped build relationships and enhanced interactions.

Sadly, flight delays prevented a third presenter, Andrew Means of Big Elephant Studios, from attending MERL Tech. Luckily, his slides were presented by Eric Barela. They highlighted three things that are needed to create a MERL culture:

  • Tools – investments in tools that help an organization acquire, access, and analyze the data it needs to make informed decisions
  • Processes – Investments in time to focus on utilizing data and supporting decision making
  • Culture – Organizational values that ensure that data is invested in, utilized, and listened to

One of Andrew’s main points was that generally, people really do want to gain insight and learn from data. The other members of the panel reiterated this as well.

A few lingering questions from the audience included:

  • How do you measure how culture is changing within an organization?
  • How does one determine whether an organization's culture is more focused on MERL than previously?
  • Which social media platforms and strategies can be used to create a MERL culture that provides transparency to clients, funders, and other stakeholders?

What about you? How do you create and measure the “MERL Culture” in your organization?

Report back on MERL Tech DC

Day 1, MERL Tech DC 2018. Photo by Christopher Neu.

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of “MERL Tech” as an initiative are to:

  • Transform and modernize MERL in an intentionally responsible and inclusive way
  • Promote ethical and appropriate use of tech (for MERL and more broadly)
  • Encourage diversity & inclusion in the sector & its approaches
  • Improve development, tech, data & MERL literacy
  • Build/strengthen community, convene, help people talk to each other
  • Help people find and use evidence & good practices
  • Provide a platform for hard and honest talks about MERL and tech and the wider sector
  • Spot trends and future-scope for the sector

Our fifth MERL Tech DC conference took place on September 6-7, 2018, with a day of pre-workshops on September 5th. Some 300 people from 160 organizations joined us for the two days, and another 70 people attended the pre-workshops.

Attendees came from a wide diversity of professions and disciplines:

What professional backgrounds did we see at MERL Tech DC in 2018?

An unofficial estimate on speaker racial and gender diversity is here.

Gender balance on panels

At this year’s conference, we focused on 5 themes (See the full agenda here):

  1. Building bridges, connections, community, and capacity
  2. Sharing experiences, examples, challenges, and good practice
  3. Strengthening the evidence base on MERL Tech and ICT4D approaches
  4. Facing our challenges and shortcomings
  5. Exploring the future of MERL

As always, sessions were related to: technology for MERL, MERL of ICT4D and Digital Development programs, MERL of MERL Tech, digital data for adaptive decisions/management, ethical and responsible data approaches and cross-disciplinary community building.

Big Data and Evaluation Session. Photo by Christopher Neu.

Sessions included plenaries, lightning talks and breakout sessions. You can find a list of sessions here, including any presentations that have been shared by speakers and session leads. (Go to the agenda and click on the session of interest. If we have received a copy of the presentation, there will be a link to it in the session description).

One topic that we explored more in-depth over the two days was the need to get better at measuring ourselves and understanding both the impact of technology on MERL (the MERL of MERL Tech) and the impact of technology overall on development and societies.

As Anahi Ayala Iacucci said in her opening talk — “let’s think less about what technology can do for development, and more about what technology does to development.” As another person put it, “We assume that access to tech is a good thing and immediately helps development outcomes — but do we have evidence of that?”

Feedback from participants

Some 17.5% of participants filled out our post-conference feedback survey, and 70% of them rated their experience either “awesome” or “good”. Another 7% of participants rated individual sessions through the “Sched” app, with an average session satisfaction rating of 8.8 out of 10.

Topics that survey respondents suggested for next time include: more basic tracks and more advanced tracks, more sessions relating to ethics and responsible data and a greater focus on accountability in the sector.  Read the full Feedback Report here!

What’s next? State of the Field Research!

In order to arrive at an updated sense of where the field of technology-enabled MERL is, a small team of us is planning to conduct some research over the next year. At our opening session, we did a little crowdsourcing to gather input and ideas about what the most pressing questions are for the “MERL Tech” sector.

We’ll be keeping you informed here on the blog about this research and welcome any further input or support! We’ll also be sharing more about individual sessions here.

MERL on the Money: Are we getting funding for data right?

By Paige Kirby, Senior Policy Advisor at Development Gateway

Time for a MERL pop quiz: Out of US $142.6 billion spent in ODA each year, how much goes to M&E?

A)  $14.1-17.3 billion
B)  $8.6-10 billion
C)  $2.9-4.3 billion

It turns out, the correct answer is C. An average of only $2.9-$4.3 billion — or just 2-3% of all ODA spending — goes towards M&E.
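A quick sanity check (ours) confirms the arithmetic: 2-3% of US$142.6 billion does land in that range.

```python
# Sanity check of the figures above: 2-3% of annual ODA (~US$142.6 billion).
oda_total = 142.6                 # billions of USD
m_and_e_low = oda_total * 0.02    # 2% share
m_and_e_high = oda_total * 0.03   # 3% share
print(f"Estimated M&E spend: ${m_and_e_low:.1f}B to ${m_and_e_high:.1f}B")
```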

That’s all we get. And despite the growing breadth of logframes and depth of donor reporting requirements, our MERL budgets are not likely to suddenly scale up.

So, how can we use our drop in the bucket better, to get more results for the same amount of money?

At Development Gateway, we’ve been doing some thinking and applied research on this topic, and have three key recommendations for making the most of MERL funding.

Teamwork

Image Credit: Kjetil Korslien CC BY NC 2.0

When seeking information for a project baseline, midline, endline, or anything in between, it has become second nature to budget for collecting (or commissioning) primary data ourselves.

Really, it would be more cost- and time-effective for all involved if we got better at asking peers in the space for already-existing reports or datasets. This is also an area where our donors, particularly those with large country portfolios, could help with introductions and matchmaking.

Consider the Public Option

Image Credit: Development Gateway

And speaking of donors as a second point – why are we implementers responsible for collecting MERL relevant data in the first place?

If partner governments and donors invested in country statistical and administrative data systems, we implementers would not have such incentive or need to conduct one-off data collection.

For example, one DFID Country Office we worked with noted that a lack of solid population and demographic data limited their ability to monitor all DFID country programming. As a result, DFID decided to co-fund the country’s first census in 30 years – which benefited DFID and non-DFID programs.

The term “country systems” can sound a bit esoteric, pretty OECD-like – but it really can be a cost-effective public good, if properly resourced by governments (or donor agencies), and made available.

Flip the Paradigm

Image Credit: Rafael J M Souza CC BY 2.0

And finally, a third way to get more bang for our buck is – ready or not – Results Based Financing, or RBF. RBF is coming (and, for folks in health, it’s probably arrived). In an RBF program, payment is made only when pre-determined results have been achieved and verified.

But another way to think about RBF is as an extreme paradigm shift of putting M&E first in program design. RBF may be the shake-up we need, in order to move from monitoring what already happened, to monitoring events in real-time. And in some cases – based on evidence from World Bank and other programming – RBF can also incentivize data sharing and investment in country systems.

Ultimately, the goal of MERL should be using data to improve decisions today. Through better sharing, systems thinking, and (maybe) a paradigm shake-up, we stand to gain a lot more mileage with our 3%.


Integrating big data into program evaluation: An invitation to participate in a short survey

As we all know, big data and data science are becoming increasingly important in all aspects of our lives. There is similarly rapid growth in the applications of big data in the design and implementation of development programs. Examples range from the use of satellite images and remote sensors in emergency relief and the identification of poverty hotspots, to the use of mobile phones to track migration and estimate changes in income (by tracking airtime purchases), social media analysis to track sentiment and predict increases in ethnic tension, and smartphone and Internet of Things (IoT) sensors that monitor health through biometric indicators.

Despite the rapidly increasing role of big data in development programs, there is speculation that evaluators have been slower to adopt big data than have colleagues working in other areas of development programs. Some of the evidence for the slow take-up of big data by evaluators is summarized in “The future of development evaluation in the age of big data”.  However, there is currently very limited empirical evidence to test these concerns.

To try to fill this gap, my colleagues Rick Davies and Linda Raftree and I would like to invite those of you who are interested in big data and/or the future of evaluation to complete the attached survey. The survey, which takes about 10 minutes to complete, asks evaluators to report on the data collection and analysis techniques they use in the evaluations they design, manage or analyze, while also asking data scientists how familiar they are with evaluation tools and techniques.

The survey was originally designed to obtain feedback from participants in the MERL Tech conferences on “Exploring the Role of Technology in Monitoring, Evaluation, Research and Learning in Development” that are held annually in London and Washington, DC, but we would now like to broaden the focus to include a wider range of evaluators and data scientists.

One of the ways in which the findings will be used is to help build bridges between evaluators and data scientists by designing integrated training programs for both professions that introduce the tools and techniques of both conventional evaluation practice and data science, and show how they can be combined to strengthen both evaluations and data science research. “Building bridges between evaluators and big data analysts” summarizes some of the elements of a strategy to bring the two fields closer together.

The findings of the survey will be shared through this and other sites, and we hope this will stimulate a follow-up discussion. Thank you for your cooperation and we hope that the survey and the follow-up discussions will provide you with new ways of thinking about the present and potential role of big data and data science in program evaluation.

Here’s the link to the survey – please take a few minutes to fill it out!

You can also join me, Kerry Bruce and Pete York on September 5th for a full day workshop on Big Data and Evaluation in Washington DC.