
Big Data, Big Responsibilities

by Catherine Gwin

Big data comes with big responsibilities: both funders and recipient organizations have ethical and data security obligations.

Big data allows organizations to count and bring visibility to marginalized populations and to improve decision-making. However, concerns about data privacy, security, and integrity pose challenges in data collection and preservation. What does informed consent look like in data collection? What are the potential risks we bring to populations? What are the risks of compliance?

Throughout the MERL Tech DC panel, “Big Data, Big Responsibilities,” Mollie Woods, Senior Monitoring, Evaluation and Learning (MEL) Advisor of ChildFund International and Michael Roytman, Founder and Board Director of Dharma Platform, unpacked some of the challenges based on their experiences. Sam Scarpino, Dharma’s Chief Strategy Officer, served as the session moderator, posing important questions about this area.

The session highlighted three takeaways organizations should consider when approaching data security.

1) Language Barriers between Evaluators and Data Scientists

Both Roytman and Woods agreed that the divide between evaluators and data scientists stems from a lack of knowledge of each other’s field. How do you ask a question when you don’t know you need to ask it?

In Woods’ experience, the Monitoring and Evaluation team and the IT team each have a role in data security, but they work independently. The rapid evolution of the M&E field leaves little time for staying attuned to data security needs. Additionally, the organization’s limited resources can prevent the IT team from supporting programmatic data security.

A potential solution ChildFund has considered is investing in an IT person with a focus on MERL who has experience and knowledge in the international or humanitarian sphere. However, many organizations fall short when it comes to financing data security. In addition, identifying an individual with these skills can be challenging.

2) Data Collection

Data breaches expose confidential information, putting vulnerable populations at risk of harm and exploitative use of their data. As we gather data, we must ask what informed consent looks like. Are we communicating to beneficiaries the risks of releasing their personal information?

In Woods’ experience, ChildFund approaches data security through a child-safeguarding lens across stakeholders and program participants, where all are responsible for data security. Its child safeguarding policy includes data security and privacy protocols; however, Woods noted that dissemination and implementation across countries remains a lingering question. Many in-country civil society organizations lack the capacity, knowledge, and resources to implement data security protocols, especially in country contexts without laws, regulations, or frameworks related to data security and privacy. Currently, ChildFund is advocating for refresher trainings on the policy so that all global partners stay up to date on organizational changes.

3) Data Preservation

Data breaches are a privacy concern when organizations’ data includes individuals’ sensitive information, putting beneficiaries at risk of exploitation by bad actors. Roytman explained that specific actors, risks, and threats affect specific kinds of data, though humanitarian aid organizations are not always a primary target. Nonetheless, this shouldn’t distract organizations from potential risks; rather, it should open discussion about how to identify and mitigate them.

Protecting sensitive data requires a proper security system, something that not all platforms provide, especially free ones. Ultimately, security is a financial investment that requires time in order to avoid and mitigate risks and potential threats. To build support and investment in security, ChildFund is working with Dharma to pilot a small program demonstrating the use of big data analytics with a built-in data security system.

Roytman suggested approaching ethical concerns by applying the CIA Triad: Confidentiality, Integrity, and Availability. There will always be tradeoffs, he said. If we don’t properly invest in data security and mitigate potential risks, there will be additional challenges to data collection. If we don’t understand data security, how can we ensure informed consent?
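The triad can be made concrete with a small illustration. The sketch below is a stdlib-only Python example (not any tool discussed in the session) of the integrity leg: a hypothetical field team signs each survey record with a keyed hash (HMAC) so that later tampering is detectable.

```python
import hashlib
import hmac
import json
import secrets

# A secret key held by the organization (kept separate from the data).
key = secrets.token_bytes(32)

def sign_record(record: dict, key: bytes) -> str:
    """Return an HMAC-SHA256 tag over a canonical encoding of the record."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, tag: str, key: bytes) -> bool:
    """Constant-time check that the record has not been altered."""
    return hmac.compare_digest(sign_record(record, key), tag)

record = {"respondent_id": "anon-0042", "village": "A", "consented": True}
tag = sign_record(record, key)

assert verify_record(record, tag, key)      # untouched record passes
record["consented"] = False                 # someone tampers with the data
assert not verify_record(record, tag, key)  # tampering is detected
```

Confidentiality and availability involve their own tradeoffs (encryption, backups, access control); this only shows how cheaply an integrity check can be added to a data pipeline.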

Many organizations find themselves doing more harm than good due to lack of funding. Big data can be an inexpensive approach to collecting large quantities of data, but if it leads to harm, there is a problem. This is a complex issue to resolve; however, as Roytman concluded, the opposite of complexity is not simplicity, but rather transparency.

See Dharma’s blog post about this session here.

Related Resources and Articles

5 tips for operationalizing Responsible Data policy

By Alexandra Robinson and Linda Raftree

MERL and development practitioners have long wrestled with complex ethical, regulatory, and technical aspects of adopting new data approaches and technologies. The topic of responsible data (RD) has gained traction over the past five years or so, and a handful of early adopters have developed and begun to operationalize institutional RD policies. Translating policy into practical action, however, can feel daunting to organizations. Constrained budgets, complex internal bureaucracies, and ever-evolving technology and regulatory landscapes make it hard to even know where to start.

The Principles for Digital Development provide helpful high level standards, and donor guidelines (such as USAID’s Responsible Data Considerations) offer additional framing. But there’s no one-size-fits-all policy or implementation plan that organizations can simply copy and paste in order to tick all the responsible data boxes. 

We don’t think organizations should do that anyway, given that each organization’s context and operating approach is different, and policy means nothing if it’s not rolled out through actual practice and behavior change!

In September, we hosted a MERL Tech pre-workshop on Operationalizing Responsible Data to discuss and share different ways of turning responsible data policy into practice. Below we’ve summarized some tips shared at the workshop. RD champions in organizations of any size can consider these when developing and implementing RD policy.

1. Understand Your Context & Extend Empathy

  • Before developing policy, conduct a non-punitive assessment (a.k.a. a landscape assessment, self-assessment, or staff research process) of existing data practices, norms, and decision-making structures. This should engage everyone who will be using or be affected by the new policies and practices. Help everyone relax and feel comfortable sharing how they’ve been managing data up to now so that the organization can then improve. (Hint: avoid the term ‘audit’, which makes everyone nervous.)
  • Create ‘safe space’ to share and learn through the assessment process:
    • Allow staff to speak anonymously about their challenges and concerns whenever possible
    • Highlight and reinforce promising existing practices
    • Involve people in a ‘self-assessment’
    • Use participatory workshops (e.g. work with a team to map a project’s data flows or conduct a Privacy Impact Assessment or a Risk-Benefits Assessment) – this allows everyone who participates to gain RD awareness and learn new practical tools, while highlighting any areas that need attention. The workshop lead or “RD champion” can also then get a better sense of the wider organization’s knowledge, attitudes, and practices related to RD.
    • Acknowledge (and encourage institutional leaders to affirm) that most staff don’t have “RD expert” written into their JDs; reinforce that staff will not be ‘graded’ or evaluated on skills they weren’t hired for.
  • Identify organizational stakeholders likely to shape, implement, or own aspects of RD policy and tailor your engagement strategies to their perspectives, motivations, and concerns. Some may feel motivated financially (avoiding fines or the cost of a data breach); others may be motivated by human rights or ethics; whereas some others might be most concerned with RD with respect to reputation, trust, funding and PR.
  • Map organizational policies, major processes (like procurement, due diligence, grants management), and decision making structures to assess how RD policy can be integrated into these existing activities.

2. Consider Alternative Models to Develop RD Policy 

  • There is no ‘one size fits all’ approach to developing RD policy. As the (still small, but promising) number of organizations adopting policy grows, different approaches are emerging. Here are some that we’ve seen:
    • Top-down: An institutional-level policy is developed, normally at the request of someone on the leadership team/senior management. It is then adapted and applied across projects, offices, etc. 
      • Works best when there is strong leadership buy-in for RD policy and a focal point (e.g. an ‘Executive Sponsor’) coordinating policy formation and navigating stakeholders
    • Bottom-up: A group of staff are concerned about RD but do not have support or interest from senior leadership, so they ‘self-start’ the learning process and begin shaping their own practices, joining together, meeting, and communicating regularly until they have wider buy-in and can approach leadership with a use case and budget request for an organization-wide approach.
      • Good option if there is little buy-in at the top and you need to build a case for why RD matters.
    • Project- or Team-Generated: Development and application of RD policies are piloted within a targeted project or projects or on one team. Based on this smaller slice of the organization, the project or team documents its challenges, process, and lessons learned to build momentum for and inform the development of future organization-wide policy. 
      • Promising option when organizational awareness and buy-in for RD is still nascent and/or resources to support RD policy formation and adoption (staff, financial, etc.) are limited.
    • Hybrid approach: Organizational policy/policies are developed through pilot testing across a reasonably representative sample of projects or contexts. For example, an organization with diverse programmatic and geographical scope develops and pilots policies in a select set of country offices that can offer different learning and experiences; e.g., a humanitarian-focused setting, a development-focused setting, and a mixed setting; a small office, a medium-sized office, and a large office; 3-4 offices in different regions; offices that are funded in various ways; etc.
      • Promising option when an organization is highly decentralized and works across diverse country contexts and settings. Supports the development of approaches that are relevant and responsive to diverse capacities and data contexts.

3. Couple Policy with Practical Tools, and Pilot Tools Early and Often

  • In order to translate policy into action, couple it with practical tools that support existing organizational practices. 
  • Make sure tools and processes empower staff to make decisions and relate clearly to policy standards or components; for example:
    • If the RD policy includes a high-level standard such as, “We ensure that our partnerships with technology companies align with our RD values,” give staff tools and guidance to assess that alignment. 
  • When developing tools and processes, involve target users early and iteratively. Don’t worry if draft tools aren’t perfectly formatted. Design with users to ensure tools are actually useful before you sink time into tools that will sit on a shelf at best, and confuse or overburden staff at worst. 

4. Integrate and “Right-Size” Solutions 

  • As RD champions, it can be tempting to approach RD policy in a silo, forgetting it is one of many organizational priorities. Be careful to integrate RD into existing processes, align RD with decision-making structures and internal culture, and do not place unrealistic burdens on staff.
  • When building tools and processes, work with stakeholders to develop responsibility assignment charts (e.g. RACI, MOCHA) and determine decision makers.
  • When developing responsibility matrices, estimate the hours each stakeholder (including partners, vendors, and grantees) will dedicate to a particular tool or process. Work with anticipated end users to ensure that processes:
    • Can realistically be carried out within a normal workload
    • Will not excessively burden staff and partners
    • Are realistically proportionate to the size, complexity, and risk involved in a particular investment or project
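As a rough illustration of the responsibility-matrix idea above, the sketch below encodes a hypothetical RACI chart with per-stakeholder hour estimates and a simple “right-sizing” check. All role names, hours, and thresholds are invented for the example, not drawn from the workshop.

```python
# Hypothetical RACI chart for one RD process (e.g. a privacy impact
# assessment), with rough hour estimates per stakeholder.
raci = {
    "MEL advisor":      {"role": "Responsible", "hours_per_project": 8},
    "IT lead":          {"role": "Consulted",   "hours_per_project": 3},
    "Country director": {"role": "Accountable", "hours_per_project": 2},
    "Partner staff":    {"role": "Informed",    "hours_per_project": 1},
}

# "Right-sizing" check: flag the process if the total burden exceeds
# what stakeholders agreed is realistic for a small project.
MAX_HOURS_SMALL_PROJECT = 16
total = sum(s["hours_per_project"] for s in raci.values())

if total > MAX_HOURS_SMALL_PROJECT:
    print(f"Process too heavy for a small project: {total}h")
else:
    print(f"Within agreed workload: {total}h of {MAX_HOURS_SMALL_PROJECT}h")
# → Within agreed workload: 14h of 16h
```

Even a toy chart like this forces the conversation the tip recommends: naming a decision maker for each process and checking that the burden fits within a normal workload before rolling it out.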

5. Bridge Policy and Behavior Change through Accompaniment & Capacity Building 

  • Integrating RD policy and practices requires behavior change and can feel technically intimidating to staff. Remember to reassure staff that no one (not even the best-resourced technology firms!) has responsible data mastered, and that perfection is not the goal.
  • In order to feel confident using new tools and approaches to make decisions, staff need knowledge to analyze information. Skills and knowledge required will be different according to role, so training should be adapted accordingly. While IT staff may need to know the ins and outs of network security, general program officers certainly do not. 
  • Accompany staff as they integrate RD processes into their work. Walk alongside them, answering questions along the way, but more importantly, helping staff build confidence to develop their own internal RD compass. That way the pool of RD champions will grow!

What approaches have you seen work in your organization?

MERL Tech DC 2019 Feedback Report

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of the conference and related community are to:

  • Improve development, tech, data & MERL literacy
  • Help people find and use evidence & good practices
  • Promote ethical and appropriate use of technology
  • Build and strengthen a “MERL Tech community”
  • Spot trends and future-scope for the sector
  • Transform and modernize MERL in an intentionally responsible and inclusive way

Our sixth MERL Tech DC conference took place on September 5-6, 2019, and we held four pre-workshops on September 4. Some 350 people from 194 organizations joined us for the two days, and another 100 people attended the pre-workshops. About 56% of participants attended for the first time, while 44% were returnees.

Who attended?

Attendees came from a wide range of organization types and professions.

Conference Themes

The theme for this year’s conference was “Taking Stock” and we had 4 sub-themes:

  1. Tech and Traditional MERL
  2. Data, Data, Data
  3. Emerging Approaches to MERL
  4. The Future of MERL

State of the Field Research

A small team shared their research on “The MERL Tech State of the Field” organized into the above 4 themes. The research will be completed and shared on the MERL Tech site before the end of 2019. (We’ll be presenting it at the South African Evaluation Association Conference in October and at the American Evaluation Association conference in November)

As always, MERL Tech conference sessions were related to: technology for MERL, MERL on ICT4D and Digital Development programs, MERL of MERL Tech, data for decision-making, ethical and responsible data approaches and cross-disciplinary community building. (See the full agenda here):

We checked in with participants on the last day to see how the field had shifted since 2015, when our keynote speaker (Ben Ramalingam) gave some suggestions on how tech could improve MERL.

  • Ben’s future vision
  • Where MERL Tech 2019 sessions fell on the expired-tired-wired schematic
  • What participants would add to the schematic to update it for 2019 and beyond

Diversity and Inclusion

We have been making an effort to improve diversity and inclusion at the conference and in the MERL Tech space. An unofficial estimate of speaker racial and gender diversity: compared to 2018, when we first began tracking, the number of women of color speakers increased by 5% and men of color speakers by 2%. The number of white female speakers decreased by 6%, and the number of white male speakers went down by 1%. Our gender balance remained fairly consistent.

Where we are failing on diversity and inclusion is at having speakers and participants from outside of North America and Europe – that likely has to do with cost and visas which affect who can attend. It also has to do with who organizations select to represent them at MERL Tech. We’re continuing to try to find ways to collaborate with groups working on MERL Tech in different regions. We believe that new and/or historically marginalized voices should be more involved in shaping the future of the sector and the future of MERL Tech. (If you would like to support us on this or get involved, please contact Linda!)

Post Conference Feedback

Some 25% of participants filled in the post-conference survey and 85% rated their experience “good” or “awesome” (up from 70% in 2018). Answers did not significantly differ based on whether a participant had attended previously or not. Another 8.5% rated sessions via the “Sched” conference agenda app, with an average session satisfaction rating of 9.1 out of 10.

The top rated session was on “Decolonizing Data and Technology in MERL.” As one participant said, “It shook me out of my complacency. It is very easy to think of the tech side of the work we do as ‘value free’, but this is not the case. Being a development practitioner it is important for me to think about inequality in tech and data further than just through the implementation of the projects we run.” Another noted that “As a white, gay male who has a background in international and intercultural education, it was great to see other fields bringing to light the decolonizing mindset in an interactive way. The session was enlightening and brought up conversation that is typically talked about in small groups, but now it was highlighted in front of the entire audience.”

Sign up for MERL Tech News if you’d like to read more about this and other sessions. We’re posting a series of posts and session summaries.

Key suggestions for improving next time were similar to those we hear every year: less showcasing and pitching, titles that match what is actually delivered in the session, well-prepared presenters, and sessions that are relevant, practical, and applicable.

Additionally, several people commented that the venue had some issues with noise from conversations in the common area spilling into breakout rooms and making it hard to focus. Participants also complained that there was a large amount of trash and waste produced, and suggested more eco-friendly catering for next time.

Access the full feedback report here.

Where/when should the conference happen?

As noted, we are interested in finding a model for MERL Tech that allows for more diversity of voices and experiences, so we asked participants how often and where they thought we should do MERL Tech in the future. The largest group (44.3%) felt we should run MERL Tech in DC every two years and somewhere else in the intervening years. Some 23% said to keep it in DC every year, and around 15% suggested multiple MERL Tech conferences each year, in DC and elsewhere. (We were pleased that no one selected the option of “stop doing MERL Tech altogether, it’s unnecessary.”)

Given this response, we will continue exploring options for partners who would like to support financially and logistically to enable MERL Tech to happen outside of DC. Please contact Linda if you’d like to be involved or have ideas on how to make this happen.

New ways to get involved!

Last year, the idea of having a GitHub repository was raised, and this year we were excited to have GitHub join us. They had come up with the idea of creating a MERL Tech Center on GitHub as well, so it was a perfect match! More info here.

We also had a request to create a MERL Tech Slack channel (which we have done). Please get in touch with Linda by email or via Slack if you’d like to join us there for ongoing conversations on data collection, open source, technology (or other channels you request!)

As always you can also follow us on Twitter and MERL Tech News.

Blockchain: Can we talk about impact yet?

by Shailee Adinolfi, John Burg and Tara Vassefi

In September 2018, a three-member team of international development professionals presented a session called “Blockchain Learning Agenda: Practical MERL Workshop” at MERL Tech DC. Afterwards, the team published a blog post stating that the authors had “… found no documentation or evidence of the results blockchain was purported to have achieved in these claims [of radical improvements]. [They] also did not find lessons learned or practical insights, as are available for other technologies in development.”

The blog post inspired a barrage of unanticipated discussion online. Unfortunately, in some cases readers (and re-posters) misinterpreted the point as disparaging blockchain. Rather, the post’s authors were simply suggesting ways to cope with the uncertainty of piloting blockchain projects. Perhaps the most important outcome of the session and post, however, is that they motivated a coordinated response from several organizations that wanted to delve deeper into the blockchain learning agenda.

To do that, on March 5, 2019, Chemonics, Truepic, and Consensys hosted a roundtable titled “How to Successfully Apply Blockchain in International Development.” All three organizations are applying blockchain in different and complementary ways relevant to international development — including project monitoring, evaluation, learning (MEL) innovations as well as back-end business systems. The roundtable enabled an open dialogue about how blockchain is being tested and leveraged to achieve better international development outcomes. The aim was to explore and engage with real case studies of blockchain in development and share lessons learned within a community of development practitioners in order to reduce the level of opacity surrounding this innovative and rapidly evolving technology.

Three case studies were highlighted:

1. “One-click Biodata Solution” by Chemonics 

  • Chemonics’ Blockchain for Development Solutions Lab designed and implemented a RegTech solution for the USAID foreign assistance and contracting space that sought to leverage the blockchain-based identity platform created by BanQu to dramatically expedite and streamline the collection and verification of USAID biographical data sheets (biodatas), improve personal data protection, and reduce incidents of error and fraud in the hiring process for professionals and consultants hired under USAID contracts.
  • Chemonics processes several thousand biodatas per year and accordingly devotes significant labor effort and cost to support the current paper-based workflow.
  • Chemonics’ technology partner, BanQu, used a private, permissioned blockchain on the Ethereum network to pilot a biodata solution.
  • Chemonics successfully piloted the solution with BanQu, resulting in 8 blockchain-based biodatas being fully processed in compliance with donor requirements.
  • Improved data protection was a priority for the pilot. One goal of the solution was to make it possible for individuals to maintain control over their back-up documentation, like passports, diplomas, and salary information, which could be shared temporarily with Chemonics through the use of an encrypted key, rather than having documentation emailed and saved to less secure corporate digital file systems.
  • Following the pilot, Chemonics determined through qualitative feedback that users across the biodata ecosystem found the blockchain solution easy to use and effective at reducing the level of effort in the biodata completion process.
  • Chemonics also compiled lessons-learned, including refinements to the technical requirements, options to scale the solution, and additional user feedback and concerns about the technology to inform decision-making around further biodata pilots. 

2. Project i2i presented by Consensys

  • Problem statement: 35% of the Filipino population is unbanked, and 56% lives in rural areas. The Philippine economy relies heavily on domestic remittances. UnionBank sought to partner with hundreds of rural banks that lack the access to electronic banking services that larger commercial banks have.
  • In 2017, as part of the Central Bank of the Philippines’ national strategy for financial inclusion, the central banks of Singapore and the Philippines announced that they would collaborate on financial technology using the regulatory sandbox approach. This gives industry stakeholders room and time to experiment before regulators enact potentially restrictive policies that could stifle innovation and growth. As part of the agreement, the central banks will share resources, best practices, and research, and collaborate to “elevate financial innovation” in both economies.
  • Solution design assumptions for Philippines context:
    • It can be easily operated and implemented with limited integration, even in low-tech settings;
    • It enables lower transaction time and lower transaction cost;
    • It enables more efficient operations for rural banks, including reduction of reconciliations and simplification of accounting processes.
  • UnionBank worked with ConsenSys and participating rural banks to create an interbank ledger with tokenization. The payment platform is private and Ethereum-based.
  • In the initial pilot, 20 steps were eliminated in the process.
  • Technology partners: ConsenSys, Azure (Microsoft), Kaleido, Amazon Web Services.
  • As a follow-up to the i2i project, UnionBank partnered with Singapore-based OCBC Bank; the parties deployed the Adhara liquidity management and international payments platform for a blockchain-based international remittance pilot.
  • Potential for national and regional collaboration/network development.
  • For details on the i2i project, download the full case study here or watch the 4-minute video clip.

3. Controlled Capture presented by Truepic

  • Truepic is a technology company specializing in digital image and video authentication. Truepic’s Controlled Capture technology uses cutting-edge computer vision, AI, and cryptography to test images and video for signs of manipulation, designating as authenticated only those that pass its rigorous verification tests. Through the public blockchain, Truepic creates an immutable record for each photo and video captured through this process, so that their authenticity can be proven to the highest evidentiary standards. This technology has been used in over 100 countries by citizen journalists, activists, international development organizations, NGOs, insurance companies, lenders, and online platforms.
  • One of Truepic’s innovative strategic partners, the UN Capital Development Fund (another participant in the roundtable), has been testing the possibility of using this technology for monitoring and evaluation of development projects. For example, one Truepic image tracks the date, time, and geolocation of the latest progress of a factory in Uganda.
  • Controlled Capture requires Wifi or at least 3G/4G connectivity to fully authenticate images/video and write them to the public blockchain, which can be a challenge in low connectivity instances, for example in least-developed countries for UNCDF. 
  • As a workaround to connectivity issues, Truepic’s partners have used satellite Internet connections – such as a Thuraya or Iridium device – to successfully capture verified images anywhere.
  • Public blockchain – Truepic is currently using two different public blockchains, testing cost versus time in an effort to continually shorten the time from capture to closing chain of custody (currently around 8-12 seconds). 
  • Cost – The blockchain component is not actually too expensive; the heaviest investment is in the computer vision technology used to authenticate the images/video, for example to detect rebroadcasting, as in taking a picture of a picture to pass off the metadata.
  • Image rights belong to the owner – Truepic does not have rights over the image/video, but it keeps a copy on its servers in case the user’s phone/tablet is lost, stolen, or broken, and, most importantly, so that Truepic can produce the original image on its verification page when it is shared or disseminated publicly.
  • Court + evidentiary value: the technology and public-facing verification pages are designed to meet the highest evidentiary standards. 
    • Tested in courts; currently being tested at the international level, but specifics cannot be disclosed for confidentiality reasons.
  • Privacy and security are key priorities, especially for working in conflict zones, such as Syria. Truepic does not use 2-step authentication because the technology is focused on authenticating the images/video; it is not relevant who the source is and this way it keeps the source as anonymous as possible. Truepic works with its partners to educate on best practices to maintain high levels of anonymity in any scenario. 
  • The biggest challenge is usage by implementing partners – the platform is very easy to use, but driving the behavioral change needed to adopt it has been difficult.
    • Another challenge: when Truepic brings the solution to an implementer, the implementer says to get the donor to integrate it into their RFP scopes; the donors then recommend speaking to implementing partners.
  • Storage capacity issues? Storage is not currently a problem; Truepic has plans in place to address any storage issues that may arise with scale. 
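Truepic’s immutable record can be understood as anchoring a cryptographic fingerprint of each file, rather than the file itself, on a public chain. The sketch below illustrates that idea with SHA-256; it is an assumption for illustration, not a description of Truepic’s actual implementation.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of an image's raw bytes. Any single-bit change
    to the file produces a completely different digest."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for an original image and a subtly edited copy.
original = b"\x89PNG...raw image bytes..."
edited   = b"\x89PNG...raw image bytes.!."

# It is the digest, not the image, that would be written to a public
# blockchain; later, anyone can re-hash the image they were given and
# compare it against the on-chain value to detect manipulation.
assert fingerprint(original) == fingerprint(original)  # deterministic
assert fingerprint(original) != fingerprint(edited)    # edit detected
```

Because a digest is only 32 bytes regardless of file size, anchoring it on-chain stays cheap, which is consistent with the roundtable’s point that the blockchain component is not the expensive part of such a system.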

How did implementers measure success in their blockchain pilots?

  • Measurement was both quantitative and qualitative 
  • The organizations worked with clients to ensure people who needed the MEL were able to access and use it
  • Concerns with publicizing information or difficulties with NDAs were handled on a case-by-case basis

The MEL space is an excellent place to have a conversation about the use of blockchain for international development – many aspects of MEL hinge on the need for immutability (in record keeping), transparency (in the expenditure and impact of funds) and security (in the data and the identities of implementers and beneficiaries). Many use cases in developing countries and for social impact have been documented (see Stanford report Blockchain for Social Impact, Moving Beyond the Hype). (Editor’s note: see also Blockchain and Distributed Ledger Technologies in the Humanitarian Sector and Distributed Ledger Identification Systems in the Humanitarian Sector).

The original search for evidence on the impact of blockchain sought a level of data fidelity that is difficult to capture and validate, even under the least challenging circumstances. Not finding it at that time, the research team sought the next best solution, which was not to discount the technology, but to suggest ways to cope with the knowledge gaps they encountered by recommending a learning agenda. The roundtable helped to stimulate robust conversation of the three case studies, contributing to that learning agenda.

Most importantly, the experience highlighted several interesting takeaways about innovation in public-private partnerships more broadly: 

  • The initial MERL Tech session publicly and transparently drew attention to the gaps identified from the researchers’ thirty-thousand-foot view of evaluating innovation.
  • This transparency drew out engagement and collaboration between and amongst those best-positioned to move quickly and calibrate effectively with the government’s needs: the private sector. 
  • This small discussion that focused on the utility and promise of blockchain highlighted the broader role of government (as funder/buyer/donor) in both providing the problem statement and anchoring the non-governmental, private sector, and civil society’s strengths and capabilities. 

One year later…

So, a year after the much-debated blockchain blog post, what has changed? A lot. There is a growing body of reporting that adds to the lessons-learned literature and practical insights from projects powered or supported by blockchain technology. The question remains: do we have any greater documentation or evidence of the results blockchain was purported to have achieved? It seems that while reporting has improved, it still has a long way to go. 

It’s worth pointing out that the international development industry, with far more experts and funding dedicated to working on improving MERL than emerging tech companies, also has some distance to go in meeting its own evidence standards.  Fortunately, the volume and frequency of hype seems to have decreased (or perhaps the news cycle has simply moved on?), thereby leaving blockchain (and its investors and developers) the space they need to refine the technology.

In closing, we, like the co-authors of the 2018 post, remain optimistic that blockchain, a still emerging technology, will be given the time and space needed to mature and prove its potential. And, whether you believe in “crypto-winter” or not, hopefully the lull in the hype cycle will prove to be the breathing space that blockchain needs to keep evolving in a productive direction.

Author Bios

Shailee Adinolfi: Shailee works on Public Sector solutions at ConsenSys, a global blockchain technology company building the infrastructure, applications, and practices that enable a decentralized world. She has 20 years of experience at the intersection of technology, financial inclusion, trade, and government, including 11 years on USAID funded projects in Africa, Asia and the Middle East.

John Burg: John was a co-author on the original MERL Tech DC 2018 blog, referenced in this blog. He is an international development professional with almost 20 years of cross-sectoral experience across 17 countries in six global regions. He enjoys following the impact of emerging technology in international development contexts.

Tara Vassefi: Tara is Truepic’s Washington Director of Strategic Initiatives. Her background is as a human rights lawyer where she worked on optimizing the use of digital evidence and understanding how the latest technologies are used and weighed in courts around the world. 

Four Reflections on the 2019 MERL Tech Dashboards Competition

by Amanda Makulec, Excella Labs. This post first appeared here.

Data visualization (viz) has come a long way in our MERL Tech community. Four years ago the conversation was around “so you think you want a dashboard?” which evolved to a debate on dashboards as the silver bullet solution (spoiler: they’re not). Fast forward to 2019, when we had the first plenary competition of dashboard designs on the main stage!

Wayan Vota and Linda Raftree, MERL Tech Organizers, were kind enough to invite me to be a judge for the dashboard competition. Let me say: judging is far less stressful than presenting. Having spoken at MERL Tech every year on a data viz topic since 2015, it felt novel to not be frantically reviewing slides the morning of the conference.

The competition sparked some reflections on how we’ve grown and where we can continue to improve as we use data visualization as one item in our MERL toolbox.

1 – We’ve moved beyond conversations about ‘pretty’ and are talking about how people use our dashboards.

Thankfully, our judging criteria and final selection were not limited to which dashboard was the most beautiful. Instead, we focused on the goal, how the data was structured, why the design was chosen, and the impact it created.

One of the best stories from the stage came from DAI’s Carmen Tedesco (one of three competition winners), who demoed a highly visual interface that even included custom spatial displays of how safe girls felt in different locations throughout a school. When the team demoed the dashboard to their Chief of Party, he was underwhelmed… because he was colorblind and couldn’t make sense of many of the visuals. They pivoted, added more tabular, text-focused, grayscale views, and the team was thrilled.

Carmen Tedesco presents a dashboard used by a USAID-funded education project in Honduras. Image from Siobhan Green: https://twitter.com/siobhangreen/status/1169675846761758724

Having a competition judged on impact, not just display, matters. What gets measured gets done, right? We need to reward and encourage the design and development of data visualization that has a purpose and helps someone do something – whether it’s raising awareness, making a decision, or something else – not just creating charts for the sake of telling a donor that we have a dashboard.

2 – Our conversations about data visualization need to be anchored in larger dialogues about data culture and data literacy.

We need to continue to move beyond talking about what we’re building and focus on for whom, why, and what else is needed for the visualizations to be used.

Creating a “data culture” on a small project team is complicated. In a large global organization or slow-to-change government agency, it can feel impossible. Making data visual, nurturing that skillset within a team, and building a culture of data visualization is one part of the puzzle, but we need champions outside of the data and M&E (monitoring and evaluation) teams who support that organizational change. A Thursday morning MERL Tech session dug into eight dimensions of data readiness, all of which are critical to having dashboards actually get used – learn more about this work here.

Village Enterprise’s winning dashboard was simple in design, constructed of various bar charts on enterprise performance, but was tailored to different user roles to create customized displays. By serving up the data filtered to a specific user level, they encourage adoption and use instead of requiring a heavy mental load from users to filter to what they need.
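The role-based filtering approach Village Enterprise described can be sketched in a few lines of Python. This is a minimal illustration, not their actual implementation; all field names, roles, and data values are hypothetical:

```python
# Hypothetical enterprise performance records; field names are illustrative only.
RECORDS = [
    {"region": "North", "mentor": "Amina",  "revenue": 120},
    {"region": "North", "mentor": "Joseph", "revenue": 95},
    {"region": "South", "mentor": "Amina",  "revenue": 140},
    {"region": "South", "mentor": "Grace",  "revenue": 80},
]

# Each role maps to a predicate defining which rows are "their level".
ROLE_FILTERS = {
    "field_officer":    lambda row, user: row["mentor"] == user["name"],
    "regional_manager": lambda row, user: row["region"] == user["region"],
    "director":         lambda row, user: True,  # directors see everything
}

def dashboard_view(records, user):
    """Return only the rows this user's role entitles them to see,
    so the dashboard never asks them to do the filtering themselves."""
    keep = ROLE_FILTERS[user["role"]]
    return [row for row in records if keep(row, user)]

# A regional manager for "South" sees only their region's enterprises.
print(dashboard_view(RECORDS, {"role": "regional_manager", "region": "South"}))
```

The design point is that the role-to-filter mapping lives in one place, so supporting a new user level means adding one entry rather than reworking every chart.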

3 – Our data dashboards look far more diverse in scope, purpose, and design than the cluttered widgets of early days.

The three winners we picked were diverse in their project goals and displays, including a complex map, a PowerBI project dashboard, and a simple interface of bar charts designed for various user levels on local enterprise success metrics.

One of the winners – Fraym – was a complex, interactive map display allowing users to zoom in to the square kilometer level. Layers for various metrics, from energy to health, can be turned on or off depending on the use case. Huge volumes of data had to be ingested, including both spatial and quantitative datasets, to make the UI possible.

In contrast, the People’s Choice winner wasn’t a quantitative dashboard of charts and maps. Matter of Focus’ OutNav tool instead makes the certainty around elements of a theory of change visual, uses visual encodings in the form of color, saturation, and layout within a workflow, and helps organizations show where they’ve contributed to change.

Seeing the diversity of displays, I’m hopeful that we’re moving away from one-size-fits-all solutions or reliance on a single tech stack (whether Excel, Tableau, PowerBI or something else) and continuing to focus more on crafting products that solve problems for someone, which may require us to continue to expand our horizons regarding the tools and designs we use.

4 – Design still matters though, and data and design nerds should collaborate more often.

That said, there remain huge opportunities for more design in our data displays. Last year, I gave a MERL Tech lightning talk on why no one is using your dashboard, which focused on the need for more integration of design principles in our data visualization development, and those principles still resonate today.

Principles from graphic design, UX, and other disciplines can take a specific visualization from good to great – the more data nerds and designers collaborate, the better (IMHO). Otherwise, we’ll continue an epidemic of dashboards, many of which are tools designed to do ALL THE THINGS without being tailored enough to be usable by the most important audiences.

An invitation to join the Data Viz Society

If you’re interested in more discourse around data viz, consider joining the Data Viz Society (DVS) and connecting with more than 8,000 members from around the globe who have joined since we launched in February (it’s free!).

DVS connects visualization enthusiasts across disciplines, tech stacks, and expertise, and aims to collect and establish best practices, fostering a community that supports members as they grow and develop data visualization skills.

We (I’m the volunteer Operations Director) have a vibrant Slack workspace packed with topic and location channels (you’ll get an invite when you join), two-week long moderated Topics in DataViz conversations, data viz challenges, our journal (Nightingale), and more.

More on ways to get involved in this thread – including our data viz practitioner survey results challenge closing 30 September 2019 that has some fabulous cash prizes for your data viz submissions!

We’re actively looking for more diversity in our geographic representation, and would particularly welcome voices from countries outside of North America. A recent conversation about data viz in LMICs (low and middle income countries) was primarily voices from headquarters staff – we’d love to hear more from the field.

I can’t wait to see what the data viz conversations are at MERL Tech 2020!

Wrapping up MERL Tech DC

On September 6, we wrapped up three days of learning, reflecting, debating and sharing at MERL Tech DC. The conference kicked off with four pre-workshops on September 4: Big Data and Evaluation; Text Analytics; Spatial Statistics; and Responsible Data. Then, on September 5-6, we had our regular two-day conference, including opening talks from Tariq Khokhar, The Rockefeller Foundation; and Yvonne MacPherson, BBC Media Action; one-hour sessions, two-hour sessions, lightning talks, a dashboard contest, a plenary session and two happy hour events.

This year’s theme was “The State of the Field” of MERL Tech and we aimed to explore what we as a field know about our work and what gaps remain in the evidence base. Conference strands included: Tech and Traditional MERL; Data, Data, Data; Emerging Approaches; and The Future of MERL.

Zach Tilton, Western Michigan University; Kerry Bruce, Clear Outcomes; and Alexandra Robinson, Moonshot Global; update the plenary on the State of the Field Research that MERL Tech has undertaken over the past year. Photo by Christopher Neu.
Tariq Khokhar of The Rockefeller Foundation on “What Next for Data Science in Development? Photo by Christopher Neu.
Participants checking out what session to attend next. Photo by Christopher Neu.
Silvia Salinas, Strategic Director FuturaLab; Veronica Olazabal, The Rockefeller Foundation; and Adeline Sibanda, South to South Evaluation Cooperation; talk in plenary about Decolonizing Data and Technology, whether we are designing evaluations within a colonial mindset, and the need to disrupt our own minds. Photo by Christopher Neu.
What is holding women back from embracing tech in development? Patricia Mechael, HealthEnabled; Carmen Tedesco, DAI; Jaclyn Carlsen, USAID; and Priyanka Pathak, Samaj Studio; speak at their “Confidence not Competence” panel on women in tech. Photo by Christopher Neu.
Reid Porter, DevResults; Vidya Mahadevan, Bluesquare; Christopher Robert, Dobility; and Sherri Haas, Management Sciences for Health; go beyond “Make versus Buy” in a discussion on how to bridge the MERL – Tech gap. Photo by Christopher Neu.
Participants had plenty of comments and questions as well. Photo by Christopher Neu.
Drones, machine learning, text analytics, and more. Ariel Frankel, Clear Outcomes, facilitates a group in the session on Emerging MERL approaches. Photo by Christopher Neu.
The Principles for Digital Development have been heavily adopted by the MERL Tech sector as a standard for Digital Development. Allana Nelson, DIAL, shares thoughts on how the Principles can be used as an evaluative tool. Photo by Christopher Neu.
Kate Sciafe Diaz, TechnoServe; explains the “Marie Kondo” approach to MERL Tech in her Lightning Talk: “Does Your Tech Spark Joy? The Minimalist’s Approach to MERL Tech.” Photo by Christopher Neu.

In addition to learning and sharing, one of our main goals at MERL Tech is to create community. “I didn’t know there were other people working on the same thing as I am!” and “This MERL Tech conference is like therapy!” were some of the things we heard on Friday night as we closed down.

Stay tuned for blog posts about sessions and overall impressions, as well as our conference report once feedback surveys are in!

MERL Tech DC Session Ideas are due Monday, Apr 22!

MERL Tech is coming up in September 2019, and there are only a few days left to get your session ideas in for consideration! We’re actively seeking practitioners in monitoring, evaluation, research, learning, data science, technology (and other related areas) to facilitate every session.

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. Submit your session ideas by midnight ET on April 22, 2019. You will hear back from us by May 20 and, if selected, you will be asked to submit the final session title, summary and outline by June 17.

Submit your session ideas here by April 22, midnight ET

This year’s conference theme is MERL Tech: Taking Stock

Conference strands include:

Tech and traditional MERL:  How is digital technology enabling us to do what we’ve always done, but better (consultation, design, community engagement, data collection and analysis, databases, feedback, knowledge management)? What case studies can be shared to help the wider sector learn and grow? What kinks do we still need to work out? What evidence base exists that can support us to identify good practices? What lessons have we learned? How can we share these lessons and/or skills with the wider community?

Data, data, and more data: How are new forms and sources of data allowing MERL practitioners to enhance their work? How are MERL Practitioners using online platforms, big data, digitized administrative data, artificial intelligence, machine learning, sensors, drones? What does that mean for the ways that we conduct MERL and for who conducts MERL? What concerns are there about how these new forms and sources of data are being used and how can we address them? What evidence shows that these new forms and sources of data are improving MERL (or not improving MERL)? What good practices can inform how we use new forms and sources of data? What skills can be strengthened and shared with the wider MERL community to achieve more with data?

Emerging tools and approaches: What can we do now that we’ve never done before? What new tools and approaches are enabling MERL practitioners to go the extra mile? Is there a use case for blockchain? What about facial recognition and sentiment analysis in MERL? What are the capabilities of these tools and approaches? What early cases or evidence is there to indicate their promise? What ideas are taking shape that should be tried and tested in the sector? What skills can be shared to enable others to explore these tools and approaches? What are the ethical implications of some of these emerging technological capabilities?

The Future of MERL: Where should we be going and what should the future of MERL look like? What does the state of the sector, of digital data, of technology, and of the world in which we live mean for an ideal future for the MERL sector? Where do we need to build stronger bridges for improved MERL? How should we partner and with whom? Where should investments be taking place to enhance MERL practices, skills and capacities? How will we continue to improve local ownership, diversity, inclusion and ethics in technology-enabled MERL? What wider changes need to happen in the sector to enable responsible, effective, inclusive and modern MERL?

Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines. Please consider these in your session proposals and in how you are choosing your speakers and facilitators.

Submit your session ideas now!

MERL Tech is dedicated to creating a safe, inclusive, welcoming and harassment-free experience for everyone. Please review our Code of Conduct. Session submissions are reviewed and selected by our steering committee.

Join us for MERL Tech DC, Sept 5-6th!

MERL Tech DC: Taking Stock

September 5-6, 2019

FHI 360 Academy Hall, 8th Floor
1825 Connecticut Avenue NW
Washington, DC 20009

We gathered at the first MERL Tech Conference in 2014 to discuss how technology was enabling the field of monitoring, evaluation, research and learning (MERL). Since then, rapid advances in technology and data have altered how most MERL practitioners conceive of and carry out their work. New media and ICTs have permeated the field to the point where most of us can’t imagine conducting MERL without the aid of digital devices and digital data.

The rosy picture of the digital data revolution and an expanded capacity for decision-making based on digital data and ICTs has been clouded, however, with legitimate questions about how new technologies, devices, and platforms — and the data they generate — can lead to unintended negative consequences or be used to harm individuals, groups and societies.

Join us in Washington, DC, on September 5-6 for this year’s MERL Tech Conference where we’ll be taking stock of changes in the space since 2014; showcasing promising technologies, ideas and case studies; sharing learning and challenges; debating ideas and approaches; and sketching out a vision for an ideal MERL future and the steps we need to take to get there.

Conference strands:

Tech and traditional MERL:  How is digital technology enabling us to do what we’ve always done, but better (consultation, design, community engagement, data collection and analysis, databases, feedback, knowledge management)? What case studies can be shared to help the wider sector learn and grow? What kinks do we still need to work out? What evidence base exists that can support us to identify good practices? What lessons have we learned? How can we share these lessons and/or skills with the wider community?

Data, data, and more data: How are new forms and sources of data allowing MERL practitioners to enhance their work? How are MERL Practitioners using online platforms, big data, digitized administrative data, artificial intelligence, machine learning, sensors, drones? What does that mean for the ways that we conduct MERL and for who conducts MERL? What concerns are there about how these new forms and sources of data are being used and how can we address them? What evidence shows that these new forms and sources of data are improving MERL (or not improving MERL)? What good practices can inform how we use new forms and sources of data? What skills can be strengthened and shared with the wider MERL community to achieve more with data?

Emerging tools and approaches: What can we do now that we’ve never done before? What new tools and approaches are enabling MERL practitioners to go the extra mile? Is there a use case for blockchain? What about facial recognition and sentiment analysis in MERL? What are the capabilities of these tools and approaches? What early cases or evidence is there to indicate their promise? What ideas are taking shape that should be tried and tested in the sector? What skills can be shared to enable others to explore these tools and approaches? What are the ethical implications of some of these emerging technological capabilities?

The Future of MERL: Where should we be going and what should the future of MERL look like? What does the state of the sector, of digital data, of technology, and of the world in which we live mean for an ideal future for the MERL sector? Where do we need to build stronger bridges for improved MERL? How should we partner and with whom? Where should investments be taking place to enhance MERL practices, skills and capacities? How will we continue to improve local ownership, diversity, inclusion and ethics in technology-enabled MERL? What wider changes need to happen in the sector to enable responsible, effective, inclusive and modern MERL?

Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines.

Submit your session ideas, register to attend the conference, or reserve a demo table for MERL Tech DC now!

You’ll join some of the brightest minds working on MERL across a wide range of disciplines – evaluators, development and humanitarian MERL practitioners, small and large non-profit organizations, government and foundations, data scientists and analysts, consulting firms and contractors, technology developers, and data ethicists – for 2 days of in-depth sharing and exploration of what’s been happening across this multidisciplinary field and where we should be heading.

MERL for Blockchain Interventions: Integrating MERL into Token Design

Guest post by Michael Cooper. Mike is a Senior Social Scientist at Emergence who advises foreign assistance funders, service providers and evaluators on blockchain applications. He can be reached at emergence.cooper@gmail.com 

Tokens Could be Our Focus

There is no real evidence base about what does and does not work when applying blockchain technology to interventions seeking social impact. Most current blockchain interventions are driven by developers (programmers) and visionary entrepreneurs. There is little thinking in current blockchain interventions about designing for “social” impact (there is overabundant trust in the technology to achieve the outcomes and little focus on the humans interacting with it) and little integration of relevant evidence from behavioral economics, behavior change design, human-centered design, etc.

To build the needed evidence base, Monitoring, Evaluation, Research and Learning (MERL) practitioners will have to get to know not only the broad strokes of blockchain technology but also the specifics of token design and tokenomics (the political economics of tokenized ecosystems).  Token design could become the focal point for MERL on blockchain interventions since:

  • The vast majority, if not all, of blockchain interventions will involve some type of desired behavior change
  • The token provides the link between the ledger (which is the blockchain) and the social ecosystem created by the token in which the behavior change is meant to happen
  • Hence the token is the “nudge” meant to leverage behavior change in the social ecosystem while governing the transactions on the blockchain ledger. 

(While this blog will focus on these points, it will not go into a full discussion of what tokens are and how they create ecosystems. There are some very good resources that do, which you can review at your leisure. The Complexity Institute has published a book exploring the various attributes of complexity and the main themes of tokenomics, while Outlier Ventures has published what I consider to be the best guidance on token design. The Outlier Ventures guidance contains many of the tools MERL practitioners will be familiar with (problem analysis, stakeholder mapping, etc.) and should be consulted.) 

Hence, by understanding token design and its requirements, and mapping it against our current MERL thinking, tools and practices, we can develop new thinking and tools that could be the starting point for building our much-needed evidence base. 

What is a “blockchain intervention”? 

As MERL practitioners, we roughly define an “intervention” as a group of inputs and activities meant to leverage outcomes within a given ecosystem.  “Interventions” are what we are usually mandated to assess, evaluate and help improve.

When thinking about MERL and blockchain, it is useful to think of two categories of “blockchain interventions”. 

1) Integrating the blockchain into MERL data collection, entry, management, analysis or dissemination practices and

2) MERL strategies for interventions using the blockchain in some way, shape, or form. 

Here we will focus on #2, and in so doing demonstrate that while the blockchain is an innovative, potentially disruptive technology, evaluating its applications for social outcomes is still a matter of assessing behavior change against the dimensions of intervention design. 

Designing for Behavior Change

We generally design interventions (programs, projects, activities) to “nudge” a certain type of behavior (stated as outcomes in a theory of change) amongst a certain population (beneficiaries, stakeholders, etc.).  We should integrate mechanisms of change into our intervention designs, but often do not for a variety of reasons (lack of understanding, lack of resources, lack of political will, etc.).  This lack of due diligence in design is partly responsible for the lack of evidence around what works and what does not work in our current universe of interventions. 

Enter blockchain technology, which, as MERL practitioners, we will be responsible for assessing in the foreseeable future.  Hence, we will need to determine how interventions using the blockchain attempt to nudge behavior, what behaviors they seek to nudge, amongst whom, when, and how well the design of the intervention accomplishes these functions.  To do that, we will need to better understand how blockchains use tokens to nudge behavior. 

The Centrality of the Token

We have all used tokens before.  Stores issue coupons that can only be used at those stores, we get receipts for groceries as soon as we pay, and arcades make you buy tokens instead of just using quarters.  The coupons and arcade tokens can be considered utility tokens, meaning that they can only be used in a specific “ecosystem” – in this case a store and an arcade, respectively.  The grocery store receipt is a token because it demonstrates ownership: if you are stopped on the way out of the store and you show your receipt, you are demonstrating that you now have ownership rights over the foodstuffs in your bag. 

Whether you realize it or not at the time, these tokens are trying to nudge your behavior.  The store gives you the coupon because the more time you spend in their store trying to redeem coupons, the greater the likelihood that you will spend additional money there.  The grocery store wants you to pay for all your groceries, while the arcade wants you to buy more tokens than you end up using. 

If needed, we could design MERL strategies to assess how well these different tokens nudged the desired behaviors. We would do this, in part, by thinking about how each token is designed relative to the behavior it wants to encourage (e.g., the value, frequency, and duration of coupons).

Thinking about these ecosystems and their respective tokens will help us understand the interdependence between 1) the blockchain as a ledger that records transactions, 2) the token that captures the governance structures for how transactions are stored on the blockchain ledger as well as the incentive models for 3) the mechanisms of change in the social eco-system created by the token. 

Figure #1:  The inter-relationship between the blockchain (ledger), token and social eco-system

Token Design as Intervention Design  

Just as we assess theories of change and their mechanisms against intervention design, we will assess blockchain based interventions against their token design in much the same way.  This is because blockchain tokens capture all the design dimensions of an intervention; namely the problem to be solved, stakeholders and how they influence the problem (and thus the solution), stakeholder attributes (as mapped out in something like a stakeholder analysis), the beneficiary population, assumptions/risks, etc. 

Outlier Ventures has adapted what they call a Token Utility Canvas as a milestone in their token design process.  The canvas can be correlated to the various dimensions of an evaluability assessment tool (I am using the evaluability assessment tool as a stand-in for the necessary dimensions of an intervention design, since it assesses the health of all the components of that design).  The canvas also captures many of the problem diagnostic, stakeholder assessment, and other due diligence tools that will be familiar to MERL practitioners from intervention design.  Hence token design could largely be thought of as intervention design and evaluated as such.

Table #1: Comparing Token Design with Dimensions of Program Design (as represented in an Evaluability Assessment)

This table is not meant to be exhaustive and not all of the fields will be explained here but in general, it could be a useful starting point in developing our own thinking and tools for this emerging space. 

The Token as a Tool for Behavior Change

Coming up with a taxonomy of blockchain interventions and relevant tokens is a necessary task, but any blockchain intervention that needs to nudge behavior will have to have a token.

Consider supply chain management.  Blockchains are increasingly being used as the ledger system for supply chain management.  Supply chains typically comprise numerous actors packaging, shipping, receiving, and applying quality control protocols to various goods, each with their own ledgers of the relevant goods as they snake their way through the supply chain.  This leads to ample opportunities for fraud and theft, and high costs associated with reconciling the different ledgers of the different actors at different points in the supply chain.  With the blockchain as the common ledger system, many of these costs diminish: a single ledger with trusted data is used, so transactions (shipping, receiving, repackaging, etc.) happen more seamlessly and reconciliation costs drop.

However, even in “simple” applications such as this, there are behavior change implications. We still want the supply chain actors to perform their functions in a manner that adds value to the supply chain ecosystem as a whole, rewarding them for good behavior within the ecosystem and punishing them for bad.

What if a shipper trying to pass on a faulty product had already deposited a certain amount of currency in an escrow account (housed in a smart contract on the blockchain)? If they are found attempting a prohibited behavior (passing on faulty products), they automatically surrender a certain amount from that escrow account.  How much should be deposited in the escrow account?  What should the ratio be between the size of the penalty and the severity of the undesired action?  These are behavioral questions about a mechanism of change; they are dimensions of current intervention designs and will be increasingly relevant in token design.
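The escrow-and-penalty logic could be sketched as follows. This is a simulation in Python rather than actual smart contract code, and the penalty ratio is an assumed parameter, which is exactly the kind of value token design would need to calibrate:

```python
class EscrowContract:
    """Toy model of escrow-and-penalty logic a smart contract could encode."""

    PENALTY_RATIO = 2.0  # assumed: penalty charged per unit of damage caused

    def __init__(self):
        self.deposits = {}

    def deposit(self, shipper, amount):
        # Shippers stake currency up front as a bond on good behavior.
        self.deposits[shipper] = self.deposits.get(shipper, 0.0) + amount

    def report_violation(self, shipper, damage):
        # When a validator reports a prohibited behavior (e.g. passing on
        # faulty goods), part of the deposit is forfeited automatically,
        # capped at whatever remains in escrow.
        penalty = min(self.deposits.get(shipper, 0.0),
                      damage * self.PENALTY_RATIO)
        self.deposits[shipper] -= penalty
        return penalty

escrow = EscrowContract()
escrow.deposit("ShipFast", 100.0)
print(escrow.report_violation("ShipFast", 10.0))  # → 20.0
print(escrow.deposits["ShipFast"])                # → 80.0
```

Choosing `PENALTY_RATIO`, the minimum deposit, and the cap are all behavior change decisions: set the penalty too low and cheating remains profitable; set it too high and honest actors may refuse to participate.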

The point of this is to demonstrate that even “benign” applications of the blockchain, like supply chain management, have behavior change implications and thus require good due diligence in token design.

There is much that could be said about the validation function in this process: who validates that bad behavior has taken place and should be punished, or that good behavior should be rewarded?  There are lessons to be learned from results-based contracting and the role of the validator in such a contracting vehicle.  This validating function will need to be thought through in terms of what can be automated and what needs a “human touch” (and who is responsible, what methods they should use, etc.).   

Implications for MERL

If tokens are fundamental to MERL strategies for blockchain interventions, there are several critical implications:

  • MERL practitioners will need to be heavily integrated into the due diligence processes and tools for token design
  • MERL strategies will need to be highly formative, if not developmental, in facilitating the timeliness and overall effectiveness of the feedback loops informing token design
  • New thinking and tools will need to be developed to assess the relationships between blockchain governance, token design and mechanisms of change in the resulting social ecosystem. 

The less MERL practitioners are integrated into the due diligence of token design, the higher the opportunity cost for impact and “learning.”  This is because the cost of adapting a token design is relatively low compared to adapting current social interventions, partly due to the ability to integrate automated feedback. 

Blockchain-based interventions present us with significant learning opportunities, because the technology itself can serve as a data collection and management tool for learning about what does and does not work.  Feedback from an appropriate MERL strategy could inform decisions about token design that are then coded into the token on an iterative basis.  For example, as stakeholders’ incentives shift (e.g., supply chain shippers incur new costs and their value proposition changes), the token can be adapted in a timely fashion, so long as the MERL feedback that informs the design is accurate.

There is a need to determine which components of these feedback loops can be handled by automated functions and which require a “human touch”.  For example, which dimensions of token design can be informed by smart infrastructure (e.g., temperature gauges on shipping containers in the supply chain) versus household surveys completed by enumerators?  This will be a task to complete and iteratively improve, starting with initial token design and lasting through the lifecycle of the intervention.  Token design dimensions, outlined in the Token Utility Canvas, and the decisions they require will need to be translated into MERL questions matched to the best strategy for answering them, automated or human, much as we do in current interventions. 

Many of the due diligence tools we currently use in both intervention and evaluation design (stakeholder mapping, problem analysis, cost-benefit analysis, value propositions, etc.) will need to be adapted to the types of relationships that exist within a tokenized ecosystem.  These include the relationships of influence between the social ecosystem and the blockchain ledger itself (or, more specifically, the governance of that ledger), as demonstrated in Figure 1.  

This could be our biggest priority as MERL practitioners.  Blockchain interventions could create incredible opportunities for social experimentation, but the need for human-centered due diligence (incentivizing humans toward positive behavior change) in token design is critical.  Over-reliance on technology to drive social outcomes is a well-evidenced pitfall, and one that could be avoided in blockchain-based solutions if the gap between technologists, social scientists, and practitioners can be bridged.    

Blockchain for International Development: Using a Learning Agenda to Address Knowledge Gaps

Guest post by John Burg, Christine Murphy, and Jean Paul Pétraud, international development professionals who presented a one-hour session at the MERL Tech DC 2018 conference on Sept. 7, 2018. Their presentation focused on creating a learning agenda to help MERL practitioners gauge the value of blockchain technology for development programming. Opinions and work expressed here are their own.

We attended the MERL Tech DC 2018 conference held on Sept. 7, 2018, and led a session on the creation of a learning agenda to help MERL practitioners gauge the value of blockchain technology for development programming.

As a trio of monitoring, evaluation, research, and learning (MERL) practitioners in international development, we are keenly aware of the quickly growing interest in blockchain technology. Blockchain is a type of distributed database that creates a nearly unalterable record of cryptographically secure peer-to-peer transactions without a central, trusted administrator. While it was originally designed for digital financial transactions, it is also being applied to a wide variety of interventions, including land registries, humanitarian aid disbursement in refugee camps, and evidence-driven education subsidies. International development actors, including government agencies, multilateral organizations, and think tanks, are looking at blockchain to improve effectiveness or efficiency in their work.

Naturally, as MERL practitioners, we wanted to learn more. Could this radically transparent, shared database managed by its users have important benefits for data collection, management, and use? As MERL practice evolves to better suit adaptive management, what role might blockchain play? For example, one inherent feature of blockchain is the unbreakable and traceable linkage between blocks of data. How might such a feature improve the efficiency or effectiveness of data collection, management, and use? What are the advantages of blockchain over other more commonly used technologies? To guide our learning, we started with an inquiry designed to help us determine if, and to what degree, the various features of blockchain add value to the practice of MERL. With our agenda established, we set out eagerly to find a blockchain case study to examine, with the goal of presenting our findings at the September 2018 MERL Tech DC conference.
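The “unbreakable and traceable linkages” property mentioned above can be demonstrated in a few lines: each block stores a hash of its own contents plus the previous block's hash, so altering any earlier record invalidates everything after it. This is a bare-bones sketch with invented example data, not production code:

```python
import hashlib
import json

def make_block(data, prev_hash):
    # Link each block to its predecessor by hashing the new data
    # together with the previous block's hash.
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    # A chain is valid only if every block's stored hash still matches
    # its contents and points at its predecessor.
    for i, block in enumerate(chain):
        expected = hashlib.sha256(json.dumps(
            {"data": block["data"], "prev_hash": block["prev_hash"]},
            sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("disbursement: 50 units to site A", "genesis")]
chain.append(make_block("disbursement: 30 units to site B", chain[-1]["hash"]))
print(verify(chain))   # → True

chain[0]["data"] = "disbursement: 500 units to site A"  # tamper with the record
print(verify(chain))   # → False
```

For MERL purposes, the interesting implication is that tampering is detectable rather than impossible: the data can still be wrong at entry, which is why questions about data quality at collection remain central.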

What we did

We documented 43 blockchain use-cases through internet searches, most of which were described with glowing claims like “operational costs… reduced up to 90%,” or with the assurance of “accurate and secure data capture and storage.” We found a proliferation of press releases, white papers, and persuasively written articles. However, we found no documentation or evidence of the results blockchain was purported to have achieved in these claims. We also did not find lessons learned or practical insights, as are available for other technologies in development.

We fared no better when we reached out directly to several blockchain firms, via email, phone, and in person. Not one was willing to share data on program results, MERL processes, or adaptive management for potential scale-up. Despite all the hype about how blockchain will bring unprecedented transparency to processes and operations in low-trust environments, the industry is itself opaque. From this, we determined that the lack of evidence supporting value claims of blockchain in the international development space is a critical gap for potential adopters.

What we learned

Blockchain firms supporting development pilots are not practicing what they preach — improving transparency — by sharing data and lessons learned about what is working, what isn’t working, and why. There are many generic decision trees and sales pitches available to convince development practitioners of the value blockchain will add to their work. But, there is a lack of detailed data about what happens when development interventions use blockchain technology.

Since the function of MERL is to bridge knowledge gaps and help decision-makers take action informed by evidence, we decided to explore the crucial questions MERL practitioners may ask before determining whether blockchain will add value to data collection, management, and use. More specifically, rather than a go/no-go decision tool, we propose using a learning agenda to probe the role of blockchain in data collection, data management, and data use at each stage of project implementation.

“Before you embark on that shiny blockchain project, you need to have a very clear idea of why you are using a blockchain.”
— Gideon Greenspan, Avoiding the Pointless Blockchain Project (2015)

Typically, “A learning agenda is a set of questions, assembled by an organization or team, that identifies what needs to be learned before a project can be planned and implemented.” The process of developing and finding answers to learning questions is most useful when it’s employed continuously throughout the duration of project implementation, so that changes can be made based on what is learned about changes in the project’s context, and to support the process of applying evidence to decision-making in adaptive management.

We explored various learning agenda questions for data collection, management, and use that should continue to be developed and answered throughout the project cycle. However, because the content of a learning agenda is highly context-dependent, we focused on general themes. Examples of questions that might be asked by beneficiaries, implementing partners, donors, and host-country governments include:

  • What could each of a project’s stakeholder groups gain from the use of blockchain across the stages of design and implementation, and, would the benefits of blockchain incentivize them to participate?
  • Can blockchain resolve trust or transparency issues between disparate stakeholder groups, e.g. to ensure that data reported represent reality, or that they are of sufficient quality for decision-making?
  • Are there less expensive, more appropriate, or easier-to-execute existing technologies that already meet each group’s MERL needs?
  • Are there unaddressed MERL management needs blockchain could help address, or capabilities blockchain offers that might inspire new and innovative thinking about what is done, and how it gets done?

This approach resonated with other MERL for development practitioners

We presented this approach to a diverse group of professionals at MERL Tech DC, including other MERL practitioners and IT support professionals, representing organizations from multilateral development banks to US-based NGOs. In the session, facilitated as a participatory roundtable, participants discussed how MERL professionals could use learning agendas to help their organizations decide whether blockchain is appropriate for an intervention design, as well as to guide learning during implementation and strengthen adaptive management.

Questions and issues raised by the session participants ranged widely, from how blockchain works to doubts that organizational leaders would have the risk appetite required to pilot blockchain when time and costs (financial and human resource) were unknown. Session participants demonstrated an intense interest in this topic and our approach. Our session ran over time, and side conversations continued into the corridors long after the session had ended.

Next Steps

Our approach, as it turns out, echoes others in the field who question whether the benefits of blockchain add value above and beyond existing technologies, or accrue to stakeholders beyond the donors that fund them. This trio of practitioners will continue to explore ways MERL professionals can help their teams learn about the benefits of blockchain technology for international development. But, in the end, it may turn out that the real value of blockchain is not the application of the technology itself, but its role as an impetus to question what we do, why we do it, and how we could do it better.

Creative Commons License
Blockchain for International Development: Using a Learning Agenda to Address Knowledge Gaps by John Burg, Christine Murphy, and Jean-Paul Petraud is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License