MERL Tech News

What a Difference a Year Makes: Contributing to the Blockchain Learning Agenda

by Shailee Adinolfi, John Burg and Tara Vassefi

In September 2018, a three-member team of international development professionals presented a session called “Blockchain Learning Agenda: Practical MERL Workshop” at MERL Tech DC. Following the session, the team published a blog post about the session stating that the authors had “… found no documentation or evidence of the results blockchain was purported to have achieved in these claims [of radical improvements]. [They] also did not find lessons learned or practical insights, as are available for other technologies in development.”

The blog post inspired a barrage of unanticipated discussion online. Unfortunately, some readers (and re-posters) misinterpreted the post as disparaging blockchain. In fact, the authors were simply suggesting ways to cope with the uncertainty inherent in piloting blockchain projects. Perhaps the most important outcome of the session and post, however, is that they motivated a coordinated response from several organizations that wanted to delve deeper into the blockchain learning agenda.

To do that, on March 5, 2019, Chemonics, Truepic, and ConsenSys hosted a roundtable titled “How to Successfully Apply Blockchain in International Development.” All three organizations are applying blockchain in different and complementary ways relevant to international development — including project monitoring, evaluation, and learning (MEL) innovations as well as back-end business systems. The roundtable enabled an open dialogue about how blockchain is being tested and leveraged to achieve better international development outcomes. The aim was to explore and engage with real case studies of blockchain in development and share lessons learned within a community of development practitioners in order to reduce the level of opacity surrounding this innovative and rapidly evolving technology.

Three case studies were highlighted:

1. “One-click Biodata Solution” by Chemonics 

  • Chemonics’ Blockchain for Development Solutions Lab designed and implemented a RegTech solution for the USAID foreign assistance and contracting space. The solution leveraged the blockchain-based identity platform created by BanQu to dramatically expedite and streamline the collection and verification of USAID biographical data sheets (biodatas), improve personal data protection, and reduce incidents of error and fraud in the hiring of professionals and consultants under USAID contracts.
  • Chemonics processes several thousand biodatas per year and accordingly devotes significant labor effort and cost to support the current paper-based workflow.
  • Chemonics’ technology partner, BanQu, used a private, permissioned, Ethereum-based blockchain to pilot a biodata solution.
  • Chemonics successfully piloted the solution with BanQu, resulting in 8 blockchain-based biodatas being fully processed in compliance with donor requirements.
  • Improved data protection was a priority for the pilot. One goal of the solution was to make it possible for individuals to maintain control over their back-up documentation, like passports, diplomas, and salary information, which could be shared temporarily with Chemonics through the use of an encrypted key, rather than having documentation emailed and saved to less secure corporate digital file systems.
  • Following the pilot, Chemonics determined through qualitative feedback that users across the biodata ecosystem found the blockchain solution easy to use and effective at reducing the level of effort required to complete biodatas.
  • Chemonics also compiled lessons learned, including refinements to the technical requirements, options to scale the solution, and additional user feedback and concerns about the technology, to inform decision-making around further biodata pilots.
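BanQu's platform internals are not public, but the owner-held-key pattern the pilot describes (back-up documents stored encrypted, readable only while the individual shares a key) can be sketched in a few lines. Everything below, including the toy counter-mode cipher, is illustrative only and is not the pilot's actual cryptography; a real system would use a vetted library.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: SHA-256 in counter mode.
    Illustration only; production systems should use a vetted
    library (e.g. libsodium or cryptography's Fernet)."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# The individual generates and keeps the key; the hiring
# organization stores only the ciphertext.
owner_key = secrets.token_bytes(32)
passport_scan = b"...passport scan bytes..."
stored_blob = keystream_xor(owner_key, passport_scan)

# Without the key the stored blob is unreadable; sharing the key
# (temporarily, for verification) makes the document recoverable.
assert stored_blob != passport_scan
assert keystream_xor(owner_key, stored_blob) == passport_scan
```

The point of the design is that the individual, not the organization's email inboxes or file shares, controls access to the underlying documents.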

2. Project i2i presented by ConsenSys

  • Problem Statement: 35% of the Filipino population is unbanked, and 56% lives in rural areas. The Philippine economy relies heavily on domestic remittances. Unionbank sought to partner with hundreds of rural banks that lacked access to the electronic banking services that larger commercial banks enjoy.
  • In 2017, in furtherance of the Central Bank of the Philippines’ national strategy for financial inclusion, the central banks of Singapore and the Philippines announced that they would collaborate on financial technology using the regulatory sandbox approach. This approach gives industry stakeholders the room and time to experiment before regulators enact potentially restrictive policies that could stifle innovation and growth. As part of the agreement, the central banks will share resources, best practices, and research, and collaborate to “elevate financial innovation” in both economies.
  • Solution design assumptions for Philippines context:
    • It can be easily operated and implemented with limited integration, even in low-tech settings;
    • It enables lower transaction time and lower transaction cost;
    • It enables more efficient operations for rural banks, including reduction of reconciliations and simplification of accounting processes.
  • Unionbank worked with ConsenSys and participating rural banks to create an interbank ledger with tokenization. The payment platform is private and Ethereum-based.
  • The initial pilot eliminated 20 steps from the process.
  • Technology partners: ConsenSys, Azure (Microsoft), Kaleido, Amazon Web Services.
  • As a follow-up to the i2i project, Unionbank partnered with Singapore-based OCBC Bank; the parties deployed the Adhara liquidity management and international payments platform for a blockchain-based international remittance pilot.
  • Potential for national and regional collaboration/network development.
  • For details on the i2i project, download the full case study here or watch the 4-minute video clip.

3. Controlled Capture presented by Truepic

  • Truepic is a technology company specializing in digital image and video authentication. Truepic’s Controlled Capture technology uses cutting-edge computer vision, AI, and cryptography to test images and video for signs of manipulation, authenticating only those that pass its rigorous verification tests. Through the public blockchain, Truepic creates an immutable record for each photo and video captured through this process, so that their authenticity can be proven to the highest evidentiary standards. This technology has been used in over 100 countries by citizen journalists, activists, international development organizations, NGOs, insurance companies, lenders, and online platforms.
  • One of Truepic’s innovative strategic partners, the UN Capital Development Fund (another participant in the roundtable), has been testing the possibility of using this technology for monitoring and evaluation of development projects. For example, one Truepic image tracked the date, time, and geolocation of the latest progress of a factory in Uganda.
  • Controlled Capture requires Wifi or at least 3G/4G connectivity to fully authenticate images/video and write them to the public blockchain, which can be a challenge in low-connectivity settings, for example in the least-developed countries where UNCDF works.
  • As a workaround to connectivity issues, Truepic’s partners have used satellite Internet connections, such as a Thuraya or Iridium device, to successfully capture verified images anywhere.
  • Public blockchain – Truepic currently uses two different public blockchains, testing cost versus time in an effort to continually shorten the window from capture to closing the chain of custody (currently around 8-12 seconds).
  • Cost – The blockchain component is not especially expensive; the heaviest investment is in the computer vision technology used to authenticate the images/video, for example to detect rebroadcasting (taking a picture of a picture to pass off the metadata).
  • Image rights belong to the owner – Truepic does not hold rights over the image/video but keeps a copy on its servers in case the user’s phone/tablet is lost, stolen, or broken, and, most importantly, so that Truepic can produce the original image on its verification page when it is shared or disseminated publicly.
  • Court and evidentiary value: the technology and public-facing verification pages are designed to meet the highest evidentiary standards.
    • Tested in courts; currently being tested at the international level, though specifics cannot be disclosed for confidentiality reasons.
  • Privacy and security are key priorities, especially for work in conflict zones such as Syria. Truepic does not use 2-step authentication because the technology focuses on authenticating the images/video; who the source is does not matter, which keeps the source as anonymous as possible. Truepic works with its partners to educate them on best practices for maintaining high levels of anonymity in any scenario.
  • The biggest challenge is uptake by implementing partners: the platform itself is easy to use, but the behavioral change required to adopt it has been hard to achieve.
    • Another challenge is circularity: bring the solution to an implementer, and the implementer says to get the donor to integrate it into their RFP scopes; the donors then recommend speaking to implementing partners.
  • Storage capacity issues? Storage is not currently a problem; Truepic has plans in place to address any storage issues that may arise with scale. 
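Truepic's Controlled Capture pipeline is proprietary, but the core pattern the bullets above describe (hash the image together with its capture metadata, anchor the digest to an append-only public ledger, and later re-derive and compare) can be sketched as follows. The hash-chained list below is a stand-in for a real public blockchain, and all names and values are illustrative:

```python
import hashlib
import json

def fingerprint(image_bytes: bytes, metadata: dict) -> str:
    """Commit to the image and its capture metadata (date, time,
    geolocation) in a single digest. Illustrative only; Truepic's
    actual scheme is proprietary."""
    record = {**metadata, "sha256": hashlib.sha256(image_bytes).hexdigest()}
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

# A minimal hash-chained ledger standing in for the public blockchain:
# each entry commits to the previous one, so past records cannot be
# altered without detection.
ledger = []

def anchor(digest: str) -> None:
    prev = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry_hash = hashlib.sha256((digest + prev).encode()).hexdigest()
    ledger.append({"digest": digest, "prev": prev, "entry_hash": entry_hash})

def verify(image_bytes: bytes, metadata: dict) -> bool:
    """Re-derive the fingerprint and check that it was anchored."""
    return any(e["digest"] == fingerprint(image_bytes, metadata) for e in ledger)

photo = b"...raw image bytes..."
meta = {"time": "2019-03-05T10:00:00Z", "lat": 0.35, "lon": 32.6}
anchor(fingerprint(photo, meta))

assert verify(photo, meta)                # the untouched image checks out
assert not verify(photo + b"\x00", meta)  # any alteration breaks the match
```

Because only digests go on chain, the image itself stays private, while anyone holding the original file and metadata can prove it matches the anchored record.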

How did implementers measure success in their blockchain pilots?

  • Measurement was both quantitative and qualitative 
  • The organizations worked with clients to ensure people who needed the MEL were able to access and use it
  • Concerns with publicizing information or difficulties with NDAs were handled on a case-by-case basis

The MEL space is an excellent place to have a conversation about the use of blockchain for international development – many aspects of MEL hinge on the need for immutability (in record keeping), transparency (in the expenditure and impact of funds) and security (in the data and the identities of implementers and beneficiaries). Many use cases in developing countries and for social impact have been documented (see Stanford report Blockchain for Social Impact, Moving Beyond the Hype). (Editor’s note: see also Blockchain and Distributed Ledger Technologies in the Humanitarian Sector and Distributed Ledger Identification Systems in the Humanitarian Sector).

The original search for evidence on the impact of blockchain sought a level of data fidelity that is difficult to capture and validate, even under the least challenging circumstances. Not finding it at that time, the research team sought the next best solution, which was not to discount the technology, but to suggest ways to cope with the knowledge gaps they encountered by recommending a learning agenda. The roundtable helped to stimulate robust conversation of the three case studies, contributing to that learning agenda.

Most importantly, the experience highlighted several interesting takeaways about innovation in public-private partnerships more broadly: 

  • The initial MERL Tech session publicly and transparently drew attention to the gaps identified from the researchers’ thirty-thousand-foot view of evaluating innovation.
  • This transparency drew out engagement and collaboration between and amongst those best-positioned to move quickly and calibrate effectively with the government’s needs: the private sector. 
  • This small discussion that focused on the utility and promise of blockchain highlighted the broader role of government (as funder/buyer/donor) in both providing the problem statement and anchoring the non-governmental, private sector, and civil society’s strengths and capabilities. 

One year later…

So, a year after the much-debated blockchain blogpost, what has changed? A lot. There is a growing body of reporting that adds to the lessons learned literature and practical insights from projects that were powered or supported by blockchain technology. The question remains: do we have any greater documentation or evidence of the results blockchain was purported to have achieved in these claims? It seems that while reporting has improved, it still has a long way to go. 

It’s worth pointing out that the international development industry, with far more experts and funding dedicated to working on improving MERL than emerging tech companies, also has some distance to go in meeting its own evidence standards.  Fortunately, the volume and frequency of hype seems to have decreased (or perhaps the news cycle has simply moved on?), thereby leaving blockchain (and its investors and developers) the space they need to refine the technology.

In closing, we, like the co-authors of the 2018 post, remain optimistic that blockchain, a still emerging technology, will be given the time and space needed to mature and prove its potential. And, whether you believe in “crypto-winter” or not, hopefully the lull in the hype cycle will prove to be the breathing space that blockchain needs to keep evolving in a productive direction.

Author Bios

Shailee Adinolfi: Shailee works on Public Sector solutions at ConsenSys, a global blockchain technology company building the infrastructure, applications, and practices that enable a decentralized world. She has 20 years of experience at the intersection of technology, financial inclusion, trade, and government, including 11 years on USAID funded projects in Africa, Asia and the Middle East.

John Burg: John was a co-author on the original MERL Tech DC 2018 blog, referenced in this blog. He is an international development professional with almost 20 years of cross-sectoral experience across 17 countries in six global regions. He enjoys following the impact of emerging technology in international development contexts.

Tara Vassefi: Tara is Truepic’s Washington Director of Strategic Initiatives. Her background is as a human rights lawyer where she worked on optimizing the use of digital evidence and understanding how the latest technologies are used and weighed in courts around the world. 

Four Reflections on the 2019 MERL Tech Dashboards Competition

by Amanda Makulec, Excella Labs. This post first appeared here.

Data visualization (viz) has come a long way in our MERL Tech community. Four years ago the conversation was around “so you think you want a dashboard?” which evolved to a debate on dashboards as the silver bullet solution (spoiler: they’re not). Fast forward to 2019, when we had the first plenary competition of dashboard designs on the main stage!

Wayan Vota and Linda Raftree, MERL Tech Organizers, were kind enough to invite me to be a judge for the dashboard competition. Let me say: judging is far less stressful than presenting. Having spoken at MERL Tech every year on a data viz topic since 2015, it felt novel to not be frantically reviewing slides the morning of the conference.

The competition sparked some reflections on how we’ve grown and where we can continue to improve as we use data visualization as one item in our MERL toolbox.

1 – We’ve moved beyond conversations about ‘pretty’ and are talking about how people use our dashboards.

Thankfully, our judging criteria and final selection were not limited to which dashboard was the most beautiful. Instead, we focused on the goal, how the data was structured, why the design was chosen, and the impact it created.

One of the best stories from the stage came from DAI’s Carmen Tedesco (one of three competition winners), who demoed a highly visual interface that even included custom spatial displays of how safe girls felt in different locations throughout a school. When the team demoed the dashboard to their Chief of Party, he was underwhelmed… because he was colorblind and couldn’t make sense of many of the visuals. They pivoted, added more tabular, text-focused, grayscale views, and the team was thrilled.

Carmen Tedesco presents a dashboard used by a USAID-funded education project in Honduras. Image from Siobhan Green: https://twitter.com/siobhangreen/status/1169675846761758724

Having a competition judged on impact, not just display, matters. What gets measured gets done, right? We need to reward and encourage the design and development of data visualization that has a purpose and helps someone do something – whether it’s raising awareness, making a decision, or something else – not just creating charts for the sake of telling a donor that we have a dashboard.

2 – Our conversations about data visualization need to be anchored in larger dialogues about data culture and data literacy.

We need to continue to move beyond talking about what we’re building and focus on for who, why, and what else is needed for the visualizations to be used.

Creating a “data culture” on a small project team is complicated. In a large global organization or slow-to-change government agency, it can feel impossible. Making data visual, nurturing that skillset within a team, and building a culture of data visualization is one part of the puzzle, but we need champions outside of the data and M&E (monitoring and evaluation) teams who support that organizational change. A Thursday morning MERL Tech session dug into eight dimensions of data readiness, all of which are critical to having dashboards actually get used – learn more about this work here.

Village Enterprise’s winning dashboard was simple in design, constructed of various bar charts on enterprise performance, but was tailored to different user roles to create customized displays. By serving up the data filtered to a specific user level, they encourage adoption and use instead of requiring a heavy mental load from users to filter to what they need.

3 – Our data dashboards look far more diverse in scope, purpose, and design than the cluttered widgets of early days.

The three winners we picked were diverse in their project goals and displays, including a complex map, a PowerBI project dashboard, and a simple interface of bar charts designed for various user levels on local enterprise success metrics.

One of the winners – Fraym – was a complex, interactive map display allowing users to zoom in to the square kilometer level. Layers for various metrics, from energy to health, can be turned on or off depending on the use case. Huge volumes of data had to be ingested, including both spatial and quantitative datasets, to make the UI possible.

In contrast, the People’s Choice winner wasn’t a quantitative dashboard of charts and maps. Matter of Focus’ OutNav tool instead makes the certainty around elements of theory of change visual, has visual encodings in the form of colors, saturation, and layout within a workflow, and helps organizations show where they’ve contributed to change.

Seeing the diversity of displays, I’m hopeful that we’re moving away from one-size-fits-all solutions or reliance on a single tech stack (whether Excel, Tableau, PowerBI or something else) and continuing to focus more on crafting products that solve problems for someone, which may require us to continue to expand our horizons regarding the tools and designs we use.

4 – Design still matters though, and data and design nerds should collaborate more often.

That said, there remain huge opportunities for more design in our data displays. Last year, I gave a MERL Tech lightning talk on why no one is using your dashboard, focused on the need to integrate design principles into our data visualization development, and those principles still resonate today.

Principles from graphic design, UX, and other disciplines can take a specific visualization from good to great – the more data nerds and designers collaborate, the better (IMHO). Otherwise, we’ll continue the epidemic of dashboards, many of which are tools designed to do ALL THE THINGS without being tailored enough to be usable by the most important audiences.

An invitation to join the Data Viz Society

If you’re interested in more discourse around data viz, consider joining the Data Viz Society (DVS) and connect with more than 8,000 members from around the globe (it’s free!) who have joined since we launched in February.

DVS connects visualization enthusiasts across disciplines, tech stacks, and expertise, and aims to collect and establish best practices, fostering a community that supports members as they grow and develop data visualization skills.

We (I’m the volunteer Operations Director) have a vibrant Slack workspace packed with topic and location channels (you’ll get an invite when you join), two-week long moderated Topics in DataViz conversations, data viz challenges, our journal (Nightingale), and more.

More on ways to get involved is in this thread – including our data viz practitioner survey results challenge, closing 30 September 2019, which has some fabulous cash prizes for your data viz submissions!

We’re actively looking for more diversity in our geographic representation, and would particularly welcome voices from countries outside of North America. A recent conversation about data viz in LMICs (low and middle income countries) was primarily voices from headquarters staff – we’d love to hear more from the field.

I can’t wait to see what the data viz conversations are at MERL Tech 2020!

Wrapping up MERL Tech DC

On September 6, we wrapped up three days of learning, reflecting, debating and sharing at MERL Tech DC. The conference kicked off with four pre-workshops on September 4: Big Data and Evaluation; Text Analytics; Spatial Statistics; and Responsible Data. Then, on September 5-6, we had our regular two-day conference, including opening talks from Tariq Khokhar, The Rockefeller Foundation, and Yvonne MacPherson, BBC Media Action; one-hour sessions, two-hour sessions, lightning talks, a dashboard contest, a plenary session and two happy hour events.

This year’s theme was “The State of the Field” of MERL Tech and we aimed to explore what we as a field know about our work and what gaps remain in the evidence base. Conference strands included: Tech and Traditional MERL; Data, Data, Data; Emerging Approaches; and The Future of MERL.

Zach Tilton, Western Michigan University; Kerry Bruce, Clear Outcomes; and Alexandra Robinson, Moonshot Global; update the plenary on the State of the Field research that MERL Tech has undertaken over the past year. Photo by Christopher Neu.
Tariq Khokhar of The Rockefeller Foundation on “What Next for Data Science in Development?” Photo by Christopher Neu.
Participants checking out what session to attend next. Photo by Christopher Neu.
Silvia Salinas, Strategic Director FuturaLab; Veronica Olazabal, The Rockefeller Foundation; and Adeline Sibanda, South to South Evaluation Cooperation; talk in plenary about Decolonizing Data and Technology, whether we are designing evaluations within a colonial mindset, and the need to disrupt our own minds. Photo by Christopher Neu.
What is holding women back from embracing tech in development? Patricia Mechael, HealthEnabled; Carmen Tedesco, DAI; Jaclyn Carlsen, USAID; and Priyanka Pathak, Samaj Studio; speak at their “Confidence not Competence” panel on women in tech. Photo by Christopher Neu.
Reid Porter, DevResults; Vidya Mahadevan, Bluesquare; Christopher Robert, Dobility; and Sherri Haas, Management Sciences for Health; go beyond “Make versus Buy” in a discussion on how to bridge the MERL – Tech gap. Photo by Christopher Neu.
Participants had plenty of comments and questions as well. Photo by Christopher Neu.
Drones, machine learning, text analytics, and more. Ariel Frankel, Clear Outcomes, facilitates a group in the session on Emerging MERL approaches. Photo by Christopher Neu.
The Principles for Digital Development have been heavily adopted by the MERL Tech sector as a standard for Digital Development. Allana Nelson, DIAL, shares thoughts on how the Principles can be used as an evaluative tool. Photo by Christopher Neu.
Kate Sciafe Diaz, TechnoServe; explains the “Marie Kondo” approach to MERL Tech in her Lightning Talk: “Does Your Tech Spark Joy? The Minimalist’s Approach to MERL Tech.” Photo by Christopher Neu.

In addition to learning and sharing, one of our main goals at MERL Tech is to create community. “I didn’t know there were other people working on the same thing as I am!” and “This MERL Tech conference is like therapy!” were some of the things we heard on Friday night as we closed down.

Stay tuned for blog posts about sessions and overall impressions, as well as our conference report once feedback surveys are in!

The MERL Tech DC agenda is ready!

Thanks to the hard work of our presenters, session leads, and Steering Committee, the 2019 MERL Tech DC agenda is ready, and we’re really excited about it!

On September 5-6, we’ll have our regular two days of lightning talks, break-out sessions, panels, Fail Fest, demo tables, and networking with folks from diverse sectors who all coincide at the intersection of MERL and Tech. Take a peek at the agenda here.

This year we are offering four pre-workshops on September 4 (separate registration required). Register at one of the links below. (Space is limited, so secure your spot now!)

Dashboard Contest!

You can also enter the MERL Tech Dashboard Contest and win a prize for the best dashboard (based on four criteria, and according to an esteemed panel of judges)! If you have a dashboard that you’d like to show off, sign up here to enter.

Fail Fest!

As usual, Wayan Vota will be organizing a “Fail Fest” during our Happy Hour Reception on Thursday, Sept 5th. More information coming shortly, but start thinking about what you may want to share….

Register Now!

Registration is open, and we normally sell out, so get your tickets soon! We also have a few Demo Tables left, and you can reserve one here.

Please get in touch with any questions, and we’re looking forward to seeing you there!  

MERL Tech DC Session Ideas are due Monday, Apr 22!

MERL Tech is coming up in September 2019, and there are only a few days left to get your session ideas in for consideration! We’re actively seeking practitioners in monitoring, evaluation, research, learning, data science, technology (and other related areas) to facilitate every session.

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. Submit your session ideas by midnight ET on April 22, 2019. You will hear back from us by May 20 and, if selected, you will be asked to submit the final session title, summary and outline by June 17.

Submit your session ideas here by April 22, midnight ET

This year’s conference theme is MERL Tech: Taking Stock

Conference strands include:

Tech and traditional MERL:  How is digital technology enabling us to do what we’ve always done, but better (consultation, design, community engagement, data collection and analysis, databases, feedback, knowledge management)? What case studies can be shared to help the wider sector learn and grow? What kinks do we still need to work out? What evidence base exists that can support us to identify good practices? What lessons have we learned? How can we share these lessons and/or skills with the wider community?

Data, data, and more data: How are new forms and sources of data allowing MERL practitioners to enhance their work? How are MERL Practitioners using online platforms, big data, digitized administrative data, artificial intelligence, machine learning, sensors, drones? What does that mean for the ways that we conduct MERL and for who conducts MERL? What concerns are there about how these new forms and sources of data are being used and how can we address them? What evidence shows that these new forms and sources of data are improving MERL (or not improving MERL)? What good practices can inform how we use new forms and sources of data? What skills can be strengthened and shared with the wider MERL community to achieve more with data?

Emerging tools and approaches: What can we do now that we’ve never done before? What new tools and approaches are enabling MERL practitioners to go the extra mile? Is there a use case for blockchain? What about facial recognition and sentiment analysis in MERL? What are the capabilities of these tools and approaches? What early cases or evidence is there to indicate their promise? What ideas are taking shape that should be tried and tested in the sector? What skills can be shared to enable others to explore these tools and approaches? What are the ethical implications of some of these emerging technological capabilities?

The Future of MERL: Where should we be going and what should the future of MERL look like? What does the state of the sector, of digital data, of technology, and of the world in which we live mean for an ideal future for the MERL sector? Where do we need to build stronger bridges for improved MERL? How should we partner and with whom? Where should investments be taking place to enhance MERL practices, skills and capacities? How will we continue to improve local ownership, diversity, inclusion and ethics in technology-enabled MERL? What wider changes need to happen in the sector to enable responsible, effective, inclusive and modern MERL?

Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines. Please consider these in your session proposals and in how you are choosing your speakers and facilitators.

Submit your session ideas now!

MERL Tech is dedicated to creating a safe, inclusive, welcoming and harassment-free experience for everyone. Please review our Code of Conduct. Session submissions are reviewed and selected by our steering committee.

Upping the Ex Ante: Explorations in evaluation and frontier technologies

Guest post from Jo Kaybryn, an international development consultant currently directing evaluation frameworks, evaluation quality assurance services, and leading evaluations for UN agencies and INGOs.

“Upping the Ex Ante” is a series of articles aimed at evaluators in international development exploring how our work is affected by – and affects – digital data and technology. I’ve been having lots of exciting conversations with people from all corners of the universe about our brave new world. But I’ve also been conscious that for those who have not engaged a lot with the rapid changes in technologies around us, it can be a bit daunting to know where to start. These articles explore a range of technologies and innovations against the backdrop of international development and the particular context of evaluation.  For readers not yet well versed in technology there are lots of sources to do further research on areas of interest.

The series is halfway through, with four articles published.

Computation? Evaluate it!

So far in Part 1 the series has gone back to the olden days (1948!) to consider the origin story of cybernetics and the influences that are present right now in algorithms and big data. The philosophical and ethical dilemmas are a recurring theme in later articles.

Distance still matters

Part 2 examines the problem of distance, something technology has made huge strides in yet never fully solved, with a discussion of what blockchains mean for the veracity of data.

Doing things the ways it’s always been done but better (Qualified)

Part 3 considers qualitative data and shines a light on the gulf between our digital data-centric and analogue-centric worlds and the need for data scientists and social scientists to cooperate to make sense of it.

Doing things the ways it’s always been done but better (Quantified)

Part 4 looks at quantitative data and the implications for better decision making, why evaluators really don’t like an algorithmic “black box”; and reflections on how humans’ assumptions and biases leak into our technologies whether digital or analogue.

What’s next?

The next few articles will see a focus on ethics, psychology and bias; a case study on a hypothetical machine learning intervention to identify children at risk of maltreatment (lots more risk and ethical considerations), and some thoughts about putting it all in perspective (i.e. Don’t Panic!).

Good, Cheap, Fast — Pick Two!

By Chris Gegenheimer, Director of Monitoring, Evaluating and Learning Technology at Chemonics International; and Leslie Sage, Director of Data Science at DevResults. (Originally posted here).

Back in September, Chemonics and DevResults spoke at MERL Tech DC about the inherent compromise involved when purchasing enterprise software. In short, if you want good software that does everything you want exactly the way you want it, cheap software that is affordable and sustainable, and fast software that is available immediately and responsive to emerging needs, you may have to relax one of those requirements. In other words: “good, cheap, fast – pick two!”

Of course, no buyer or vendor would ever completely neglect any one of those dimensions to maximize the other two; instead, we all try to balance these competing priorities as best our circumstances allow. It’s not an “all or nothing” compromise. It’s not even a monolithic compromise: both buyer and vendor can choose which services and domains will prioritize quality and speed over affordability, or affordability and quality over speed, or affordability and speed over quality (although that last one does sometimes come back to bite).

Chemonics and DevResults have been working together to support Chemonics’ projects and its monitoring and evaluation (M&E) needs since 2014, and we’ve had to learn from each other how best to achieve the mythical balance of quality, affordability, and speed. We haven’t always gotten it right, but we do have a few suggestions on how technological partnerships can ensure long-term success.

Observations from an implementer

As a development implementer, Chemonics recognizes that technology advances development outcomes and enables us to do our work faster and more efficiently. While we work in varied contexts, we generally don’t have time to reinvent technology solutions for each project. Vendors bring value when they can supply configurable products that meet our needs in the real world faster and cheaper than building something custom. Beyond the core product functionality, vendors offer utility with staff who maintain the IT infrastructure, continually upgrade product features, and ensure compliance with standards, such as the General Data Protection Regulation (GDPR) or the International Aid Transparency Initiative (IATI). Not every context is right for off-the-shelf solutions. Just because a product exists, it doesn’t mean the collaboration with a software vendor will be successful. But, from an implementer’s perspective, here are a few key factors for success:

Aligned incentives

Vendors should have a keen interest in ensuring that their product meets your requirements. When they are primarily interested in your success in delivering your core product or service — and not just selling you a product — the relationship is off to a good start. If the vendor does not understand or have a fundamental interest in your core business, this can lead to diverging paths, both in features and in long-term support. In some cases, fresh perspectives from non-development outsiders are constructive, but being the outlier client can contribute to project failure.

Inclusion in roadmap

Assuming the vendor’s incentives are aligned with your own, it should be interested in your feedback as well as responsive to making changes, even to some core features. As our staff puts systems through their paces, we regularly come up with feature requests, user interface improvements, and other feedback. We realize that not every feature request will make it into code, but behind every request is a genuine need, and vendors should be willing to talk through each need to figure out how to address it.

Straight talk

There’s a tendency for tech vendors, especially sales teams, to have a generous interpretation of system capabilities. Unmet expectations can result from a client’s imprecise requirements or a vendor’s excessive zeal, which leads to disappointment when you get what you asked for, but not what you wanted. A good vendor will clearly state up front what its product can do, cannot do, and will not ever do. In return, implementers have a responsibility to make their technical requirements as specific, well-scoped, and operational as possible.

Establish support liaisons

Many vendors offer training, help articles, on-demand support, and various other resources for turning new users into power users, but relying on the vendor to shoulder this burden serves no one. By establishing a solid internal front-line support system, you can act as intermediaries and translators between end users and the software vendor. Doing so has meant that our users don’t have to be conversant in developer-speak or technical language, nor does our technology partner have to field requests coming from every corner of our organization.

Observations from a tech vendor

DevResults’ software is used to manage international development data in 145 countries, and we support M&E projects around the world. We’ve identified three commonalities among organizations that implement our software most effectively: 1) the person who does the most work has the authority to make decisions, 2) the person with the most responsibility has technical aptitude and a whatever-it-takes attitude, and 3) breadth of adoption is achieved when the right responsibilities are delegated to the project staff, building capacity and creating buy-in.

Organizational structure

We’ve identified two key factors that predict organizational success: dedicated staff resources and their level of authority. Most of our clients are implementing a global M&E system for the first time, so the responsibility for managing the rollout is often added to someone’s already full list of duties, which is a recipe for burnout. Even if a “system owner” is established and space is made in their job description, if they don’t have the authority to request resources or make decisions, it restricts their ability to do their job well. Technology projects are regularly entrusted to younger, more junior employees, who are often fast technical learners, but their effectiveness is hindered by having to constantly appeal to their boss’ boss’ boss about every fork in the road. Middle-sized organizations are typically advantaged here because they have enough staff to dedicate to managing the rollout, yet few enough layers of bureaucracy that such a person can act with authority.

Staffing

Technical expertise is critical when it comes to managing software implementations. Too often, technical duties are foisted upon under-prepared (or less-than-willing) staffers. This may be a reality in an era of constrained budgets, but asking experts in one thing to operate outside of their wheelhouse is another recipe for burnout. In the software industry, we conduct technical exams for all new hires. We would be thrilled to see the practice extended across the ICT4D space, even for roles that don’t involve programming but do involve managing technical products. Even so, there’s a certain aspect of the ideal implementation lead that comes down to personality and resourcefulness. The most successful teams we work with have at least one person who has the willingness and the ability to do whatever it takes to make a new system work. Call it ‘ownership,’ call it a ‘can-do’ attitude, but whatever it is, it works!

Timing and resource allocation

Change management is hard, and introducing a new system requires a lot of work up front. There’s a lot that headquarters personnel can do to unburden project staff (configuring the system, developing internal guidance and policies, etc.), but sometimes it’s better to involve project staff directly and early. When project staff are involved in the system configuration and decision-making process, we’ve seen them demonstrate more ownership of the system and less resentment of “another thing coming down from headquarters.” System setup and configuration can also be a training opportunity, further developing internal capacity across the organization. Changing systems requires conversations across the entire org chart; well-designed software can facilitate those conversations. But even when implementers do everything right, they should always expect challenges, plan for change management, and adopt an agile approach to managing a system rollout.

Good, cheap, fast: pick THREE!

As we said, there are ways to balance these three dimensions. We’ve managed to strike a successful balance in this partnership because we understand the incentives, constraints, and priorities of our counterpart. The software as a service (SaaS) model is instrumental here because it ensures software is well-suited to multiple clients across the industry (good), more affordable than custom builds (cheap), and immediately available on day one (fast). The implicit tradeoff is that no one client can control the product roadmap, but when each and every customer has a say, the end product represents the collective wisdom, best practice, and feedback of everyone. It may not be perfectly tailored to each and every client’s preferences, but in the end, that’s usually a good thing.

Join us for MERL Tech DC, Sept 5-6th!

MERL Tech DC: Taking Stock

September 5-6, 2019

FHI 360 Academy Hall, 8th Floor
1825 Connecticut Avenue NW
Washington, DC 20009

We gathered at the first MERL Tech Conference in 2014 to discuss how technology was enabling the field of monitoring, evaluation, research and learning (MERL). Since then, rapid advances in technology and data have altered how most MERL practitioners conceive of and carry out their work. New media and ICTs have permeated the field to the point where most of us can’t imagine conducting MERL without the aid of digital devices and digital data.

The rosy picture of the digital data revolution and an expanded capacity for decision-making based on digital data and ICTs has been clouded, however, with legitimate questions about how new technologies, devices, and platforms — and the data they generate — can lead to unintended negative consequences or be used to harm individuals, groups and societies.

Join us in Washington, DC, on September 5-6 for this year’s MERL Tech Conference where we’ll be taking stock of changes in the space since 2014; showcasing promising technologies, ideas and case studies; sharing learning and challenges; debating ideas and approaches; and sketching out a vision for an ideal MERL future and the steps we need to take to get there.

Conference strands:

Tech and traditional MERL:  How is digital technology enabling us to do what we’ve always done, but better (consultation, design, community engagement, data collection and analysis, databases, feedback, knowledge management)? What case studies can be shared to help the wider sector learn and grow? What kinks do we still need to work out? What evidence base exists that can support us to identify good practices? What lessons have we learned? How can we share these lessons and/or skills with the wider community?

Data, data, and more data: How are new forms and sources of data allowing MERL practitioners to enhance their work? How are MERL Practitioners using online platforms, big data, digitized administrative data, artificial intelligence, machine learning, sensors, drones? What does that mean for the ways that we conduct MERL and for who conducts MERL? What concerns are there about how these new forms and sources of data are being used and how can we address them? What evidence shows that these new forms and sources of data are improving MERL (or not improving MERL)? What good practices can inform how we use new forms and sources of data? What skills can be strengthened and shared with the wider MERL community to achieve more with data?

Emerging tools and approaches: What can we do now that we’ve never done before? What new tools and approaches are enabling MERL practitioners to go the extra mile? Is there a use case for blockchain? What about facial recognition and sentiment analysis in MERL? What are the capabilities of these tools and approaches? What early cases or evidence is there to indicate their promise? What ideas are taking shape that should be tried and tested in the sector? What skills can be shared to enable others to explore these tools and approaches? What are the ethical implications of some of these emerging technological capabilities?

The Future of MERL: Where should we be going and what should the future of MERL look like? What does the state of the sector, of digital data, of technology, and of the world in which we live mean for an ideal future for the MERL sector? Where do we need to build stronger bridges for improved MERL? How should we partner and with whom? Where should investments be taking place to enhance MERL practices, skills and capacities? How will we continue to improve local ownership, diversity, inclusion and ethics in technology-enabled MERL? What wider changes need to happen in the sector to enable responsible, effective, inclusive and modern MERL?

Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines.

Submit your session ideas, register to attend the conference, or reserve a demo table for MERL Tech DC now!

You’ll join some of the brightest minds working on MERL across a wide range of disciplines – evaluators, development and humanitarian MERL practitioners, small and large non-profit organizations, government and foundations, data scientists and analysts, consulting firms and contractors, technology developers, and data ethicists – for 2 days of in-depth sharing and exploration of what’s been happening across this multidisciplinary field and where we should be heading.

MERL for Blockchain Interventions: Integrating MERL into Token Design

Guest post by Michael Cooper. Mike is a Senior Social Scientist at Emergence who advises foreign assistance funders, service providers, and evaluators on blockchain applications. He can be reached at emergence.cooper@gmail.com

Tokens Could be Our Focus

There is no real evidence base about what does and does not work when applying blockchain technology to interventions seeking social impact. Most current blockchain interventions are driven by developers (programmers) and visionary entrepreneurs. There is little thinking in current blockchain interventions around designing for “social” impact (there is an overabundant trust in technology to achieve the outcomes and little focus on the humans interacting with the technology) and little integration of relevant evidence from behavioral economics, behavior change design, human-centered design, etc.

To build the needed evidence base, Monitoring, Evaluation, Research and Learning (MERL) practitioners will have to learn not only the broad strokes of blockchain technology but also the specifics of token design and tokenomics (the political economy of tokenized ecosystems). Token design could become the focal point for MERL on blockchain interventions since:

  • Most, if not all, blockchain interventions will involve some type of desired behavior change
  • The token provides the link between the ledger (which is the blockchain) and the social ecosystem created by the token in which the behavior change is meant to happen
  • Hence the token is the “nudge” meant to leverage behavior change in the social ecosystem while governing the transactions on the blockchain ledger. 

(This post will focus on these points rather than offering a full discussion of what tokens are and how they create ecosystems, but there are some very good resources that do this, which you can review at your leisure. The Complexity Institute has published a book exploring the attributes of complexity and the main themes of tokenomics, while Outlier Ventures has published what I consider to be the best guidance on token design. The Outlier Ventures guidance contains many of the tools MERL practitioners will be familiar with (problem analysis, stakeholder mapping, etc.) and should be consulted.)

Hence, by understanding token design and its requirements, and mapping it against our current MERL thinking, tools, and practices, we can develop the new thinking and tools that could be the starting point for building our much-needed evidence base.

What is a “blockchain intervention”? 

As MERL practitioners, we roughly define an “intervention” as a group of inputs and activities meant to leverage outcomes within a given ecosystem. “Interventions” are what we are usually mandated to assess, evaluate, and help improve.

When thinking about MERL and blockchain, it is useful to think of two categories of “blockchain interventions”. 

1) Integrating the blockchain into MERL data collection, entry, management, analysis or dissemination practices and

2) MERL strategies for interventions using the blockchain in some way, shape, or form.

Here we will focus on #2, and in so doing demonstrate that while the blockchain is an innovative, potentially disruptive technology, evaluating its application to social outcomes is still a matter of assessing behavior change against the dimensions of intervention design.

Designing for Behavior Change

We generally design interventions (programs, projects, activities) to “nudge” a certain type of behavior (stated as outcomes in a theory of change) among a certain population (beneficiaries, stakeholders, etc.). We often attempt to integrate mechanisms of change into our intervention designs, but frequently fail to for a variety of reasons (lack of understanding, lack of resources, lack of political will, etc.). This lack of due diligence in design is partly responsible for the thin evidence base around what works and what does not work in our current universe of interventions.

Enter blockchain technology, which, as MERL practitioners, we will be responsible for assessing in the foreseeable future. We will need to determine how interventions using the blockchain attempt to nudge behavior, what behaviors they seek to nudge, among whom, when, and how well the intervention’s design accomplishes these functions. To do that, we will need to better understand how blockchains use tokens to nudge behavior.

The Centrality of the Token

We have all used tokens before. Stores issue coupons that can only be redeemed at those stores, we get receipts for groceries as soon as we pay, and arcades make you buy tokens instead of just using quarters. The coupons and arcade tokens can be considered utility tokens, meaning they can only be used in a specific “ecosystem”: in this case, a store and an arcade respectively. The grocery store receipt is a token because it demonstrates ownership: if you are stopped on the way out of the store and show your receipt, you are demonstrating ownership of the foodstuffs in your bag.

Whether you realize it or not at the time, these tokens are trying to nudge your behavior. The store gives you the coupon because the more time you spend in the store redeeming coupons, the greater the likelihood you will spend additional money there. The grocery store wants you to pay for all your groceries, while the arcade wants you to buy more tokens than you end up using.

If needed, we could design MERL strategies to assess how well these different tokens nudged the desired behaviors. We would do this, in part, by thinking about how each token is designed relative to the behavior it wants (i.e. the value, frequency and duration of coupons, etc.).

Thinking about these ecosystems and their respective tokens will help us understand the interdependence between 1) the blockchain as a ledger that records transactions, 2) the token that captures the governance structures for how transactions are stored on the blockchain ledger as well as the incentive models for 3) the mechanisms of change in the social eco-system created by the token. 

Figure #1:  The inter-relationship between the blockchain (ledger), token and social eco-system

Token Design as Intervention Design  

Just as we assess theories of change and their mechanisms against intervention design, we will assess blockchain-based interventions against their token design. This is because a blockchain token captures all the design dimensions of an intervention: the problem to be solved, the stakeholders and how they influence the problem (and thus the solution), stakeholder attributes (as mapped out in something like a stakeholder analysis), the beneficiary population, assumptions/risks, etc.

Outlier Ventures has developed what it calls a Token Utility Canvas as a milestone in its token design process. The canvas captures many of the problem diagnostic, stakeholder assessment, and other due diligence tools that will be familiar to MERL practitioners from intervention design, and its dimensions can be correlated with those of an evaluability assessment tool (used here as a stand-in for the components of a healthy intervention design, since an evaluability assessment examines the health of all the components of a design). Hence token design can largely be thought of as intervention design and evaluated as such.

Table #1: Comparing Token Design with Dimensions of Program Design (as represented in an Evaluability Assessment)

This table is not meant to be exhaustive, and not all of the fields are explained here, but it could be a useful starting point in developing our own thinking and tools for this emerging space.

The Token as a Tool for Behavior Change

Coming up with a taxonomy of blockchain interventions and their tokens is a necessary task, but for now it suffices to say that any blockchain intervention that needs to nudge behavior will have to have a token.

Consider supply chain management, where blockchains are increasingly being used as the ledger system. Supply chains typically comprise numerous actors packaging, shipping, receiving, and applying quality control protocols to goods, each keeping their own ledger of the relevant goods as they snake their way through the supply chain. This creates ample opportunity for fraud and theft, and high costs for reconciling the different actors’ ledgers at different points in the chain. With the blockchain as the common ledger, many of these costs diminish: a single ledger of trusted data lets transactions (shipping, receiving, repackaging, etc.) happen more seamlessly, and reconciliation costs drop.

However, even in “simple” applications such as this there are behavior change implications. We still want the supply chain actors to perform their functions in a manner that adds value to the supply chain ecosystem as a whole, rewarding them for good behavior within the ecosystem and punishing them for bad.

What if a shipper trying to pass on faulty products had already deposited a certain amount of currency in an escrow account (housed in a smart contract on the blockchain)? If they are found attempting a prohibited behavior (passing on faulty products), they automatically surrender a certain amount from the escrow account. How much should be deposited in the escrow account? What is the ratio between the degree of punishment and the undesired action? These are behavior questions around a mechanism of change; they are dimensions of current intervention designs and will be increasingly relevant in token design.
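The escrow mechanism described above can be sketched in plain Python (a hypothetical simulation, not real smart-contract code; the deposit amount and `penalty_ratio` are illustrative assumptions, and the validation step is abstracted to a boolean):

```python
# Hypothetical sketch: a shipper stakes a deposit in escrow; a validated
# violation (e.g. passing on faulty products) forfeits part of the stake.
class EscrowAccount:
    def __init__(self, shipper, deposit, penalty_ratio):
        self.shipper = shipper
        self.balance = deposit
        # Share of the remaining stake forfeited per validated violation:
        # choosing this ratio is exactly the token design question above.
        self.penalty_ratio = penalty_ratio

    def report_violation(self, validated):
        """Deduct a penalty only if a validator confirms the violation."""
        if not validated:
            return 0.0
        penalty = self.balance * self.penalty_ratio
        self.balance -= penalty
        return penalty

escrow = EscrowAccount("shipper-42", deposit=1000.0, penalty_ratio=0.10)
print(escrow.report_violation(validated=True))  # forfeits 100.0
print(escrow.balance)                           # 900.0 remains staked
```

In a real deployment this logic would live in an on-chain smart contract, and the `validated` flag would come from whatever validation process, automated or human, the designers choose.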

The point of this is to demonstrate that even “benign” applications of the blockchain, like supply chain management, have behavior change implications and thus require good due diligence in token design.

There is a lot that could be said about the validation function of this process: who validates that bad behavior has taken place and should be punished, or that good behavior should be rewarded? There are lessons to be learned from results-based contracting and the role of the validator in such a contracting vehicle. This validating function will need to be thought out in terms of what can be automated and what needs a “human touch” (and who is responsible, what methods they should use, etc.).

Implications for MERL

If tokens are fundamental to MERL strategies for blockchain interventions, there are several critical implications:

  • MERL practitioners will need to be heavily integrated into the due diligence processes and tools for token design
  • MERL strategies will need to be highly formative, if not developmental, in facilitating the timeliness and overall effectiveness of the feedback loops informing token design
  • New thinking and tools will need to be developed to assess the relationships between blockchain governance, token design and mechanisms of change in the resulting social ecosystem. 

The opportunity cost for impact and learning rises the less MERL practitioners are integrated into the due diligence of token design. This is because the costs of adapting a token design are relatively low compared to current social interventions, partly due to the ability to integrate automated feedback.

Blockchain-based interventions present us with significant learning opportunities because the technology itself can serve as a data collection/management tool for learning about what does and does not work. Feedback from an appropriate MERL strategy could inform token design decisions that are coded into the token on an iterative basis. For example, as stakeholders’ incentives shift (e.g. supply chain shippers incur new costs and their value proposition changes), token adaptation can respond in a timely fashion so long as the MERL feedback informing the design is accurate.

We also need to determine which components of these feedback loops can be automated and which require a “human touch”. For example, which dimensions of token design can be informed by smart infrastructure (e.g. temperature gauges on shipping containers in the supply chain) versus household surveys completed by enumerators? This will be a task to complete and iteratively improve, starting with initial token design and lasting through the lifecycle of the intervention. Token design dimensions, outlined in the Token Utility Canvas, and decision-making will need to yield MERL questions matched to the best strategy for answering them, automated or human, much as we do now in current interventions.

Many of our current due diligence tools used in both intervention and evaluation design (stakeholder mapping, problem analysis, cost-benefit analysis, value propositions, etc.) will need to be adapted to the types of relationships found within a tokenized ecosystem. These include the relationships of influence between the social ecosystem and the blockchain ledger itself (or, more specifically, the governance of that ledger), as demonstrated in Figure #1.

This could be our biggest priority as MERL practitioners. While blockchain interventions could create incredible opportunities for social experimentation, the need for human-centered due diligence (incentivizing humans toward positive behavior change) in token design is critical. Overreliance on technology to drive social outcomes is already a well-evidenced opportunity cost, one that could be avoided in blockchain-based solutions if the gap between technologists, social scientists, and practitioners can be bridged.

Can digital tools be used to help young mothers in Kenya form new habits?

Guest post from Haanim Galvaan, Content Designer at Every1Mobile

A phone is no longer just a phone. It’s your connection to the rest of the world, it’s your personal assistant, and now, it’s your best friend who gives you encouragement and reinforcement for your good habits.

At least that’s what mobile phones have become for those who make use of habit-boosting apps.

If you’re trying to quit smoking and want to build a streak of puff-free days, the HabitBull app can help you do that. Want to establish a habit in your team that makes use of social accountability? Try Habitica. Do you want positive reinforcement for your activities in a motivational, rewarding voice? Productive is the app for that.

But what if you’re a young mum, living in the urban slums of Nairobi and you want to improve the health and wellbeing of your children? Try U Afya’s 10-Day Challenge.

U Afya is an online community for young mothers and mothers-to-be to learn about topics related to health, hygiene and family life. The site takes a holistic approach to giving young mothers the knowledge and confidence they need to enact certain healthy behaviours. It’s a place to discuss, give and receive advice, take free online courses, and now, to establish good habits with a custom-built habit tracking tool.

The 10-Day Handwashing Challenge was launched using new habit-tracking functionality. Users were encouraged to perform an activity related to handwashing each day, e.g. wash your hands with soap for 20 seconds. The challenges were formulated around the Lifebuoy “5 Key Moments” model. Participants were required to log their activity on the site by completing a survey.

Each day the site fed users a different hygiene-related tip, as well as links to additional content. At the end of the challenge, users were pushed to take a pledge and make a commitment to handwashing.

U Afya’s Habit Tracker differs from other habit-boosting apps in that it is not an app! It is built into a low-data-usage site optimised for the data-sensitive target audience in the Nairobi slums. The tracker provides a rich, visual experience using simple functionality compatible with both feature phones and smartphones.

We created a sense of urgency.

Users were required to log their activity for 10 days within a 30-day period. Attaching a “deadline” added a measure of urgency to the activity. There is no space for procrastination. The message is: establish your habit now or you never will!
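The completion rule described above can be sketched in a few lines of Python (a hypothetical reconstruction of the logic; the function name and dates are illustrative, and the site’s actual implementation may differ):

```python
from datetime import date, timedelta

REQUIRED_LOGS = 10  # activity days needed to complete the challenge
WINDOW_DAYS = 30    # deadline: logs must fall within 30 days of starting

def challenge_completed(start, log_dates):
    """True if the user logged enough distinct days before the deadline."""
    deadline = start + timedelta(days=WINDOW_DAYS)
    valid_logs = {d for d in log_dates if start <= d <= deadline}
    return len(valid_logs) >= REQUIRED_LOGS

start = date(2019, 6, 1)
# Logging every other day completes the challenge well inside the window.
logs = {start + timedelta(days=2 * i) for i in range(10)}
print(challenge_completed(start, logs))  # True
```

A user who spreads the same ten logs over more than 30 days would miss the deadline and not complete the challenge.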

It is based on behaviour change levers.

The 10-Day Handwashing Challenge and its accompanying content around the site were all based on the behaviour change approach employed by Lifebuoy in Way of Life, namely Awareness, Commitment, Reinforcement and Reward.

The approach was executed in the following ways:

Awareness: Introducing the handwashing theme with engaging, educational content that linked to and from the 10-Day Handwashing Challenge:

  • Diseases caused by lack of handwashing (article)
  • 5 Tips for washing your hands correctly (article)
  • Global Handwashing Day! – The 5 Key times to wash our hands (article)
  • How much do you know about handwashing? (quiz)

Commitment: Encouraging users to take the Handwashing Pledge

Reinforcement: The habit tracker, which users returned to each day to self-report their activity

Reward: Participants stood a chance to win a hygiene gift bag

Contents of the hygiene gift bag given to 5 winners.

The results

86 users started the challenge and 26 completed it within the 30-day challenge period, a completion rate of 30% overall. Considering that users had to return to the challenge 10 times, this completion rate is quite high.

The biggest drop-off happened between Day 1 and Day 2, when 28 users fell away; drop-off rates then decreased gradually over the remaining days. The graph below shows that most users who made it to Day 5 went on to complete the challenge: only 11 users dropped off between Day 5 and Day 10.

26 out of 86 users created a habit.
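The headline figures above can be reproduced with a few lines of arithmetic. Only the starting count (86), the finishing count (26), the Day 1 to Day 2 loss (28) and the Day 5 to Day 10 loss (11) are stated in the post; the intermediate Day 2 and Day 5 counts below are derived from those figures, and the helper function is purely illustrative.

```python
def drop_off(daily_counts):
    """Number of users lost between each consecutive pair of measurement points."""
    return [a - b for a, b in zip(daily_counts, daily_counts[1:])]

started, completed = 86, 26
completion_rate = completed / started   # ~0.30, the 30% quoted above

# Counts at Day 1, Day 2, Day 5 and Day 10 (Day 2 and Day 5 derived)
counts = [86, 58, 37, 26]
losses = drop_off(counts)               # [28, 21, 11]
```

The derived losses match the narrative: the steepest fall is at the start, and attrition between the measurement points shrinks as the challenge progresses.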

In addition to participation data, further feedback was gathered by interspersing survey questions throughout the challenge. This additional questioning determined that 91% of challenge-takers feel they can afford to buy soap for their families.

Feedback:

Users had overwhelmingly positive feedback about the challenge.

“It was so educating and hygienically I have improved. It’s now a routine to me, washing hands in any case”

Learnings:

Keep it simple

It’s not always necessary to create a fancy app to push a new activity. The U Afya 10-Day Challenge was built on a platform that users are already familiar with. By embedding the challenge in an environment they already use, the site offered them something new and exciting on each visit.

Users were required to do one thing each day and report it with a single action: taking a one-question survey. Requiring minimal effort from your users can maximise uptake.

Overall, the approach was one of simplicity: simplicity in the design of the functionality, simplicity in the daily action, and simplicity in creating the habit.

With this approach the U Afya 10-Day Handwashing Challenge helped 26 young mothers to create a new habit of washing their hands every day at key moments.

Conclusion:

This first iteration of U Afya’s 10-Day Handwashing Challenge was a pilot, but the results suggest that it is possible to use low-cost, low-tech means to encourage habit formation. It is also possible for sophisticated behaviour change theory and practice to reach some of the most vulnerable groups, using the very phones they have in their hands.

The challenge is also a useful tool to help us understand the impact of our behaviour change campaigns in the real world.

Next steps

All the user feedback and learnings mentioned above will be analysed to understand how the approach can be strengthened to reach even more people, increase compliance, and encourage positive habit creation.