Tag Archives: MERL Tech

What’s Happening with Tech and MERL?

by Linda Raftree, Independent Consultant and MERL Tech organizer

Back in 2014, the humanitarian and development sectors were in the heyday of excitement over innovation and Information and Communication Technologies for Development (ICT4D). The role of ICTs specifically for monitoring, evaluation, research and learning (aka “MERL Tech”) had not been systematized (as far as I know), and it was unclear whether there actually was “a field.” I had the privilege of writing a discussion paper with Michael Bamberger to explore how and why new technologies were being tested and used in the different steps of a traditional planning, monitoring and evaluation cycle. (See graphic 1 below, from our paper.)

The approaches highlighted in 2014 focused on mobile phones: for example, text messages (SMS), mobile data gathering, use of mobiles for photos and recording, and mapping with handheld global positioning system (GPS) devices or GPS installed in mobile phones. Promising technologies included tablets, which were only beginning to be used for M&E; “the cloud,” which enabled easier updating of software and applications; remote sensing and satellite imagery; dashboards; and online software that helped evaluators do their work more easily. Social media was also really taking off in 2014. It was seen as a potential way to monitor discussions among program participants and gather their feedback, and it was considered an underutilized tool for wider dissemination of evaluation results and learning. Real-time data, big data, and feedback loops were emerging as ways to improve program monitoring and enable quicker adaptation.

In our paper, we outlined five main challenges for the use of ICTs for M&E: selectivity bias; technology- or tool-driven M&E processes; over-reliance on digital data and remotely collected data; low institutional capacity and resistance to change; and privacy and protection. We also suggested key areas to consider when integrating ICTs into M&E: quality M&E planning; design validity; the value-add (or not) of ICTs; using the right combination of tools; adapting and testing new processes before roll-out; technology access and inclusion; motivation to use ICTs; privacy and protection; unintended consequences; local capacity; measuring what matters (not just what the tech allows you to measure); and effectively using and sharing M&E information and learning.

We concluded that:

  • The field of ICTs in M&E is emerging and activity is happening at multiple levels and with a wide range of tools and approaches and actors. 
  • The field needs more documentation on the utility and impact of ICTs for M&E. 
  • Pressure to show impact may open up space for testing new M&E approaches. 
  • A number of pitfalls need to be avoided when designing an evaluation plan that involves ICTs. 
  • Investment in the development, application and evaluation of new M&E methods could help evaluators and organizations adapt their approaches throughout the entire program cycle, making them more flexible and adjusted to the complex environments in which development initiatives and M&E take place.

Where are we now? MERL Tech in 2019

Much has happened globally over the past five years in the wider field of technology, communications, infrastructure, and society, and these changes have influenced the MERL Tech space. Our 2014 focus on basic mobile phones, SMS, mobile surveys, mapping, and crowdsourcing might now appear quaint, considering that worldwide access to smartphones and the Internet has expanded beyond the expectations of many. We know that access is not evenly distributed, but the fact that more and more people are getting online cannot be disputed. Some MERL practitioners are using advanced artificial intelligence, machine learning, biometrics, and sentiment analysis in their work. And as smartphone and Internet use continue to grow, more data will be produced by people around the world. The way that MERL practitioners access and use data will likely continue to shift, and the composition of MERL teams and their required skillsets will also change.

The excitement over innovation and new technologies seen in 2014 could also be seen as naive, however, considering some of the negative consequences that have since emerged: social media-inspired violence (such as that in Myanmar), election and political interference through the Internet, misinformation and disinformation, and the race to the bottom in the online “gig economy.”

In this changing context, a team of MERL Tech practitioners (both enthusiasts and skeptics) embarked on a second round of research in order to try to provide an updated “State of the Field” for MERL Tech that looks at changes in the space between 2014 and 2019.

Based on MERL Tech conferences and wider conversations in the MERL Tech space, we identified three general waves of technology emergence in MERL:

  • First wave: Tech for Traditional MERL: Use of technology (including mobile phones, satellites, and increasingly sophisticated databases) to do ‘what we’ve always done,’ with a focus on digital data collection and management. For these uses of “MERL Tech” there is a growing evidence base. 
  • Second wave: Big Data: Exploration of big data and data science for MERL purposes. While plenty has been written about big data in other sectors, the literature on the use of big data and data science for MERL is limited, and it focuses more on potential than on actual use. 
  • Third wave: Emerging approaches: Technologies and approaches that generate new sources and forms of data; offer different modalities of data collection; provide new ways to store and organize data; and provide new techniques for data processing and analysis. The potential of these has been explored, but there is little evidence on their actual use for MERL. 

We’ll be doing a few sessions at the American Evaluation Association conference this week to share what we’ve been finding in our research. Please join us if you’ll be attending the conference!

Session Details:

Thursday, Nov 14, 2.45-3.30pm: Room CC101D

Friday, Nov 15, 3.30-4.15pm: Room CC101D

Saturday, Nov 16, 10.15-11am: Room CC200DE

Ethics and unintended consequences: The answers are sometimes questions

by Jo Kaybryn

Our MERL Tech DC session, “Ethics and unintended consequences of digital programs and digital MERL,” was a facilitated discussion about some of the challenges we face in the Wild West of digital and technology-enabled MERL and the data that it generates. Here are some of the things that stood out from discussions with participants and from our own experience.

Purposes

Sometimes we are not clear on why we are collecting data. ‘Just because we can’ is not a valid reason to collect or use data and technology. What purposes are driving our data collection and use of technology? What is the problem we are trying to solve? A lack of specificity can allow us to stray into speculative data collection — if we’re collecting data on X, then it’s a good opportunity to collect data on Y “in case we need it in the future.” Do we ever really need it in the future? And if we do go back to it, we often find that because we didn’t collect the data on Y with a specific purpose, it’s not the “right” data for our needs. So let’s always ask ourselves: why are we collecting this data, and do we really need it?

Tensions

Projects are increasingly under pressure to be more efficient and cost-effective in their data collection, yet the need or desire to conduct more robust assessments can require the collection of data on multiple dimensions within a community. These two dynamics are often in conflict with each other. Here are three questions that can help guide our decision-making:

  • Are there existing data sets that are “good enough” to meet the M&E needs of a project? Often there are, and they are collected regularly enough to be useful. Lean on partners who understand the data space to help map out what exists and what really needs to be collected. Leverage partners who are innovating in the data space – can machine learning and AI-produced data meet 80% of your needs? If so, consider it.
  • What data are we critically in need of to assess a project? Build an efficient data collection methodology that considers respondent burden and potentially includes multiple channels for receiving responses to increase inclusivity.
  • What will the data be used for? Sensitive contexts and life or death decisions require a different level of specificity and periodicity than less sensitive projects. Think about data from this lens when deciding which information to collect, how often to collect it, and who to collect it from.

Access

It is worth exploring questions of access in our data collection practices. Who has access to the data and the technology? Do the people whom the data is about have access to it? Have we considered the harms that could come from the collection, storage, and use of data? For instance, while it can be useful to know where all the clients accessing a pregnancy clinic are located in order to design better services, an unintended consequence may be that others gain the ability to identify people who are pregnant, something those people might prefer to keep private. What can we do to protect the privacy of vulnerable populations? Also, going digital can be helpful, but if a person or community implicated in a data collection endeavour does not have access to technology or to a charging point, are we not just increasing or reinforcing inequality?

Transparency

While we often advocate for transparency in many parts of our industry, we are not always transparent about our data practices. Are we willing to tell others, including community members, why we are collecting data, why we are using technology, and how we are using the information? If we are clear on our purpose but not willing to be transparent about it, that might be a good reason to reconsider. Yet transparency does not equate to accountability, so what are the mechanisms for ensuring greater accountability towards the people and communities we seek to serve?

Power and patience

One of the issues we’re facing is power imbalances. The demands that are made of us from donors about data, and the technology solutions that are presented to us, all make us feel like we’re not in control. But the rules haven’t been written yet — we get to write them.

One of the lessons from the responsible data workshop leading up to the conference was that organisations can get out in front of demands for data by developing their own data management and privacy policies. From this position it is easier to enter into dialogues and negotiations, with the organisational policy as your backstop. Therefore, it is worth asking: who has power? For what? Where does it reside, and how can we rebalance it?

Literacy underpins much of this: linguistic, digital, identity, and ethical literacy. Often when it comes to ‘digital’ we immediately fall under the spell of the tyranny of the urgent. Therefore, in what ways can we adopt a more ‘patient’ or ‘reflective’ practice with respect to digital?

Big Data, Big Responsibilities

by Catherine Gwin

Big data comes with big responsibilities, where both the funder and recipient organization have ethical and data security obligations.

Big data allows organizations to count and bring visibility to marginalized populations and to improve decision-making. However, concerns about data privacy, security, and integrity pose challenges for data collection and data preservation. What does informed consent look like in data collection? What are the potential risks we bring to populations? What are the risks of compliance?

Throughout the MERL Tech DC panel, “Big Data, Big Responsibilities,” Mollie Woods, Senior Monitoring, Evaluation and Learning (MEL) Advisor at ChildFund International, and Michael Roytman, Founder and Board Director of Dharma Platform, unpacked some of the challenges based on their experiences. Sam Scarpino, Dharma’s Chief Strategy Officer, served as the session moderator, posing important questions about this area.

The session highlighted three takeaways organizations should consider when approaching data security.

1) Language Barriers between Evaluators and Data Scientists

Both Roytman and Woods agreed that the divide between evaluators and data scientists stems from a lack of knowledge of each other’s fields. How do you ask a question when you didn’t know you had to?

In Woods’ experience, the Monitoring and Evaluation team and the IT team each have a role in data security, but they work independently. The rapid evolution of the M&E field leaves little time for staying attuned to data security needs. Additionally, the organization’s limited resources can impede the IT team from supporting programmatic data security.

A potential solution ChildFund has considered is investing in an IT person with a focus on MERL who has experience and knowledge in the international or humanitarian sphere. However, many organizations fall short when it comes to financing data security. In addition, identifying an individual with these skills can be challenging.

2) Data Collection

Data breaches expose confidential information, which puts vulnerable populations at risk of exploitative use of their data and potential harm. As we gather data, we must ask what informed consent looks like. Are we communicating to beneficiaries the risks of releasing their personal information?

In Woods’ experience, ChildFund approaches data security through a child-safeguarding lens across stakeholders and program participants, where all are responsible for data security. Its child safeguarding policy entails data security protocols and privacy; however, Woods mentioned that dissemination and implementation across countries remains a lingering question. Many in-country civil society organizations lack the capacity, knowledge, and resources to implement data security protocols, especially if they are working in a country context that does not have laws, regulations, or frameworks related to data security and privacy. Currently, ChildFund is advocating for refresher trainings on the policy so that all global partners stay updated on organizational changes.

3) Data Preservation

Data breaches are a privacy concern whenever an organization’s data includes sensitive information about individuals, putting beneficiaries at risk of exploitation by bad actors. Roytman explained that specific actors, risks, and threats affect specific kinds of data, though humanitarian aid organizations are not always a primary target. Nonetheless, this shouldn’t distract organizations from potential risks; rather, it should open discussion around how to identify and mitigate them.

Protecting sensitive data requires a proper security system, something that not all platforms provide, especially if they are free. Ultimately, security is a financial investment that requires time in order to avoid and mitigate risks and potential threats. In order to increase support and investment in security, ChildFund is working with Dharma to pilot a small program to demonstrate the use of big data analytics with a built-in data security system.

Roytman suggested approaching ethical concerns by applying the CIA Triad: Confidentiality, Integrity, and Availability. There will always be tradeoffs, he said. If we don’t properly invest in data security and mitigate potential risks, there will be additional challenges to data collection. If we don’t understand data security, how can we ensure informed consent?

Many organizations find themselves doing more harm than good due to lack of funding. Big data can be an inexpensive approach to collecting large quantities of data, but if it leads to harm, there is a problem. This is a complex issue to resolve; however, as Roytman concluded, the opposite of complexity is not simplicity, but rather transparency.

See Dharma’s blog post about this session here.

5 tips for operationalizing Responsible Data policy

By Alexandra Robinson and Linda Raftree

MERL and development practitioners have long wrestled with complex ethical, regulatory, and technical aspects of adopting new data approaches and technologies. The topic of responsible data (RD) has gained traction over the past five years or so, and a handful of early adopters have developed and begun to operationalize institutional RD policies. Translating policy into practical action, however, can feel daunting to organizations. Constrained budgets, complex internal bureaucracies, and ever-evolving technology and regulatory landscapes make it hard to even know where to start.

The Principles for Digital Development provide helpful high level standards, and donor guidelines (such as USAID’s Responsible Data Considerations) offer additional framing. But there’s no one-size-fits-all policy or implementation plan that organizations can simply copy and paste in order to tick all the responsible data boxes. 

We don’t think organizations should do that anyway, given that each organization’s context and operating approach is different, and policy means nothing if it’s not rolled out through actual practice and behavior change!

In September, we hosted a MERL Tech pre-workshop on Operationalizing Responsible Data to discuss and share different ways of turning responsible data policy into practice. Below we’ve summarized some tips shared at the workshop. RD champions in organizations of any size can consider these when developing and implementing RD policy.

1. Understand Your Context & Extend Empathy

  • Before developing policy, conduct a non-punitive assessment (a.k.a. a landscape assessment, self-assessment, or staff research process) on existing data practices, norms, and decision-making structures. This should engage everyone who will be using or affected by the new policies and practices. Help everyone relax and feel comfortable sharing how they’ve been managing data up to now so that the organization can then improve. (Hint: avoid the term ‘audit,’ which makes everyone nervous.)
  • Create ‘safe space’ to share and learn through the assessment process:
    • Allow staff to speak anonymously about their challenges and concerns whenever possible
    • Highlight and reinforce promising existing practices
    • Involve people in a ‘self-assessment’
    • Use participatory workshops (e.g. work with a team to map a project’s data flows or conduct a Privacy Impact Assessment or a Risk-Benefits Assessment). This allows everyone who participates to gain RD awareness while learning new practical tools, and it highlights any areas that need attention. The workshop lead or “RD champion” can also then get a better sense of the wider organization’s knowledge, attitudes, and practices related to RD.
    • Acknowledge (and encourage institutional leaders to affirm) that most staff don’t have “RD expert” written into their JDs; reinforce that staff will not be ‘graded’ or evaluated on skills they weren’t hired for.
  • Identify organizational stakeholders likely to shape, implement, or own aspects of RD policy and tailor your engagement strategies to their perspectives, motivations, and concerns. Some may feel motivated financially (avoiding fines or the cost of a data breach); others may be motivated by human rights or ethics; whereas some others might be most concerned with RD with respect to reputation, trust, funding and PR.
  • Map organizational policies, major processes (like procurement, due diligence, grants management), and decision making structures to assess how RD policy can be integrated into these existing activities.

2. Consider Alternative Models to Develop RD Policy 

  • There is no ‘one size fits all’ approach to developing RD policy. As the (still small, but promising) number of organizations adopting policy grows, different approaches are emerging. Here are some that we’ve seen:
    • Top-down: An institutional-level policy is developed, normally at the request of someone on the leadership team/senior management. It is then adapted and applied across projects, offices, etc. 
      • Works best when there is strong leadership buy-in for RD policy and a focal point (e.g. an ‘Executive Sponsor’) coordinating policy formation and navigating stakeholders
    • Bottom-up: A group of staff are concerned about RD but do not have support or interest from senior leadership, so they ‘self-start’ the learning process and begin shaping their own practices, joining together, meeting, and communicating regularly until they have wider buy-in and can approach leadership with a use case and budget request for an organization-wide approach.
      • Good option if there is little buy-in at the top and you need to build a case for why RD matters.
    • Project- or Team-Generated: Development and application of RD policies are piloted within a targeted project or projects or on one team. Based on this smaller slice of the organization, the project or team documents its challenges, process, and lessons learned to build momentum for and inform the development of future organization-wide policy. 
      • Promising option when organizational awareness and buy-in for RD is still nascent and/or resources to support RD policy formation and adoption (staff, financial, etc.) are limited.
    • Hybrid approach: Organizational policy/policies are developed through pilot testing across a reasonably representative sample of projects or contexts. For example, an organization with diverse programmatic and geographical scope develops and pilots policies in a select set of country offices that can offer different learning and experiences: e.g., a humanitarian-focused setting, a development-focused setting, and a mixed setting; a small office, a medium-sized office, and a large office; 3-4 offices in different regions; offices that are funded in various ways; etc.  
      • Promising option when an organization is highly decentralized and works across diverse country contexts and settings. Supports the development of approaches that are relevant and responsive to diverse capacities and data contexts.

3. Couple Policy with Practical Tools, and Pilot Tools Early and Often

  • In order to translate policy into action, couple it with practical tools that support existing organizational practices. 
  • Make sure tools and processes empower staff to make decisions and relate clearly to policy standards or components; for example:
    • If the RD policy includes a high-level standard such as, “We ensure that our partnerships with technology companies align with our RD values,” give staff tools and guidance to assess that alignment. 
  • When developing tools and processes, involve target users early and iteratively. Don’t worry if draft tools aren’t perfectly formatted. Design with users to ensure tools are actually useful before you sink time into tools that will sit on a shelf at best, and confuse or overburden staff at worst. 

4. Integrate and “Right-Size” Solutions 

  • As RD champions, it can be tempting to approach RD policy in a silo, forgetting it is one of many organizational priorities. Be careful to integrate RD into existing processes, align RD with decision-making structures and internal culture, and do not place unrealistic burdens on staff.
  • When building tools and processes, work with stakeholders to develop responsibility assignment charts (e.g. RACI, MOCHA) and determine decision makers (see the sketch after this list).
  • When developing responsibility matrices, estimate the hours each stakeholder (including partners, vendors, and grantees) will dedicate to a particular tool or process. Work with anticipated end users to ensure that processes:
    • Can realistically be carried out within a normal workload
    • Will not excessively burden staff and partners
    • Are realistically proportionate to the size, complexity, and risk involved in a particular investment or project
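
To make this concrete, here is a minimal sketch in Python of how a RACI-style responsibility matrix with hour estimates might be tallied per stakeholder. Every role, process, and number below is hypothetical, purely to illustrate the kind of burden check described above:

```python
# Hypothetical sketch: a RACI-style matrix for responsible data (RD)
# processes, with per-stakeholder hour estimates. Illustrative only.
from collections import defaultdict

# Each RD process maps stakeholders to a RACI role and estimated hours.
raci_matrix = {
    "privacy_impact_assessment": {
        "MEL officer":      {"role": "R", "hours": 8},    # Responsible
        "Country director": {"role": "A", "hours": 1},    # Accountable
        "IT lead":          {"role": "C", "hours": 2},    # Consulted
        "Local partner":    {"role": "I", "hours": 0.5},  # Informed
    },
    "vendor_due_diligence": {
        "Procurement lead": {"role": "R", "hours": 4},
        "MEL officer":      {"role": "C", "hours": 1},
        "Country director": {"role": "A", "hours": 0.5},
    },
}

# Tally the estimated burden per stakeholder to check that new RD
# processes fit within a normal workload before rolling them out.
burden = defaultdict(float)
for process, assignments in raci_matrix.items():
    for stakeholder, info in assignments.items():
        burden[stakeholder] += info["hours"]

for stakeholder, hours in sorted(burden.items(), key=lambda kv: -kv[1]):
    print(f"{stakeholder}: ~{hours} hours per project cycle")
```

Even a rough tally like this can surface, before rollout, whether a proposed process is proportionate to the size, complexity, and risk of a project.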

5. Bridge Policy and Behavior Change through Accompaniment & Capacity Building 

  • Integrating RD policy and practices requires behavior change and can feel technically intimidating to staff. Remember to reassure staff that no one (not even the best-resourced technology firms!) has responsible data mastered, and that perfection is not the goal.
  • In order to feel confident using new tools and approaches to make decisions, staff need knowledge to analyze information. Skills and knowledge required will be different according to role, so training should be adapted accordingly. While IT staff may need to know the ins and outs of network security, general program officers certainly do not. 
  • Accompany staff as they integrate RD processes into their work. Walk alongside them, answering questions along the way, but more importantly, helping staff build confidence to develop their own internal RD compass. That way the pool of RD champions will grow!

What approaches have you seen work in your organization?

MERL Tech DC 2019 Feedback Report

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of the conference and related community are to:

  • Improve development, tech, data & MERL literacy
  • Help people find and use evidence & good practices
  • Promote ethical and appropriate use of technology
  • Build and strengthen a “MERL Tech community”
  • Spot trends and future-scope for the sector
  • Transform and modernize MERL in an intentionally responsible and inclusive way

Our sixth MERL Tech DC conference took place on September 5-6, 2019, and we held four pre-workshops on September 4. Some 350 people from 194 organizations joined us for the two days, and another 100 people attended the pre-workshops. About 56% of participants attended for the first time, whereas 44% were returnees.

Who attended?

Attendees came from a wide range of organization types and professions.

Conference Themes

The theme for this year’s conference was “Taking Stock,” and we had 4 sub-themes:

  1. Tech and Traditional MERL
  2. Data, Data, Data
  3. Emerging Approaches to MERL
  4. The Future of MERL

State of the Field Research

A small team shared their research on “The MERL Tech State of the Field,” organized into the above 4 themes. The research will be completed and shared on the MERL Tech site before the end of 2019. (We’ll be presenting it at the South African Evaluation Association Conference in October and at the American Evaluation Association conference in November.)

As always, MERL Tech conference sessions were related to: technology for MERL, MERL on ICT4D and Digital Development programs, MERL of MERL Tech, data for decision-making, ethical and responsible data approaches, and cross-disciplinary community building. (See the full agenda here.)

We checked in with participants on the last day to see how the field had shifted since 2015, when our keynote speaker (Ben Ramalingam) gave some suggestions on how tech could improve MERL.

Images: Ben’s future vision; where MERL Tech 2019 sessions fell on the expired-tired-wired schematic; and what participants would add to update the schematic for 2019 and beyond.

Diversity and Inclusion

We have been making an effort to improve diversity and inclusion at the conference and in the MERL Tech space. An unofficial estimate of speaker racial and gender diversity is below. As compared to 2018, when we first began tracking, the share of speakers who were men of color increased by 5% and women of color by 2%, while the share of white female speakers decreased by 6% and white male speakers by 1%. Our gender balance remained fairly consistent.

Where we are failing on diversity and inclusion is in having speakers and participants from outside of North America and Europe – that likely has to do with costs and visas, which affect who can attend. It also has to do with whom organizations select to represent them at MERL Tech. We’re continuing to try to find ways to collaborate with groups working on MERL Tech in different regions. We believe that new and/or historically marginalized voices should be more involved in shaping the future of the sector and the future of MERL Tech. (If you would like to support us on this or get involved, please contact Linda!)

Post Conference Feedback

Some 25% of participants filled in the post-conference survey and 85% rated their experience “good” or “awesome” (up from 70% in 2018). Answers did not significantly differ based on whether a participant had attended previously or not. Another 8.5% rated sessions via the “Sched” conference agenda app, with an average session satisfaction rating of 9.1 out of 10.

The top-rated session was on “Decolonizing Data and Technology in MERL.” As one participant said, “It shook me out of my complacency. It is very easy to think of the tech side of the work we do as ‘value free’, but this is not the case. Being a development practitioner it is important for me to think about inequality in tech and data further than just through the implementation of the projects we run.” Another noted that “As a white, gay male who has a background in international and intercultural education, it was great to see other fields bringing to light the decolonizing mindset in an interactive way. The session was enlightening and brought up conversation that is typically talked about in small groups, but now it was highlighted in front of the entire audience.”

Sign up for MERL Tech News if you’d like to read more about this and other sessions. We’re posting a series of posts and session summaries.

Key suggestions for improving next time were similar to those we hear every year: less showcasing and pitching, ensuring that titles match what is actually delivered at the session, ensuring that presenters are well-prepared, and making sessions relevant, practical, and applicable.

Additionally, several people commented that the venue had some issues with noise from conversations in the common area spilling into breakout rooms and making it hard to focus. Participants also complained that there was a large amount of trash and waste produced, and suggested more eco-friendly catering for next time.

Access the full feedback report here.

Where/when should the conference happen?

As noted, we are interested in finding a model for MERL Tech that allows for more diversity of voices and experiences, so we asked participants how often and where they thought we should do MERL Tech in the future. A plurality (44.3%) felt we should run MERL Tech in DC every two years and somewhere else in the year in between. Some 23% said to keep it in DC every year, and around 15% suggested multiple MERL Tech conferences each year in DC and elsewhere. (We were pleased that no one selected the option of “stop doing MERL Tech altogether, it’s unnecessary.”)

Given this response, we will continue exploring options for partners who would like to support financially and logistically to enable MERL Tech to happen outside of DC. Please contact Linda if you’d like to be involved or have ideas on how to make this happen.

New ways to get involved!

Last year, the idea of having a GitHub repository was raised, and this year we were excited to have GitHub join us. They had come up with the idea of creating a MERL Tech Center on GitHub as well, so it was a perfect match! More info here.

We also had a request to create a MERL Tech Slack channel (which we have done). Please get in touch with Linda by email or via Slack if you’d like to join us there for ongoing conversations on data collection, open source, technology (or other channels you request!)

As always you can also follow us on Twitter and MERL Tech News.

Blockchain: Can we talk about impact yet?

by Shailee Adinolfi, John Burg and Tara Vassefi

In September 2018, a three-member team of international development professionals presented a session called “Blockchain Learning Agenda: Practical MERL Workshop” at MERL Tech DC. Following the session, the team published a blog post stating that the authors had “… found no documentation or evidence of the results blockchain was purported to have achieved in these claims [of radical improvements]. [They] also did not find lessons learned or practical insights, as are available for other technologies in development.”

The blog post inspired a barrage of unanticipated discussion online. Unfortunately, in some cases readers (and re-posters) misinterpreted the point as disparaging blockchain. Rather, the post authors were simply suggesting ways to cope with the uncertainties of piloting blockchain projects. Perhaps the most important outcome of the session and post, however, is that they motivated a coordinated response from several organizations who wanted to delve deeper into the blockchain learning agenda.

To do that, on March 5, 2019, Chemonics, Truepic, and ConsenSys hosted a roundtable titled “How to Successfully Apply Blockchain in International Development.” All three organizations are applying blockchain in different and complementary ways relevant to international development — including project monitoring, evaluation, and learning (MEL) innovations as well as back-end business systems. The roundtable enabled an open dialogue about how blockchain is being tested and leveraged to achieve better international development outcomes. The aim was to explore and engage with real case studies of blockchain in development and share lessons learned within a community of development practitioners in order to reduce the level of opacity surrounding this innovative and rapidly evolving technology.

Three case studies were highlighted:

1. “One-click Biodata Solution” by Chemonics 

  • Chemonics’ Blockchain for Development Solutions Lab designed and implemented a RegTech solution for the USAID foreign assistance and contracting space that sought to leverage the blockchain-based identity platform created by BanQu to dramatically expedite and streamline the collection and verification of USAID biographical data sheets (biodatas), improve personal data protection, and reduce incidents of error and fraud in the hiring process for professionals and consultants hired under USAID contracts.
  • Chemonics processes several thousand biodatas per year and accordingly devotes significant labor effort and cost to support the current paper-based workflow.
  • Chemonics’ technology partner, BanQu, used a private, permissioned blockchain built on Ethereum to pilot a biodata solution.
  • Chemonics successfully piloted the solution with BanQu, resulting in 8 blockchain-based biodatas being fully processed in compliance with donor requirements.
  • Improved data protection was a priority for the pilot. One goal of the solution was to make it possible for individuals to maintain control over their back-up documentation, like passports, diplomas, and salary information, which could be shared temporarily with Chemonics through the use of an encrypted key, rather than having documentation emailed and saved to less secure corporate digital file systems.
  • Following the pilot, Chemonics determined through qualitative feedback that users across the biodata ecosystem found the blockchain solution easy to use and that it succeeded in reducing the level of effort required in the biodata completion process. 
  • Chemonics also compiled lessons-learned, including refinements to the technical requirements, options to scale the solution, and additional user feedback and concerns about the technology to inform decision-making around further biodata pilots. 

2. Project i2i presented by Consensys

  • Problem Statement: 35% of the Filipino population is unbanked, and 56% lives in rural areas. The Philippine economy relies heavily on domestic remittances. UnionBank sought to partner with hundreds of rural banks that didn’t have access to the electronic banking services that the larger commercial banks do.
  • In 2017, to continue the Central Bank of the Philippines’ national strategy for financial inclusion, the central banks of Singapore and the Philippines announced that they would collaborate on financial technology by employing the regulatory sandbox approach. This provides industry stakeholders with the room and time to experiment before regulators enact potentially restrictive policies that could stifle innovation and growth. As part of the agreement, the central banks will share resources, best practices, and research, and collaborate to “elevate financial innovation” in both economies.
  • Solution design assumptions for Philippines context:
    • It can be easily operated and implemented with limited integration, even in low-tech settings;
    • It enables lower transaction time and lower transaction cost;
    • It enables more efficient operations for rural banks, including reduction of reconciliations and simplification of accounting processes.
  • UnionBank worked with ConsenSys and participating rural banks to create an interbank ledger with tokenization (see the sketch after this list). The payment platform is private and Ethereum-based.
  • The initial pilot eliminated 20 steps from the process.
  • Technology partners: ConsenSys, Azure (Microsoft), Kaleido, Amazon Web Services.
  • In follow-up to the i2i project, UnionBank partnered with Singapore-based OCBC Bank to deploy the Adhara liquidity management and international payments platform for a blockchain-based international remittance pilot.  
  • Potential for national and regional collaboration/network development.
  • For details on the i2i project, download the full case study here or watch the 4-minute video clip.
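
The case study above does not include implementation details, so, purely to illustrate what “an interbank ledger with tokenization” means, here is a toy Python sketch. All bank names and amounts are hypothetical, and the real i2i platform (a private, Ethereum-based network) involves far more machinery:

```python
# Toy sketch of a tokenized interbank ledger. Illustrative only: it
# shows the core idea of transfers settling on one shared ledger
# instead of through multi-step manual reconciliation between banks.

class InterbankLedger:
    def __init__(self):
        self.balances = {}  # bank name -> token balance
        self.history = []   # append-only transfer log

    def issue(self, bank, amount):
        """Tokenize funds a bank has deposited with the network operator."""
        self.balances[bank] = self.balances.get(bank, 0) + amount

    def transfer(self, sender, receiver, amount):
        """Move tokens between banks; a balance check replaces reconciliation."""
        if self.balances.get(sender, 0) < amount:
            raise ValueError(f"{sender} has insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        self.history.append((sender, receiver, amount))

ledger = InterbankLedger()
ledger.issue("Rural Bank A", 1000)
ledger.transfer("Rural Bank A", "Rural Bank B", 250)
print(ledger.balances)  # {'Rural Bank A': 750, 'Rural Bank B': 250}
```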

3. Controlled Capture presented by Truepic

  • Truepic is a technology company specializing in digital image and video authentication. Truepic’s Controlled Capture technology uses cutting-edge computer vision, AI, and cryptography to test images and video for signs of manipulation, designating as authenticated only those that pass its rigorous verification tests. Through the public blockchain, Truepic creates an immutable record for each photo and video captured through this process, such that their authenticity can be proven, meeting the highest evidentiary standards. (A sketch of this hash-anchoring pattern follows this list.) This technology has been used in over 100 countries by citizen journalists, activists, international development organizations, NGOs, insurance companies, lenders, and online platforms. 
  • One of Truepic’s innovative strategic partners, the UN Capital Development Fund (another participant in the roundtable), has been testing the possibility of using this technology for monitoring and evaluation of development projects. For example, one Truepic image tracks the date, time, and geolocation of the latest progress of a factory in Uganda. 
  • Controlled Capture requires Wifi or at least 3G/4G connectivity to fully authenticate images/video and write them to the public blockchain, which can be a challenge in low-connectivity settings, for example in least-developed countries where UNCDF works. 
  • As a workaround to connectivity issues, Truepic’s partners have used satellite Internet connections, such as a Thuraya or Iridium device, to successfully capture verified images anywhere. 
  • Public blockchain – Truepic is currently using two different public blockchains, testing cost versus time in an effort to continually shorten the time from capture to closing chain of custody (currently around 8-12 seconds). 
  • Cost – The blockchain component is not actually too expensive; the heaviest investment is in the computer vision technology used to authenticate the images/video, for example to detect rebroadcasting, as in taking a picture of a picture to pass off the metadata.
  • Rights to the image remain the owner’s – Truepic does not have rights over the image/video but keeps a copy on its servers in case the user’s phone/tablet is lost, stolen, or broken, and, most importantly, so that Truepic can produce the original image on its verification page when it is shared or disseminated publicly. 
  • Court + evidentiary value: the technology and public-facing verification pages are designed to meet the highest evidentiary standards. 
    • Tested in courts; currently being tested at the international level, but specifics cannot be disclosed for confidentiality reasons.
  • Privacy and security are key priorities, especially for working in conflict zones, such as Syria. Truepic does not use 2-step authentication because the technology is focused on authenticating the images/video; it is not relevant who the source is and this way it keeps the source as anonymous as possible. Truepic works with its partners to educate on best practices to maintain high levels of anonymity in any scenario. 
  • Biggest challenge is usage by implementing partners – the tool is very easy to use; however, the behavioral change required to adopt the platform has been challenging. 
    • Other challenge: you bring the solution to an implementer, and the implementer says you have to get the donor to integrate it into their RFP scopes; then the donor recommends speaking to implementing partners. 
  • Storage capacity issues? Storage is not currently a problem; Truepic has plans in place to address any storage issues that may arise with scale. 
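
Truepic’s pipeline is proprietary, but the hash-anchoring pattern it relies on is straightforward to illustrate: compute a digest of the image and its capture metadata, chain each record to the previous one, and anchor the resulting hash somewhere it cannot be altered. The sketch below is a hypothetical simplification in Python (local chaining standing in for a public blockchain write), not Truepic’s code:

```python
# Illustrative sketch of hash-anchoring for image authenticity.
# Not Truepic's actual implementation.
import hashlib
import json
import time

def make_record(image_bytes, metadata, prev_hash):
    """Bind an image digest and its capture metadata to the previous record."""
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,    # e.g. date, time, geolocation
        "timestamp": time.time(),
        "prev_hash": prev_hash,  # links records into a tamper-evident chain
    }
    # Hash the whole record so any later edit to image or metadata is detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

# On a public blockchain, record_hash is what would be anchored; anyone
# can later re-hash the image and metadata to verify nothing changed.
genesis = make_record(b"<image bytes 1>", {"geo": "0.35N, 32.58E"}, "0" * 64)
second = make_record(b"<image bytes 2>", {"geo": "0.35N, 32.58E"}, genesis["record_hash"])
print(second["prev_hash"] == genesis["record_hash"])  # True
```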

How did implementers measure success in their blockchain pilots?

  • Measurement was both quantitative and qualitative 
  • The organizations worked with clients to ensure people who needed the MEL were able to access and use it
  • Concerns with publicizing information or difficulties with NDAs were handled on a case-by-case basis

The MEL space is an excellent place to have a conversation about the use of blockchain for international development – many aspects of MEL hinge on the need for immutability (in record keeping), transparency (in the expenditure and impact of funds) and security (in the data and the identities of implementers and beneficiaries). Many use cases in developing countries and for social impact have been documented (see Stanford report Blockchain for Social Impact, Moving Beyond the Hype). (Editor’s note: see also Blockchain and Distributed Ledger Technologies in the Humanitarian Sector and Distributed Ledger Identification Systems in the Humanitarian Sector).

The original search for evidence on the impact of blockchain sought a level of data fidelity that is difficult to capture and validate, even under the least challenging circumstances. Not finding it at that time, the research team sought the next best solution, which was not to discount the technology, but to suggest ways to cope with the knowledge gaps they encountered by recommending a learning agenda. The roundtable helped to stimulate robust conversation of the three case studies, contributing to that learning agenda.

Most importantly, the experience highlighted several interesting takeaways about innovation in public-private partnerships more broadly: 

  • The initial MERL Tech session publicly and transparently drew attention to the gaps that were identified from the researchers’ thirty-thousand-foot view of evaluating innovation. 
  • This transparency drew out engagement and collaboration between and amongst those best-positioned to move quickly and calibrate effectively with the government’s needs: the private sector. 
  • This small discussion that focused on the utility and promise of blockchain highlighted the broader role of government (as funder/buyer/donor) in both providing the problem statement and anchoring the non-governmental, private sector, and civil society’s strengths and capabilities. 

One year later…

So, a year after the much-debated blockchain blogpost, what has changed? A lot. There is a growing body of reporting that adds to the lessons learned literature and practical insights from projects that were powered or supported by blockchain technology. The question remains: do we have any greater documentation or evidence of the results blockchain was purported to have achieved in these claims? It seems that while reporting has improved, it still has a long way to go. 

It’s worth pointing out that the international development industry, with far more experts and funding dedicated to improving MERL than emerging tech companies have, also has some distance to go in meeting its own evidence standards. Fortunately, the volume and frequency of hype seem to have decreased (or perhaps the news cycle has simply moved on?), thereby leaving blockchain (and its investors and developers) the space they need to refine the technology.

In closing, we, like the co-authors of the 2018 post, remain optimistic that blockchain, a still emerging technology, will be given the time and space needed to mature and prove its potential. And, whether you believe in “crypto-winter” or not, hopefully the lull in the hype cycle will prove to be the breathing space that blockchain needs to keep evolving in a productive direction.

Author Bios

Shailee Adinolfi: Shailee works on Public Sector solutions at ConsenSys, a global blockchain technology company building the infrastructure, applications, and practices that enable a decentralized world. She has 20 years of experience at the intersection of technology, financial inclusion, trade, and government, including 11 years on USAID funded projects in Africa, Asia and the Middle East.

John Burg: John was a co-author on the original MERL Tech DC 2018 blog, referenced in this blog. He is an international development professional with almost 20 years of cross-sectoral experience across 17 countries in six global regions. He enjoys following the impact of emerging technology in international development contexts.

Tara Vassefi: Tara is Truepic’s Washington Director of Strategic Initiatives. Her background is as a human rights lawyer where she worked on optimizing the use of digital evidence and understanding how the latest technologies are used and weighed in courts around the world. 

Four Reflections on the 2019 MERL Tech Dashboards Competition

by Amanda Makulec, Excella Labs. This post first appeared here.

Data visualization (viz) has come a long way in our MERL Tech community. Four years ago the conversation was around “so you think you want a dashboard?” which evolved to a debate on dashboards as the silver bullet solution (spoiler: they’re not). Fast forward to 2019, when we had the first plenary competition of dashboard designs on the main stage!

Wayan Vota and Linda Raftree, MERL Tech Organizers, were kind enough to invite me to be a judge for the dashboard competition. Let me say: judging is far less stressful than presenting. Having spoken at MERL Tech every year on a data viz topic since 2015, it felt novel to not be frantically reviewing slides the morning of the conference.

The competition sparked some reflections on how we’ve grown and where we can continue to improve as we use data visualization as one item in our MERL toolbox.

1 – We’ve moved beyond conversations about ‘pretty’ and are talking about how people use our dashboards.

Thankfully, our judging criteria and final selection were not limited to which dashboard was the most beautiful. Instead, we focused on the goal, how the data was structured, why the design was chosen, and the impact it created.

One of the best stories from the stage came from DAI’s Carmen Tedesco (one of three competition winners), who demoed a highly visual interface that even included custom spatial displays of how safe girls felt in different locations throughout a school. When the team demoed the dashboard to their Chief of Party, he was underwhelmed… because he was colorblind and couldn’t make sense of many of the visuals. They pivoted, added more tabular, text-focused, grayscale views, and the team was thrilled.

Carmen Tedesco presents a dashboard used by a USAID-funded education project in Honduras. Image from Siobhan Green: https://twitter.com/siobhangreen/status/1169675846761758724

Having a competition judged on impact, not just display, matters. What gets measured gets done, right? We need to reward and encourage the design and development of data visualization that has a purpose and helps someone do something – whether it’s raising awareness, making a decision, or something else – not just creating charts for the sake of telling a donor that we have a dashboard.

2 – Our conversations about data visualization need to be anchored in larger dialogues about data culture and data literacy.

We need to continue to move beyond talking about what we’re building and focus on for whom, why, and what else is needed for the visualizations to be used.

Creating a “data culture” on a small project team is complicated. In a large global organization or slow-to-change government agency, it can feel impossible. Making data visual, nurturing that skillset within a team, and building a culture of data visualization is one part of the puzzle, but we need champions outside of the data and M&E (monitoring and evaluation) teams who support that organizational change. A Thursday morning MERL Tech session dug into eight dimensions of data readiness, all of which are critical to having dashboards actually get used – learn more about this work here.

Village Enterprise’s winning dashboard was simple in design, constructed of various bar charts on enterprise performance, but was tailored to different user roles to create customized displays. By serving up the data filtered to a specific user level, they encourage adoption and use instead of requiring a heavy mental load from users to filter to what they need.
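
This pattern of serving pre-filtered views of a single dataset is easy to prototype before committing to a BI tool. Here is a minimal sketch using pandas, with hypothetical roles, columns, and numbers rather than Village Enterprise’s actual data model:

```python
# Hypothetical sketch of role-based dashboard filtering: one dataset,
# one pre-filtered view per user role, so users never filter by hand.
import pandas as pd

data = pd.DataFrame({
    "region":   ["North", "North", "South", "South"],
    "business": ["B1", "B2", "B3", "B4"],
    "revenue":  [120, 95, 140, 60],
})

# Map each user role to the slice of the data that role should see.
role_scope = {
    "field_officer_north": data["region"] == "North",
    "field_officer_south": data["region"] == "South",
    "program_manager":     pd.Series(True, index=data.index),  # sees everything
}

def view_for(role):
    """Return the pre-filtered view for a role."""
    return data[role_scope[role]]

print(view_for("field_officer_north"))
```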

3 – Our data dashboards look far more diverse in scope, purpose, and design than the cluttered widgets of early days.

The three winners we picked were diverse in their project goals and displays, including a complex map, a PowerBI project dashboard, and a simple interface of bar charts designed for various user levels on local enterprise success metrics.

One of the winners – Fraym – was a complex, interactive map display allowing users to zoom in to the square kilometer level. Layers for various metrics, from energy to health, can be turned on or off depending on the use case. Huge volumes of data had to be ingested, including both spatial and quantitative datasets, to make the UI possible.

In contrast, the People’s Choice winner wasn’t a quantitative dashboard of charts and maps. Matter of Focus’ OutNav tool instead makes visual the certainty around elements of a theory of change, uses visual encodings in the form of color, saturation, and layout within a workflow, and helps organizations show where they’ve contributed to change.

Seeing the diversity of displays, I’m hopeful that we’re moving away from one-size-fits-all solutions or reliance on a single tech stack (whether Excel, Tableau, PowerBI or something else) and continuing to focus more on crafting products that solve problems for someone, which may require us to continue to expand our horizons regarding the tools and designs we use.

4 – Design still matters though, and data and design nerds should collaborate more often.

That said, there remain huge opportunities for more design in our data displays. Last year, I gave a MERL Tech lightning talk on why no one is using your dashboard, which focused on the need for more integration of design principles in our data visualization development, and those principles still resonate today.

Principles from graphic design, UX, and other disciplines can take a specific visualization from good to great – the more data nerds and designers collaborate, the better (IMHO). Otherwise, we’ll continue the epidemic of dashboards, many of which are tools designed to do ALL THE THINGS without being tailored enough to be usable by the most important audiences.

An invitation to join the Data Viz Society

If you’re interested in more discourse around data viz, consider joining the Data Viz Society (DVS) and connect with more than 8,000 members from around the globe (it’s free!) who have joined since we launched in February.

DVS connects visualization enthusiasts across disciplines, tech stacks, and expertise, and aims to collect and establish best practices, fostering a community that supports members as they grow and develop data visualization skills.

We (I’m the volunteer Operations Director) have a vibrant Slack workspace packed with topic and location channels (you’ll get an invite when you join), two-week long moderated Topics in DataViz conversations, data viz challenges, our journal (Nightingale), and more.

More on ways to get involved in this thread – including our data viz practitioner survey results challenge closing 30 September 2019 that has some fabulous cash prizes for your data viz submissions!

We’re actively looking for more diversity in our geographic representation, and would particularly welcome voices from countries outside of North America. A recent conversation about data viz in LMICs (low and middle income countries) was primarily voices from headquarters staff – we’d love to hear more from the field.

I can’t wait to see what the data viz conversations are at MERL Tech 2020!

Wrapping up MERL Tech DC

On September 6, we wrapped up three days of learning, reflecting, debating, and sharing at MERL Tech DC. The conference kicked off with four pre-workshops on September 4: Big Data and Evaluation; Text Analytics; Spatial Statistics; and Responsible Data. Then, on September 5-6, we had our regular two-day conference, including opening talks from Tariq Khokhar, The Rockefeller Foundation, and Yvonne MacPherson, BBC Media Action; one-hour sessions; two-hour sessions; lightning talks; a dashboard contest; a plenary session; and two happy hour events.

This year’s theme was “The State of the Field” of MERL Tech and we aimed to explore what we as a field know about our work and what gaps remain in the evidence base. Conference strands included: Tech and Traditional MERL; Data, Data, Data; Emerging Approaches; and The Future of MERL.

Zach Tilton, Western Michigan University; Kerry Bruce, Clear Outcomes; and Alexandra Robinson, Moonshot Global; update the plenary on the State of the Field research that MERL Tech has undertaken over the past year. Photo by Christopher Neu.
Tariq Khokhar of The Rockefeller Foundation on “What Next for Data Science in Development?” Photo by Christopher Neu.
Participants checking out what session to attend next. Photo by Christopher Neu.
Silvia Salinas, Strategic Director FuturaLab; Veronica Olazabal, The Rockefeller Foundation; and Adeline Sibanda, South to South Evaluation Cooperation; talk in plenary about Decolonizing Data and Technology, whether we are designing evaluations within a colonial mindset, and the need to disrupt our own minds. Photo by Christopher Neu.
What is holding women back from embracing tech in development? Patricia Mechael, HealthEnabled; Carmen Tedesco, DAI; Jaclyn Carlsen, USAID; and Priyanka Pathak, Samaj Studio; speak at their “Confidence not Competence” panel on women in tech. Photo by Christopher Neu.
Reid Porter, DevResults; Vidya Mahadevan, Bluesquare; Christopher Robert, Dobility; and Sherri Haas, Management Sciences for Health; go beyond “Make versus Buy” in a discussion on how to bridge the MERL – Tech gap. Photo by Christopher Neu.
Participants had plenty of comments and questions as well. Photo by Christopher Neu.
Drones, machine learning, text analytics, and more. Ariel Frankel, Clear Outcomes, facilitates a group in the session on Emerging MERL approaches. Photo by Christopher Neu.
The Principles for Digital Development have been heavily adopted by the MERL Tech sector as a standard for Digital Development. Allana Nelson, DIAL, shares thoughts on how the Principles can be used as an evaluative tool. Photo by Christopher Neu.
Kate Scaife Diaz, TechnoServe, explains the “Marie Kondo” approach to MERL Tech in her Lightning Talk: “Does Your Tech Spark Joy? The Minimalist’s Approach to MERL Tech.” Photo by Christopher Neu.

In addition to learning and sharing, one of our main goals at MERL Tech is to create community. “I didn’t know there were other people working on the same thing as I am!” and “This MERL Tech conference is like therapy!” were some of the things we heard on Friday night as we closed down.

Stay tuned for blog posts about sessions and overall impressions, as well as our conference report once feedback surveys are in!

MERL Tech DC Session Ideas are due Monday, April 22!

MERL Tech is coming up in September 2019, and there are only a few days left to get your session ideas in for consideration! Every session is facilitated by practitioners, so we’re actively seeking people working in monitoring, evaluation, research, learning, data science, technology, and other related areas to lead sessions.

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. Submit your session ideas by midnight ET on April 22, 2019. You will hear back from us by May 20 and, if selected, you will be asked to submit the final session title, summary and outline by June 17.

Submit your session ideas here by midnight ET on April 22.

This year’s conference theme is “MERL Tech: Taking Stock”

Conference strands include:

Tech and traditional MERL: How is digital technology enabling us to do what we’ve always done, but better (consultation, design, community engagement, data collection and analysis, databases, feedback, knowledge management)? What case studies can be shared to help the wider sector learn and grow? What kinks do we still need to work out? What evidence base exists to help us identify good practices? What lessons have we learned? How can we share these lessons and/or skills with the wider community?

Data, data, and more data: How are new forms and sources of data allowing MERL practitioners to enhance their work? How are MERL practitioners using online platforms, big data, digitized administrative data, artificial intelligence, machine learning, sensors, and drones? What does that mean for the ways that we conduct MERL and for who conducts MERL? What concerns are there about how these new forms and sources of data are being used, and how can we address them? What evidence shows that these new forms and sources of data are improving MERL (or not)? What good practices can inform how we use new forms and sources of data? What skills can be strengthened and shared with the wider MERL community to achieve more with data?

Emerging tools and approaches: What can we do now that we’ve never done before? What new tools and approaches are enabling MERL practitioners to go the extra mile? Is there a use case for blockchain? What about facial recognition and sentiment analysis in MERL (a toy sentiment-analysis sketch follows this list)? What are the capabilities of these tools and approaches? What early cases or evidence indicate their promise? What ideas are taking shape that should be tried and tested in the sector? What skills can be shared to enable others to explore these tools and approaches? What are the ethical implications of some of these emerging technological capabilities?

The Future of MERL: Where should we be going, and what should the future of MERL look like? What does the state of the sector, of digital data, of technology, and of the world in which we live mean for an ideal future for the MERL sector? Where do we need to build stronger bridges for improved MERL? How should we partner, and with whom? Where should investments be made to enhance MERL practices, skills and capacities? How will we continue to improve local ownership, diversity, inclusion and ethics in technology-enabled MERL? What wider changes need to happen in the sector to enable responsible, effective, inclusive and modern MERL?
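To make the sentiment-analysis question above a bit more concrete, here is a minimal sketch, in Python, of scoring open-ended participant feedback with the open-source NLTK library’s VADER model. The feedback strings are invented for illustration, and an off-the-shelf English lexicon like this one would need careful validation before informing any real evaluation.

```python
# A toy illustration of sentiment analysis on participant feedback.
# The feedback strings below are invented for illustration only.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the VADER lexicon

feedback = [
    "The training completely changed how our cooperative keeps records.",
    "Sessions started late and the materials were hard to follow.",
]

analyzer = SentimentIntensityAnalyzer()
for text in feedback:
    scores = analyzer.polarity_scores(text)  # neg/neu/pos plus a compound score
    if scores["compound"] >= 0.05:
        label = "positive"
    elif scores["compound"] <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:8} {scores['compound']:+.2f}  {text}")
```

Even a simple pass like this surfaces the questions this strand raises: whose language is the lexicon built on, and what does a “negative” score actually mean in a given program context?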

Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines. Please consider these in your session proposals and in how you choose your speakers and facilitators.

Submit your session ideas now!

MERL Tech is dedicated to creating a safe, inclusive, welcoming and harassment-free experience for everyone. Please review our Code of Conduct. Session submissions are reviewed and selected by our steering committee.

Join us for MERL Tech DC, September 5-6!

MERL Tech DC: Taking Stock

September 5-6, 2019

FHI 360 Academy Hall, 8th Floor
1825 Connecticut Avenue NW
Washington, DC 20009

We gathered at the first MERL Tech Conference in 2014 to discuss how technology was enabling the field of monitoring, evaluation, research and learning (MERL). Since then, rapid advances in technology and data have altered how most MERL practitioners conceive of and carry out their work. New media and ICTs have permeated the field to the point where most of us can’t imagine conducting MERL without the aid of digital devices and digital data.

The rosy picture of the digital data revolution and an expanded capacity for decision-making based on digital data and ICTs has been clouded, however, by legitimate questions about how new technologies, devices, and platforms — and the data they generate — can lead to unintended negative consequences or be used to harm individuals, groups and societies.

Join us in Washington, DC, on September 5-6 for this year’s MERL Tech Conference where we’ll be taking stock of changes in the space since 2014; showcasing promising technologies, ideas and case studies; sharing learning and challenges; debating ideas and approaches; and sketching out a vision for an ideal MERL future and the steps we need to take to get there.

Conference strands:

Tech and traditional MERL: How is digital technology enabling us to do what we’ve always done, but better (consultation, design, community engagement, data collection and analysis, databases, feedback, knowledge management)? What case studies can be shared to help the wider sector learn and grow? What kinks do we still need to work out? What evidence base exists to help us identify good practices? What lessons have we learned? How can we share these lessons and/or skills with the wider community?

Data, data, and more data: How are new forms and sources of data allowing MERL practitioners to enhance their work? How are MERL practitioners using online platforms, big data, digitized administrative data, artificial intelligence, machine learning, sensors, and drones? (A toy sketch of summarizing digitized administrative data follows this list.) What does that mean for the ways that we conduct MERL and for who conducts MERL? What concerns are there about how these new forms and sources of data are being used, and how can we address them? What evidence shows that these new forms and sources of data are improving MERL (or not)? What good practices can inform how we use new forms and sources of data? What skills can be strengthened and shared with the wider MERL community to achieve more with data?

Emerging tools and approaches: What can we do now that we’ve never done before? What new tools and approaches are enabling MERL practitioners to go the extra mile? Is there a use case for blockchain? What about facial recognition and sentiment analysis in MERL? What are the capabilities of these tools and approaches? What early cases or evidence indicate their promise? What ideas are taking shape that should be tried and tested in the sector? What skills can be shared to enable others to explore these tools and approaches? What are the ethical implications of some of these emerging technological capabilities?

The Future of MERL: Where should we be going, and what should the future of MERL look like? What does the state of the sector, of digital data, of technology, and of the world in which we live mean for an ideal future for the MERL sector? Where do we need to build stronger bridges for improved MERL? How should we partner, and with whom? Where should investments be made to enhance MERL practices, skills and capacities? How will we continue to improve local ownership, diversity, inclusion and ethics in technology-enabled MERL? What wider changes need to happen in the sector to enable responsible, effective, inclusive and modern MERL?
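As a small, hypothetical illustration of the digitized-administrative-data question above, here is a sketch in Python using the pandas library. The clinic-visit records and column names are invented for illustration, not drawn from any real program.

```python
# A hypothetical sketch of rolling up digitized administrative records
# (e.g., clinic visit logs) for routine monitoring. All records and
# column names below are invented.
import pandas as pd

visits = pd.DataFrame(
    {
        "district": ["North", "North", "South", "South", "South"],
        "month": ["2019-07", "2019-08", "2019-07", "2019-08", "2019-08"],
        "visits": [120, 135, 90, 88, 40],
    }
)

# Monthly totals per district: the kind of quick summary that digitized
# records allow without manual tallying.
summary = visits.groupby(["district", "month"], as_index=False)["visits"].sum()
print(summary)
```

Of course, the interesting MERL questions start after the roll-up: who owns these records, how complete are they, and which decisions should they actually inform?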

Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines.

Submit your session ideas, register to attend the conference, or reserve a demo table for MERL Tech DC now!

You’ll join some of the brightest minds working on MERL across a wide range of disciplines – evaluators, development and humanitarian MERL practitioners, small and large non-profit organizations, governments and foundations, data scientists and analysts, consulting firms and contractors, technology developers, and data ethicists – for two days of in-depth sharing and exploration of what’s been happening across this multidisciplinary field and where we should be heading.