MERL Tech News

MERL Tech DC 2019 Feedback Report

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of the conference and related community are to:

  • Improve development, tech, data & MERL literacy
  • Help people find and use evidence & good practices
  • Promote ethical and appropriate use of technology
  • Build and strengthen a “MERL Tech community”
  • Spot trends and future-scope for the sector
  • Transform and modernize MERL in an intentionally responsible and inclusive way

Our sixth MERL Tech DC conference took place on September 5-6, 2019, and we held four pre-workshops on September 4. Some 350 people from 194 organizations joined us for the two-day conference, and another 100 people attended the pre-workshops. About 56% of participants attended for the first time, while 44% were returnees.

Who attended?

Attendees came from a wide range of organization types and professions.

Conference Themes

The theme for this year’s conference was “Taking Stock” and we had 4 sub-themes:

  1. Tech and Traditional MERL
  2. Data, Data, Data
  3. Emerging Approaches to MERL
  4. The Future of MERL

State of the Field Research

A small team shared their research on “The MERL Tech State of the Field,” organized into the four themes above. The research will be completed and shared on the MERL Tech site before the end of 2019. (We’ll be presenting it at the South African Evaluation Association Conference in October and at the American Evaluation Association conference in November.)

As always, MERL Tech conference sessions related to: technology for MERL, MERL on ICT4D and Digital Development programs, MERL of MERL Tech, data for decision-making, ethical and responsible data approaches, and cross-disciplinary community building. (See the full agenda here.)

We checked in with participants on the last day to see how the field had shifted since 2015, when our keynote speaker (Ben Ramalingam) gave some suggestions on how tech could improve MERL.

(Images from the check-in: Ben’s future vision; where MERL Tech 2019 sessions fell on the expired-tired-wired schematic; and what participants would add to the schematic to update it for 2019 and beyond.)

Diversity and Inclusion

We have been making an effort to improve diversity and inclusion at the conference and in the MERL Tech space. An unofficial estimate of speaker racial and gender diversity is below. Compared to 2018, when we first began tracking, the number of women of color speakers increased by 5% and men of color by 2%. The number of white female speakers decreased by 6% and the number of white male speakers went down by 1%. Our gender balance remained fairly consistent.

Where we are still falling short on diversity and inclusion is in attracting speakers and participants from outside of North America and Europe – that likely has to do with cost and visas, which affect who can attend. It also has to do with whom organizations select to represent them at MERL Tech. We’re continuing to try to find ways to collaborate with groups working on MERL Tech in different regions. We believe that new and/or historically marginalized voices should be more involved in shaping the future of the sector and the future of MERL Tech. (If you would like to support us on this or get involved, please contact Linda!)

Post Conference Feedback

Some 25% of participants filled in the post-conference survey and 85% rated their experience “good” or “awesome” (up from 70% in 2018). Answers did not significantly differ based on whether a participant had attended previously or not. Another 8.5% rated sessions via the “Sched” conference agenda app, with an average session satisfaction rating of 9.1 out of 10.

The top rated session was on “Decolonizing Data and Technology in MERL.” As one participant said, “It shook me out of my complacency. It is very easy to think of the tech side of the work we do as ‘value free’, but this is not the case. Being a development practitioner it is important for me to think about inequality in tech and data further than just through the implementation of the projects we run.” Another noted that “As a white, gay male who has a background in international and intercultural education, it was great to see other fields bringing to light the decolonizing mindset in an interactive way. The session was enlightening and brought up conversation that is typically talked about in small groups, but now it was highlighted in front of the entire audience.”

Sign up for MERL Tech News if you’d like to read more about this and other sessions. We’re posting a series of posts and session summaries.

Key suggestions for improving next time were similar to those we hear every year: less showcasing and pitching, ensuring that titles match what is actually delivered in the session, ensuring that presenters are well prepared, and making sessions relevant, practical, and applicable.

Additionally, several people commented that the venue had some issues with noise from conversations in the common area spilling into breakout rooms and making it hard to focus. Participants also complained that there was a large amount of trash and waste produced, and suggested more eco-friendly catering for next time.

Access the full feedback report here.

Where/when should the conference happen?

As noted, we are interested in finding a model for MERL Tech that allows for more diversity of voices and experiences, so we asked participants how often and where they thought we should do MERL Tech in the future. A plurality (44.3%) felt we should run MERL Tech in DC every two years and somewhere else in the year in between. Some 23% said to keep it in DC every year, and around 15% suggested multiple MERL Tech conferences each year in DC and elsewhere. (We were pleased that no one selected the option of “stop doing MERL Tech altogether, it’s unnecessary.”)

Given this response, we will continue exploring options for partnering with organizations that would like to provide financial and logistical support to enable MERL Tech to happen outside of DC. Please contact Linda if you’d like to be involved or have ideas on how to make this happen.

New ways to get involved!

Last year, the idea of having a GitHub repository was raised, and this year we were excited to have GitHub join us. They had come up with the idea of creating a MERL Tech Center on GitHub as well, so it was a perfect match! More info here.

We also had a request to create a MERL Tech Slack channel (which we have done). Please get in touch with Linda by email or via Slack if you’d like to join us there for ongoing conversations on data collection, open source, technology, or other channels you request!

As always you can also follow us on Twitter and MERL Tech News.

Designing a MERL GitHub “Center”

by Mala Kumar, GitHub Open Source for Good program

Some Context

My name is Mala, and I lead a program at GitHub called Open Source for Good under our Social Impact team. Before joining GitHub, I spent the better part of a decade wandering around the world designing, managing, implementing and deploying tech for international development (ICT4D) software products. Throughout my career, I was told repeatedly that open source (OS) would revolutionize the ICT4D industry. While I have indeed worked on a few interesting OS products, I began suspecting that statement was more complicated than had been presented.

Indeed, after joining GitHub this past April, I confirmed my suspicion. Overall, the adoption of OS in the social sector – defined as the collection of entities that positively advance or promote human rights – lags far behind the commercial, private sector. Why, you may ask?

Here’s one hypothesis we have at GitHub:

After our team’s many years of experience working in the social sector and through the hundreds of conversations we’ve had with fellow social sector actors, we’ve come to believe that IT teams in the social sector have significantly less decision making power and autonomy than commercial, private sector IT teams. This is irrespective of the size, the geographic location, or even the core mission of the organization or company.

In other words, decision-making power in the social sector does not lie with the techies who typically have the best understanding of the technology landscape. Rather, it’s non-techies who tend to make an organization’s IT budgetary decisions. Consequently, when budgetary decision-makers come to GitHub to assess OS tools and see a raw GitHub repo page, they have no idea what they’re looking at. And this is a problem for the sector at large.

We want to help bridge that gap between private sector and social sector tech development. The social sector is quite large, however, so we’ve had to narrow our focus. We’ve decided to target the social sector’s M&E vertical. This is for several reasons:

  • M&E as a discipline is growing in the social sector
  • Increasingly more M&E data is being collected digitally
  • It’s cross-sectoral
  • It’s easy to identify a target audience
  • Linda is great. ☺

How We Hope to Help

Our basic idea is to build a middle “layer” between a GitHub repo and a decision maker’s final budget. I’m calling that a MERL GitHub “Center” until I can come up with a better name. 

As a sponsor of MERL Tech DC 2019, we set up our booth smack dab in front of the food and near the coffee, and we took advantage of this prime real estate to learn more about what our potential users would find valuable. 

We spent two days talking with as many MERL conference attendees as we could and asked them to complete some exercises. One such exercise was to prioritize the possible features a MERL GitHub Center might have. We’ve summarized the results in the chart below. The top-right quadrant captures two kinds of features: 1) those most commonly sorted as helpful in using open source, and 2) those potential Center users said they would actually use. From this exercise, we’ve learned that our minimum viable product (MVP) should include all or some of the following:

  1. Use case studies of open source tools
  2. Description of listed tools
  3. Tags/categories
  4. A way to search in the Center
  5. Security assessments of the tools
  6. Beginner’s Guide to Open Source for the Social Sector
  7. Installation guides for listed tools

(Chart: aggregation of the prioritization exercise from ~10 participants.)

We also spoke to an additional 30+ attendees about the OS tools they currently use. Anecdotally, mobile data collection, GIS, and data visualization were the most common use cases. A few tools are built on or with DHIS2. Many attendees we spoke with are data scientists using R and Python notebooks. DFID and GIZ were mentioned as two large donor organizations that are thinking about OS for MERL funding.

What’s Next

In the coming weeks, we’re going to reach out to many of the attendees we spoke to at MERL Tech to conduct user testing for our forthcoming Center prototype. In the spirit of open source and not duplicating work, we are also speaking with a few potential partners working on different angles to our problem to align our efforts. It’s our hope to build out new tools and product features that will help the MERL community better use and develop OS tools.

How Can You Get Involved?

Email malakumar85@github.com with a brief intro to you and your work in OS for social good.

Confidence Not Competence: What’s Holding Women Back from Embracing Tech in Development

by Carmen Tedesco, DAI. Originally published here.

Throughout my life, I’ve heard women grumble about using technology—from my mom, from friends in school, and from work colleagues—yet these are highly educated, often extremely logical thinkers who excel at, well, Excel!

The irony of the situation has been troubling me in the past few months. Why? Because there is a clear contrast between the attention paid to the benefits of empowering women and girls through technology in low- and middle-income countries and the attention paid to empowering women and girls through technology in high-income environments.

As an international development community, we spend a lot of resources promoting the use of technology among women and girls within the communities where we work—with good results. And yet, as a community of women development practitioners, we are failing at embracing technology ourselves. The gender gap in science, technology, engineering, and mathematics (STEM) exists around the world, and society continues to fail women and girls by not expecting them to know much about technical matters. This plays out in our day-to-day work in the monitoring, evaluation, research, and learning (MERL) sector. Whether it’s learning new software to improve our results monitoring or using new mobile tools in the field, there seems to be a hesitance, and lack of confidence, often accompanied by a self-deprecation that our male counterparts lack.

What is holding back women from embracing technology in our own work, even as we tout it for others in the field? These questions motivated me to take the topic to a broader audience at the recent MERLTech Conference in Washington, D.C.

Panelists discuss their own experiences as women working in the tech space. From left to right: Dr. Patty Mechael (Co-founder and Policy Lead, HealthEnabled), Carmen Tedesco (author), Jaclyn Carlsen (Policy Advisor, Development Informatics team, USAID), and Priyanka Pathak (Principal, Samaj Studio).

But first, a bit of history.

How Did We Get Here?

In her article from the Center for Media Literacy, Margaret Benston explains: “In our society, boys and men are expected to learn about machines, tools and how things work. In addition, they absorb, ideally, a ‘technological world view’ that grew up along with industrial society. Such a world view emphasizes objectivity, rationality, control over nature, and distance from human emotions. Conversely, girls and women are not expected to know much about technical matters. Instead, they are to be good at interpersonal relationships and to focus on people and emotion.”

She goes on to outline how those differences play out when technology is seen as a language, and one in which women “are silenced.” She writes: “It is very difficult for women to discuss technical problems, particularly experimental ones, with male peers—they either condescend or they want to perform whatever task is at issue themselves. In either case, asking a question or raising a problem in discussion is proof (if any is needed) that women don’t know what they are doing. Male students, needless to say, do not get this treatment.” An interesting literature review of gender differences in technology usage highlights a 2003 study that details how women are more anxious than men with IT utilization, which reduces their self-effectiveness and increases the perception that IT requires more effort.

I organized a panel at MERLTech, where we discussed our experiences as women in tech working in monitoring, evaluation, and learning (MEL), some of the data behind the gender gap in STEM, and why women struggle to embrace technology.

So many conference attendees echoed the above findings, mentioning that tech savvy is seen as smart, but smart is not seen as feminine. There is also a widespread misconception among women about what technology is. “Imposter syndrome,” or a fear of failure, has a real impact on the women in our lives, and men’s reactions to women’s discomfort with tech often compound it through mocking or dismissal, making many women even more hesitant to engage.

How Can We Fix This?

The Global Fund for Women states, “Access to technology, control of it, and the ability to create and shape it, is a fundamental issue of women’s human rights.” The Fund acts on this by “help[ing] end the gender technology gap and empower[ing] women and girls to create innovative solutions to advance equality in their communities.”

Based on our discussion, here are five tips to help bridge the technology gender divide within our own field.

  1. Be, or find, a mentor. Women will benefit from mentors and allies in this space, whether you plan to go into a tech field, or just want to ask a question without fear of looking uninformed.
  2. Become a role model where you can. Find allies, men and women, to help you build confidence.
  3. Increase representation. When women can be brought to the table in discussions of tech, they should be. Slowly, this will permeate the culture of the organization. Having more women involved in the process of explaining and building tech in our companies will normalize the use of tech and take away some of the gendered dynamics that exist now.
  4. Confront bias head-on. Addressing gender assumptions when they occur can be hard but pointing out the bias is not enough. Countering the action with a specific recommendation for course correction works best.
  5. Build confidence. Personal development can play a role in building confidence, as can many of the points listed above. Confidence is the foundation for competence.

Both men and women should be aware of the history and social context behind women’s hesitation in the technology space. It is in all our best interests to be aware of this bias and find ways to help correct it. In taking some of these small steps, we can pave the way for increased confidence in the tech space for women.

Blockchain: Can we talk about impact yet?

by Shailee Adinolfi, John Burg and Tara Vassefi

In September 2018, a three-member team of international development professionals presented a session called “Blockchain Learning Agenda: Practical MERL Workshop” at MERL Tech DC. Following the session, the team published a blog post stating that they had “… found no documentation or evidence of the results blockchain was purported to have achieved in these claims [of radical improvements]. [They] also did not find lessons learned or practical insights, as are available for other technologies in development.”

The blog post inspired a barrage of unanticipated discussion online. Unfortunately, in some cases readers (and re-posters) misinterpreted the point as disparaging of blockchain. Rather, the post authors were simply asserting ways to cope with uncertain situations related to piloting blockchain projects. Perhaps the most important outcome of the session and post, however, is that they motivated a coordinated response from several organizations who wanted to delve deeper into the blockchain learning agenda.

To do that, on March 5, 2019, Chemonics, Truepic, and ConsenSys hosted a roundtable titled “How to Successfully Apply Blockchain in International Development.” All three organizations are applying blockchain in different and complementary ways relevant to international development — including project monitoring, evaluation, and learning (MEL) innovations as well as back-end business systems. The roundtable enabled an open dialogue about how blockchain is being tested and leveraged to achieve better international development outcomes. The aim was to explore and engage with real case studies of blockchain in development and share lessons learned within a community of development practitioners in order to reduce the level of opacity surrounding this innovative and rapidly evolving technology.

Three case studies were highlighted:

1. “One-click Biodata Solution” by Chemonics 

  • Chemonics’ Blockchain for Development Solutions Lab designed and implemented a RegTech solution for the USAID foreign assistance and contracting space that sought to leverage the blockchain-based identity platform created by BanQu to dramatically expedite and streamline the collection and verification of USAID biographical data sheets (biodatas), improve personal data protection, and reduce incidents of error and fraud in the hiring process for professionals and consultants hired under USAID contracts.
  • Chemonics processes several thousand biodatas per year and accordingly devotes significant labor effort and cost to support the current paper-based workflow.
  • Chemonics’ technology partner, BanQu, used a private, permissioned blockchain on the Ethereum network to pilot a biodata solution.
  • Chemonics successfully piloted the solution with BanQu, resulting in 8 blockchain-based biodatas being fully processed in compliance with donor requirements.
  • Improved data protection was a priority for the pilot. One goal of the solution was to make it possible for individuals to maintain control over their back-up documentation, like passports, diplomas, and salary information, which could be shared temporarily with Chemonics through the use of an encrypted key, rather than having documentation emailed and saved to less secure corporate digital file systems. (A minimal sketch of this pattern follows this list.)
  • Following the pilot, Chemonics determined through qualitative feedback that users across the biodata ecosystem found the blockchain solution easy to use and felt it reduced the level of effort required to complete biodatas.
  • Chemonics also compiled lessons learned, including refinements to the technical requirements, options to scale the solution, and additional user feedback and concerns about the technology, to inform decision-making around further biodata pilots.
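
The data-protection pattern described above – the individual keeps their documents encrypted under their own key, shares that key temporarily, and only a fingerprint of the record is anchored to a ledger – can be illustrated with a short, hypothetical sketch. This is not BanQu’s or Chemonics’ actual implementation; the field names, the in-memory “ledger,” and the use of Fernet symmetric encryption are all illustrative assumptions.

```python
import hashlib
import json
from cryptography.fernet import Fernet  # symmetric encryption (pip install cryptography)

# Hypothetical biodata record; field names are illustrative only.
biodata = json.dumps({"name": "Jane Doe", "degree": "MPH", "daily_rate": 450}).encode()

# 1. The individual encrypts their own documentation and holds the key.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(biodata)

# 2. Only a fingerprint of the record is anchored to the (simulated) ledger,
#    so the chain never holds the personal data itself.
ledger = [{"record_id": "biodata-001", "sha256": hashlib.sha256(biodata).hexdigest()}]

# 3. The key is shared temporarily with the reviewer, who decrypts the document
#    and verifies it against the anchored fingerprint.
received = Fernet(key).decrypt(ciphertext)
assert hashlib.sha256(received).hexdigest() == ledger[0]["sha256"]
print("Biodata verified against the ledger record")
```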

2. Project i2i presented by ConsenSys

  • Problem Statement: 35% of the Filipino population is unbanked, and 56% lives in rural areas. The Philippine economy relies heavily on domestic remittances. UnionBank sought to partner with hundreds of rural banks that didn’t have access to the electronic banking services that the larger commercial banks do.
  • In 2017, to continue the Central Bank of the Philippines’ national strategy for financial inclusion, the central banks of Singapore and the Philippines announced that they would collaborate on financial technology by employing the regulatory sandbox approach. This approach gives industry stakeholders room and time to experiment before regulators enact potentially restrictive policies that could stifle innovation and growth. As part of the agreement, the central banks will share resources, best practices, and research, and collaborate to “elevate financial innovation” in both economies.
  • Solution design assumptions for Philippines context:
    • It can be easily operated and implemented with limited integration, even in low-tech settings;
    • It enables lower transaction time and lower transaction cost;
    • It enables more efficient operations for rural banks, including reduction of reconciliations and simplification of accounting processes.
  • UnionBank worked with ConsenSys and participating rural banks to create an interbank ledger with tokenization. The payment platform is private and Ethereum-based. (A toy illustration of a tokenized interbank ledger follows this list.)
  • The initial pilot eliminated 20 steps from the process.
  • Technology partners: ConsenSys, Azure (Microsoft), Kaleido, Amazon Web Services.
  • As a follow-up to the i2i project, UnionBank partnered with Singapore-based OCBC Bank to deploy the Adhara liquidity management and international payments platform for a blockchain-based international remittance pilot.
  • Potential for national and regional collaboration/network development.
  • For details on the i2i project, download the full case study here or watch the 4-minute video clip.
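
To make the idea of “an interbank ledger with tokenization” more concrete, here is a toy sketch in Python. It is purely illustrative: the real i2i platform is a private, Ethereum-based network, and nothing below reflects its actual design. The point is only that when banks transact against one shared ledger of tokenized balances, each transfer is recorded once and bilateral reconciliation largely disappears.

```python
from dataclasses import dataclass, field

@dataclass
class InterbankLedger:
    """Toy shared ledger: banks hold tokenized balances, and every transfer
    is recorded once on the ledger, reducing the need for reconciliation."""
    balances: dict = field(default_factory=dict)
    transactions: list = field(default_factory=list)

    def issue(self, bank: str, amount: int) -> None:
        """Credit a bank with tokenized funds (e.g., backed by a fiat deposit)."""
        self.balances[bank] = self.balances.get(bank, 0) + amount

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        """Move tokens between banks; the shared record is the settlement."""
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient tokenized balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        self.transactions.append((sender, receiver, amount))

ledger = InterbankLedger()
ledger.issue("UnionBank", 1_000)                 # commercial bank funds the network
ledger.transfer("UnionBank", "RuralBankA", 250)  # a remittance settles on the shared ledger
print(ledger.balances, ledger.transactions)
```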

3. Controlled Capture presented by Truepic

  • Truepic is a technology company specializing in digital image and video authentication. Truepic’s Controlled Capture technology uses cutting-edge computer vision, AI, and cryptography technologies to test images and video for signs of manipulation, designating as authenticated only those that pass its rigorous verification tests. Through the public blockchain, Truepic creates an immutable record for each photo and video captured through this process, such that their authenticity can be proven, meeting the highest evidentiary standards. This technology has been used in over 100 countries by citizen journalists, activists, international development organizations, NGOs, insurance companies, lenders, and online platforms. (A simplified sketch of the general hash-anchoring idea follows this list.)
  • One of Truepic’s innovative strategic partners, the UN Capital Development Fund (another participant in the roundtable), has been testing the possibility of using this technology for monitoring and evaluation of development projects. For example, one Truepic image tracks the date, time, and geolocation of the latest progress at a factory in Uganda.
  • Controlled Capture requires WiFi or at least 3G/4G connectivity to fully authenticate images/video and write them to the public blockchain, which can be a challenge in low-connectivity settings, for example in the least-developed countries where UNCDF works.
  • As a workaround to connectivity issues, Truepic’s partners have used satellite internet connections – such as a Thuraya or Iridium device – to successfully capture verified images anywhere.
  • Public blockchain – Truepic is currently using two different public blockchains, testing cost versus time in an effort to continually shorten the time from capture to closing chain of custody (currently around 8-12 seconds). 
  • Cost – The blockchain component is not actually too expensive; the heaviest investment is in the computer vision technology used to authenticate the images/video, for example to detect rebroadcasting, as in taking a picture of a picture to pass off the metadata.
  • Image rights remain with the owner – Truepic does not have rights over the image/video but keeps a copy on its servers in case the user’s phone/tablet is lost, stolen, or broken, and, most importantly, so that Truepic can produce the original image on its verification page when it is shared or disseminated publicly.
  • Court + evidentiary value: the technology and public-facing verification pages are designed to meet the highest evidentiary standards. 
    • Tested in courts; currently being tested at the international level, though specifics cannot be disclosed for confidentiality reasons.
  • Privacy and security are key priorities, especially for working in conflict zones, such as Syria. Truepic does not use 2-step authentication because the technology is focused on authenticating the images/video; it is not relevant who the source is and this way it keeps the source as anonymous as possible. Truepic works with its partners to educate on best practices to maintain high levels of anonymity in any scenario. 
  • The biggest challenge is uptake by implementing partners – the platform is very easy to use, but the behavioral change required to adopt it has been challenging.
    • Another challenge: when the solution is brought to an implementer, the implementer says the donor has to integrate it into its RFP scopes; the donor then recommends speaking to implementing partners.
  • Storage capacity issues? Storage is not currently a problem; Truepic has plans in place to address any storage issues that may arise with scale. 
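
The “immutable record” idea referenced above boils down to anchoring a cryptographic fingerprint of each capture to a public chain. The sketch below is a generic illustration of hash anchoring, not Truepic’s actual Controlled Capture pipeline; the metadata fields and the in-memory “public_chain” list are stand-ins.

```python
import hashlib
import json
import time

def fingerprint(image_bytes: bytes, metadata: dict) -> str:
    """Hash the image together with its capture metadata."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Hypothetical capture; in practice the bytes would come straight from the device camera.
image_bytes = b"\x89PNG...raw image bytes..."
metadata = {"captured_at": "2019-09-05T14:02:11Z", "lat": 0.3476, "lon": 32.5825}

# Anchoring step: only the hash is written to the (simulated) public chain.
public_chain = [{"tx": 1, "sha256": fingerprint(image_bytes, metadata),
                 "anchored_at": time.time()}]

# Verification step: anyone holding the image and metadata can recompute the hash
# and compare it to the anchored record; any change to pixels or metadata breaks the match.
assert fingerprint(image_bytes, metadata) == public_chain[0]["sha256"]
print("Image matches the anchored record")
```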

How did implementers measure success in their blockchain pilots?

  • Measurement was both quantitative and qualitative 
  • The organizations worked with clients to ensure that the people who needed the MEL data were able to access and use it
  • Concerns with publicizing information or difficulties with NDAs were handled on a case-by-case basis

The MEL space is an excellent place to have a conversation about the use of blockchain for international development – many aspects of MEL hinge on the need for immutability (in record keeping), transparency (in the expenditure and impact of funds) and security (in the data and the identities of implementers and beneficiaries). Many use cases in developing countries and for social impact have been documented (see Stanford report Blockchain for Social Impact, Moving Beyond the Hype). (Editor’s note: see also Blockchain and Distributed Ledger Technologies in the Humanitarian Sector and Distributed Ledger Identification Systems in the Humanitarian Sector).

The original search for evidence on the impact of blockchain sought a level of data fidelity that is difficult to capture and validate, even under the least challenging circumstances. Not finding it at that time, the research team sought the next best solution, which was not to discount the technology, but to suggest ways to cope with the knowledge gaps they encountered by recommending a learning agenda. The roundtable helped to stimulate robust conversation of the three case studies, contributing to that learning agenda.

Most importantly, the experience highlighted several interesting takeaways about innovation in public-private partnerships more broadly: 

  • The initial MERL Tech session publicly and transparently drew attention to the gaps identified from the researchers’ thirty-thousand-foot view of evaluating innovation.
  • This transparency drew out engagement and collaboration between and amongst those best-positioned to move quickly and calibrate effectively with the government’s needs: the private sector. 
  • This small discussion, focused on the utility and promise of blockchain, highlighted the broader role of government (as funder, buyer, and donor) in both providing the problem statement and anchoring the strengths and capabilities of the non-governmental, private, and civil society sectors.

One year later…

So, a year after the much-debated blockchain blog post, what has changed? A lot. There is a growing body of reporting that adds to the lessons-learned literature and practical insights from projects that were powered or supported by blockchain technology. The question remains: do we have any greater documentation or evidence of the results blockchain was purported to have achieved? It seems that while reporting has improved, it still has a long way to go.

It’s worth pointing out that the international development industry, with far more experts and funding dedicated to working on improving MERL than emerging tech companies, also has some distance to go in meeting its own evidence standards.  Fortunately, the volume and frequency of hype seems to have decreased (or perhaps the news cycle has simply moved on?), thereby leaving blockchain (and its investors and developers) the space they need to refine the technology.

In closing, we, like the co-authors of the 2018 post, remain optimistic that blockchain, a still emerging technology, will be given the time and space needed to mature and prove its potential. And, whether you believe in “crypto-winter” or not, hopefully the lull in the hype cycle will prove to be the breathing space that blockchain needs to keep evolving in a productive direction.

Author Bios

Shailee Adinolfi: Shailee works on Public Sector solutions at ConsenSys, a global blockchain technology company building the infrastructure, applications, and practices that enable a decentralized world. She has 20 years of experience at the intersection of technology, financial inclusion, trade, and government, including 11 years on USAID funded projects in Africa, Asia and the Middle East.

John Burg: John was a co-author on the original MERL Tech DC 2018 blog, referenced in this blog. He is an international development professional with almost 20 years of cross-sectoral experience across 17 countries in six global regions. He enjoys following the impact of emerging technology in international development contexts.

Tara Vassefi: Tara is Truepic’s Washington Director of Strategic Initiatives. Her background is as a human rights lawyer where she worked on optimizing the use of digital evidence and understanding how the latest technologies are used and weighed in courts around the world. 

Four Reflections on the 2019 MERL Tech Dashboards Competition

by Amanda Makulec, Excella Labs. This post first appeared here.

Data visualization (viz) has come a long way in our MERL Tech community. Four years ago the conversation was around “so you think you want a dashboard?” which evolved to a debate on dashboards as the silver bullet solution (spoiler: they’re not). Fast forward to 2019, when we had the first plenary competition of dashboard designs on the main stage!

Wayan Vota and Linda Raftree, MERL Tech Organizers, were kind enough to invite me to be a judge for the dashboard competition. Let me say: judging is far less stressful than presenting. Having spoken at MERL Tech every year on a data viz topic since 2015, it felt novel to not be frantically reviewing slides the morning of the conference.

The competition sparked some reflections on how we’ve grown and where we can continue to improve as we use data visualization as one item in our MERL toolbox.

1 – We’ve moved beyond conversations about ‘pretty’ and are talking about how people use our dashboards.

Thankfully, our judging criteria and final selection were not limited to which dashboard was the most beautiful. Instead, we focused on the goal, how the data was structured, why the design was chosen, and the impact it created.

One of the best stories from the stage came from DAI’s Carmen Tedesco (one of three competition winners), who demoed a highly visual interface that even included custom spatial displays of how safe girls felt in different locations throughout a school. When the team demoed the dashboard to their Chief of Party, he was underwhelmed… because he was colorblind and couldn’t make sense of many of the visuals. They pivoted, added more tabular, text-focused, grayscale views, and the team was thrilled.

Carmen Tedesco presents a dashboard used by a USAID-funded education project in Honduras. Image from Siobhan Green: https://twitter.com/siobhangreen/status/1169675846761758724

Having a competition judged on impact, not just display, matters. What gets measured gets done, right? We need to reward and encourage the design and development of data visualization that has a purpose and helps someone do something – whether it’s raising awareness, making a decision, or something else – not just creating charts for the sake of telling a donor that we have a dashboard.

2 – Our conversations about data visualization need to be anchored in larger dialogues about data culture and data literacy.

We need to continue to move beyond talking about what we’re building and focus on for whom, why, and what else is needed for the visualizations to be used.

Creating a “data culture” on a small project team is complicated. In a large global organization or slow-to-change government agency, it can feel impossible. Making data visual, nurturing that skillset within a team, and building a culture of data visualization is one part of the puzzle, but we need champions outside of the data and M&E (monitoring and evaluation) teams who support that organizational change. A Thursday morning MERL Tech session dug into eight dimensions of data readiness, all of which are critical to having dashboards actually get used – learn more about this work here.

Village Enterprise’s winning dashboard was simple in design, constructed of various bar charts on enterprise performance, but was tailored to different user roles to create customized displays. By serving up the data filtered to a specific user level, they encourage adoption and use instead of requiring a heavy mental load from users to filter to what they need.
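
That role-based approach can be illustrated with a small, hypothetical sketch: each dashboard role gets a pre-filtered slice of the same dataset, so users never have to do the filtering themselves. The column names, roles, and data below are invented for illustration and do not reflect Village Enterprise’s actual schema or tooling.

```python
import pandas as pd

# Invented enterprise-performance records (not Village Enterprise's real schema).
data = pd.DataFrame([
    {"region": "North", "mentor": "A", "enterprise": "E1", "savings": 120},
    {"region": "North", "mentor": "B", "enterprise": "E2", "savings": 90},
    {"region": "South", "mentor": "C", "enterprise": "E3", "savings": 150},
])

# Each dashboard role sees only the slice of data relevant to it.
ROLE_FILTERS = {
    "field_mentor":     lambda df, user: df[df["mentor"] == user["mentor"]],
    "regional_manager": lambda df, user: df[df["region"] == user["region"]],
    "global_team":      lambda df, user: df,  # unfiltered view
}

def dashboard_view(df: pd.DataFrame, role: str, user: dict) -> pd.DataFrame:
    """Return the pre-filtered dataset a given role should see."""
    return ROLE_FILTERS[role](df, user)

print(dashboard_view(data, "regional_manager", {"region": "North"}))
```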

3 – Our data dashboards look far more diverse in scope, purpose, and design than the cluttered widgets of early days.

The three winners we picked were diverse in their project goals and displays, including a complex map, a PowerBI project dashboard, and a simple interface of bar charts designed for various user levels on local enterprise success metrics.

One of the winners – Fraym – was a complex, interactive map display allowing users to zoom in to the square kilometer level. Layers for various metrics, from energy to health, can be turned on or off depending on the use case. Huge volumes of data had to be ingested, including both spatial and quantitative datasets, to make the UI possible.

In contrast, the People’s Choice winner wasn’t a quantitative dashboard of charts and maps. Matter of Focus’ OutNav tool instead makes the certainty around elements of a theory of change visual – using colors, saturation, and layout within a workflow – and helps organizations show where they’ve contributed to change.

Seeing the diversity of displays, I’m hopeful that we’re moving away from one-size-fits-all solutions or reliance on a single tech stack (whether Excel, Tableau, PowerBI or something else) and continuing to focus more on crafting products that solve problems for someone, which may require us to continue to expand our horizons regarding the tools and designs we use.

4 – Design still matters though, and data and design nerds should collaborate more often.

That said, there remain huge opportunities for more design in our data displays. Last year, I gave a MERL Tech lightning talk on why no one is using your dashboard, which focused on the need for more integration of design principles in our data visualization development, and those principles still resonate today.

Principles from graphic design, UX, and other disciplines can take a specific visualization from good to great – the more data nerds and designers collaborate, the better (IMHO). Otherwise, we’ll continue the epidemic of dashboards, many of which are tools designed to do ALL THE THINGS without being tailored enough to be usable by the most important audiences.

An invitation to join the Data Viz Society

If you’re interested in more discourse around data viz, consider joining the Data Viz Society (DVS) and connect with more than 8,000 members from around the globe (it’s free!) who have joined since we launched in February.

DVS connects visualization enthusiasts across disciplines, tech stacks, and expertise, and aims to collect and establish best practices, fostering a community that supports members as they grow and develop data visualization skills.

We (I’m the volunteer Operations Director) have a vibrant Slack workspace packed with topic and location channels (you’ll get an invite when you join), two-week long moderated Topics in DataViz conversations, data viz challenges, our journal (Nightingale), and more.

More on ways to get involved in this thread – including our data viz practitioner survey results challenge, closing 30 September 2019, which has some fabulous cash prizes for your data viz submissions!

We’re actively looking for more diversity in our geographic representation, and would particularly welcome voices from countries outside of North America. A recent conversation about data viz in LMICs (low and middle income countries) was primarily voices from headquarters staff – we’d love to hear more from the field.

I can’t wait to see what the data viz conversations are at MERL Tech 2020!

Wrapping up MERL Tech DC

On September 6, we wrapped up three days of learning, reflecting, debating and sharing at MERL Tech DC. The conference kicked off with four pre-workshops on September 4: Big Data and Evaluation; Text Analytics; Spatial Statistics; and Responsible Data. Then, on September 5-6, we had our regular two-day conference, including opening talks from Tariq Khokhar, The Rockefeller Foundation; and Yvonne MacPherson, BBC Media Action; one-hour sessions, two-hour sessions, lightning talks, a dashboard contest, a plenary session and two happy hour events.

This year’s theme was “The State of the Field” of MERL Tech and we aimed to explore what we as a field know about our work and what gaps remain in the evidence base. Conference strands included: Tech and Traditional MERL; Data, Data, Data; Emerging Approaches; and The Future of MERL.

Zach Tilton, Western Michigan University; Kerry Bruce, Clear Outcomes; and Alexandra Robinson, Moonshot Global; update the plenary on the State of the Field Research that MERL Tech has undertaken over the past year. Photo by Christopher Neu.
Tariq Khokhar of The Rockefeller Foundation on “What Next for Data Science in Development? Photo by Christopher Neu.
Participants checking out what session to attend next. Photo by Christopher Neu.
Silvia Salinas, Strategic Director FuturaLab; Veronica Olazabal, The Rockefeller Foundation; and Adeline Sibanda, South to South Evaluation Cooperation; talk in plenary about Decolonizing Data and Technology, whether we are designing evaluations within a colonial mindset, and the need to disrupt our own minds. Photo by Christopher Neu.
What is holding women back from embracing tech in development? Patricia Mechael, HealthEnabled; Carmen Tedesco, DAI; Jaclyn Carlsen, USAID; and Priyanka Pathak, Samaj Studio; speak at their “Confidence not Competence” panel on women in tech. Photo by Christopher Neu.
Reid Porter, DevResults; Vidya Mahadevan, Bluesquare; Christopher Robert, Dobility; and Sherri Haas, Management Sciences for Health; go beyond “Make versus Buy” in a discussion on how to bridge the MERL – Tech gap. Photo by Christopher Neu.
Participants had plenty of comments and questions as well. Photo by Christopher Neu.
Drones, machine learning, text analytics, and more. Ariel Frankel, Clear Outcomes, facilitates a group in the session on Emerging MERL approaches. Photo by Christopher Neu.
The Principles for Digital Development have been widely adopted by the MERL Tech sector as a standard for digital development. Allana Nelson, DIAL, shares thoughts on how the Principles can be used as an evaluative tool. Photo by Christopher Neu.
Kate Scaife Diaz, TechnoServe, explains the “Marie Kondo” approach to MERL Tech in her Lightning Talk: “Does Your Tech Spark Joy? The Minimalist’s Approach to MERL Tech.” Photo by Christopher Neu.

In addition to learning and sharing, one of our main goals at MERL Tech is to create community. “I didn’t know there were other people working on the same thing as I am!” and “This MERL Tech conference is like therapy!” were some of the things we heard on Friday night as we closed down.

Stay tuned for blog posts about sessions and overall impressions, as well as our conference report once feedback surveys are in!

The MERL Tech DC agenda is ready!

Thanks to the hard work of our presenters, session leads, and Steering Committee, the 2019 MERL Tech DC agenda is ready, and we’re really excited about it!

On September 5-6, we’ll have our regular two days of lightning talks, break-out sessions, panels, Fail Fest, demo tables, and networking with folks from diverse sectors who all coincide at the intersection of MERL and Tech. Take a peek at the agenda here.

This year we are offering four pre-workshops on September 4 (separate registration required). Register at one of the links below. (Space is limited, so secure your spot now!)

Dashboard Contest!

You can also enter the MERL Tech Dashboard Contest and win a prize for the best dashboard (based on four criteria, and according to an esteemed panel of judges)! If you have a dashboard that you’d like to show off, sign up here to enter.

Fail Fest!

As usual, Wayan Vota will be organizing a “Fail Fest” during our Happy Hour Reception on Thursday, Sept 5th. More information coming shortly, but start thinking about what you may want to share….

Register Now!

Registration is open, and we normally sell out, so get your tickets soon! We also have a few Demo Tables left, and you can reserve one here.

Please get in touch with any questions, and we’re looking forward to seeing you there!  

MERL Tech DC Session Ideas are due Monday, Apr 22!

MERL Tech is coming up in September 2019, and there are only a few days left to get your session ideas in for consideration! We’re actively seeking practitioners in monitoring, evaluation, research, learning, data science, technology (and other related areas) to facilitate every session.

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. Submit your session ideas by midnight ET on April 22, 2019. You will hear back from us by May 20 and, if selected, you will be asked to submit the final session title, summary and outline by June 17.

Submit your session ideas here by April 22, midnight ET

This year’s conference theme is MERL Tech: Taking Stock

Conference strands include:

Tech and traditional MERL:  How is digital technology enabling us to do what we’ve always done, but better (consultation, design, community engagement, data collection and analysis, databases, feedback, knowledge management)? What case studies can be shared to help the wider sector learn and grow? What kinks do we still need to work out? What evidence base exists that can support us to identify good practices? What lessons have we learned? How can we share these lessons and/or skills with the wider community?

Data, data, and more data: How are new forms and sources of data allowing MERL practitioners to enhance their work? How are MERL Practitioners using online platforms, big data, digitized administrative data, artificial intelligence, machine learning, sensors, drones? What does that mean for the ways that we conduct MERL and for who conducts MERL? What concerns are there about how these new forms and sources of data are being used and how can we address them? What evidence shows that these new forms and sources of data are improving MERL (or not improving MERL)? What good practices can inform how we use new forms and sources of data? What skills can be strengthened and shared with the wider MERL community to achieve more with data?

Emerging tools and approaches: What can we do now that we’ve never done before? What new tools and approaches are enabling MERL practitioners to go the extra mile? Is there a use case for blockchain? What about facial recognition and sentiment analysis in MERL? What are the capabilities of these tools and approaches? What early cases or evidence is there to indicate their promise? What ideas are taking shape that should be tried and tested in the sector? What skills can be shared to enable others to explore these tools and approaches? What are the ethical implications of some of these emerging technological capabilities?

The Future of MERL: Where should we be going and what should the future of MERL look like? What does the state of the sector, of digital data, of technology, and of the world in which we live mean for an ideal future for the MERL sector? Where do we need to build stronger bridges for improved MERL? How should we partner and with whom? Where should investments be taking place to enhance MERL practices, skills and capacities? How will we continue to improve local ownership, diversity, inclusion and ethics in technology-enabled MERL? What wider changes need to happen in the sector to enable responsible, effective, inclusive and modern MERL?

Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines. Please consider these in your session proposals and in how you are choosing your speakers and facilitators.

Submit your session ideas now!

MERL Tech is dedicated to creating a safe, inclusive, welcoming and harassment-free experience for everyone. Please review our Code of Conduct. Session submissions are reviewed and selected by our steering committee.

Upping the Ex Ante: Explorations in evaluation and frontier technologies

Guest post from Jo Kaybryn, an international development consultant currently directing evaluation frameworks, evaluation quality assurance services, and leading evaluations for UN agencies and INGOs.

“Upping the Ex Ante” is a series of articles aimed at evaluators in international development exploring how our work is affected by – and affects – digital data and technology. I’ve been having lots of exciting conversations with people from all corners of the universe about our brave new world. But I’ve also been conscious that for those who have not engaged a lot with the rapid changes in technologies around us, it can be a bit daunting to know where to start. These articles explore a range of technologies and innovations against the backdrop of international development and the particular context of evaluation.  For readers not yet well versed in technology there are lots of sources to do further research on areas of interest.

The series is halfway through, with four articles published.

Computation? Evaluate it!

So far in Part 1 the series has gone back to the olden days (1948!) to consider the origin story of cybernetics and the influences that are present right now in algorithms and big data. The philosophical and ethical dilemmas are a recurring theme in later articles.

Distance still matters

Part 2 examines the problem of distance – something technology has made huge strides in addressing, yet has never fully solved – with a discussion of what blockchains mean for the veracity of data.

Doing things the ways it’s always been done but better (Qualified)

Part 3 considers qualitative data and shines a light on the gulf between our digital data-centric and analogue-centric worlds and the need for data scientists and social scientists to cooperate to make sense of it.

Doing things the ways it’s always been done but better (Quantified)

Part 4 looks at quantitative data and the implications for better decision making; why evaluators really don’t like an algorithmic “black box”; and reflections on how humans’ assumptions and biases leak into our technologies, whether digital or analogue.

What’s next?

The next few articles will see a focus on ethics, psychology and bias; a case study on a hypothetical machine learning intervention to identify children at risk of maltreatment (lots more risk and ethical considerations), and some thoughts about putting it all in perspective (i.e. Don’t Panic!).

Good, Cheap, Fast — Pick Two!

By Chris Gegenheimer, Director of Monitoring, Evaluating and Learning Technology at Chemonics International; and Leslie Sage, Director of Data Science at DevResults. (Originally posted here).

Back in September, Chemonics and DevResults spoke at MERL Tech DC about the inherent compromise involved when purchasing enterprise software. In short, if you want good software that does everything you want exactly the way you want it, cheap software that is affordable and sustainable, and fast software that is available immediately and responsive to emerging needs, you may have to relax one of those requirements. In other words: “good, cheap, fast – pick two!”

Of course, no buyer or vendor would ever completely neglect any one of those dimensions to maximize the other two; instead, we all try to balance these competing priorities as best we can as our circumstances will allow. It’s not an “all or nothing” compromise. It’s not even a monolithic compromise: both buyer and vendor can choose which services and domains will prioritize quality and speed over affordability, or affordability and quality over speed, or affordability and speed over quality (although that last one does sometimes come back to bite).

Chemonics and DevResults have been working together to support Chemonics’ projects and its monitoring and evaluation (M&E) needs since 2014, and we’ve had to learn from each other how best to achieve the mythical balance of quality, affordability, and speed. We haven’t always gotten it right, but we do have a few suggestions on how technological partnerships can ensure long-term success.

Observations from an implementer

As a development implementer, Chemonics recognizes that technology advances development outcomes and enables us to do our work faster and more efficiently. While we work in varied contexts, we generally don’t have time to reinvent technology solutions for each project. Vendors bring value when they can supply configurable products that meet our needs in the real world faster and cheaper than building something custom. Beyond the core product functionality, vendors offer utility with staff who maintain the IT infrastructure, continually upgrade product features, and ensure compliance with standards, such as the General Data Protection Regulation (GDPR) or the International Aid Transparency Initiative (IATI). Not every context is right for off-the-shelf solutions. Just because a product exists, it doesn’t mean the collaboration with a software vendor will be successful. But, from an implementer’s perspective, here are a few key factors for success:

Aligned incentives

Vendors should have a keen interest in ensuring that their product meets your requirements. When they are primarily interested in your success in delivering your core product or service — and not just selling you a product — the relationship is off to a good start. If the vendor does not understand or have a fundamental interest in your core business, this can lead to diverging paths, both in features and in long-term support. In some cases, fresh perspectives from non-development outsiders are constructive, but being the outlier client can contribute to project failure.

Inclusion in roadmap

Assuming the vendor’s incentives are aligned with your own, it should be interested in your feedback as well as responsive to making changes, even to some core features. As our staff puts systems through their paces, we regularly come up with feature requests, user interface improvements, and other feedback. We realize that not every feature request will make it into code, but behind every request is a genuine need, and vendors should be willing to talk through each need to figure out how to address it.

Straight talk

There’s a tendency for tech vendors, especially sales teams, to have a generous interpretation of system capabilities. Unmet expectations can result from a client’s imprecise requirements or a vendor’s excessive zeal, which leads to disappointment when you get what you asked for, but not what you wanted. A good vendor will clearly state up front what its product can do, cannot do, and will not ever do. In return, implementers have a responsibility to make their technical requirements as specific, well-scoped, and operational as possible.

Establish support liaisons

Many vendors offer training, help articles, on-demand support, and various other resources for turning new users into power users, but relying on the vendor to shoulder this burden serves no one. By establishing a solid internal front-line support system, you can act as intermediaries and translators between end users and the software vendor. Doing so has meant that our users don’t have to be conversant in developer-speak or technical language, nor does our technology partner have to field requests coming from every corner of our organization.

Observations from a tech vendor

DevResults’ software is used to manage international development data in 145 countries, and we support M&E projects around the world. We’ve identified three commonalities among organizations that implement our software most effectively: 1) the person who does the most work has the authority to make decisions, 2) the person with the most responsibility has technical aptitude and a whatever-it-takes attitude, and 3) breadth of adoption is achieved when the right responsibilities are delegated to the project staff, building capacity and creating buy-in.

Organizational structure

We’ve identified two key factors that predict organizational success: dedicated staff resources and their level of authority. Most of our clients are implementing a global M&E system for the first time, so the responsibility for managing the rollout is often added to someone’s already full list of duties, which is a recipe for burnout. Even if a “system owner” is established and space is made in their job description, if they don’t have the authority to request resources or make decisions, it restricts their ability to do their job well. Technology projects are regularly entrusted to younger, more junior employees, who are often fast technical learners, but their effectiveness is hindered by having to constantly appeal to their boss’ boss’ boss about every fork in the road. Middle-sized organizations are typically advantaged here because they have enough staff to dedicate to managing the rollout, yet few enough layers of bureaucracy that such a person can act with authority.

Staffing

Technical expertise is critical when it comes to managing software implementations. Too often, technical duties are foisted upon under-prepared (or less-than-willing) staffers. This may be a reality in an era of constrained budgets, but asking experts in one thing to operate outside of their wheelhouse is another recipe for burnout. In the software industry, we conduct technical exams for all new hires. We would be thrilled to see the practice extended across the ICT4D space, even for roles that don’t involve programming but do involve managing technical products. Even so, there’s a certain aspect of the ideal implementation lead that comes down to personality and resourcefulness. The most successful teams we work with have at least one person who has the willingness and the ability to do whatever it takes to make a new system work. Call it ‘ownership,’ call it a ‘can-do’ attitude, but whatever it is, it works!

Timing and resource allocation

Change management is hard, and introducing a new system requires a lot of work up front. There’s a lot that headquarters personnel can do to unburden project staff (configuring the system, developing internal guidance and policies, etc.), but sometimes it’s better to involve project staff directly and early. When project staff are involved in the system configuration and decision-making process, we’ve seen them demonstrate more ownership of the system and less resentment of “another thing coming down from headquarters.” System setup and configuration can also be a training opportunity, further developing internal capacity across the organization. Changing systems requires conversations across the entire org chart; well-designed software can facilitate those conversations. But even when implementers do everything right, they should always expect challenges, plan for change management, and adopt an agile approach to managing a system rollout.

Good, cheap, fast: pick THREE!

As we said, there are ways to balance these three dimensions. We’ve managed to strike a successful balance in this partnership because we understand the incentives, constraints, and priorities of our counterpart. The software as a service (SaaS) model is instrumental here because it ensures software is well-suited to multiple clients across the industry (good), more affordable than custom builds (cheap), and immediately available on day one (fast). The implicit tradeoff is that no one client can control the product roadmap, but when each and every customer has a say, the end product represents the collective wisdom, best practice, and feedback of everyone. It may not be perfectly tailored to each and every client’s preferences, but in the end, that’s usually a good thing.