Tag Archives: 2019

Practicing Safe Monitoring and Evaluation in the 21st Century

By Stephen Porter. Adapted from the original post published here.

Monitoring and evaluation practice can do harm. It can harm:

  • the environment by prioritizing economic gain over species that have no voice
  • people who are invisible to us when we are in a position of power
  • people whose information we ask for, when that information is then misused.

In the quest to understand What Works, the focus is often placed too narrowly on program goals rather than on the safety of people. A classic example from the environmental domain is the use of DDT: “promoted as a wonder-chemical, the simple solution to pest problems large and small. Today, nearly 40 years after DDT was banned in the U.S., we continue to live with its long-lasting effects.” The original evaluation of its effects failed to identify harm and emphasized its benefits. Only when harm to the ecosystem became more apparent was evidence presented, in Rachel Carson’s book Silent Spring. We should not have to wait for failure to become so apparent before evaluating for harm.

Join me, Veronica Olazabal, Rodney Hopson, Dugan Fraser and Linda Raftree, for a session on “Institutionalizing Doing no Harm in Monitoring and Evaluation” on Thursday, Nov 14, 2019, 8-9am, Room CC M100 H, at the American Evaluation Association Conference in Minneapolis.

Ethical standards have been developed for evaluators; they are discussed at conferences and included in professional training. Yet institutional monitoring and evaluation practices still struggle to get to grips with the reality of harm amid the pressure to report results. If we want monitoring and evaluation to be safer in the 21st Century, we need to shift from training and evaluator-to-evaluator discussions to changing institutional practices.

At a workshop convened by Oxfam and the Rockefeller Foundation in 2019, we sought to identify core issues that could cause harm and to get to grips with areas where institutions need to change their practices. The workshop brought together partners from UN agencies, philanthropies, research organizations and NGOs, and sought to give substance to these issues. One participant noted that though the UNEG Norms and Standards and UNDP’s evaluation policy are designed to make evaluation safe, in practice little consideration is given to capturing or understanding the unintended or perverse consequences of programs or policies. The workshop explored this and other issues and identified three areas of practice that could help to reframe institutional monitoring and evaluation in a practical manner.

1. Data rights, privacy and protection

In working on rights in the 21st Century, data and information are some of the most important ‘levers’ pulled to harm and disadvantage people. Oxfam’s Responsible Data in Program policy, in place since 2015, goes some way towards recognizing this. But we know we need to implement data privacy and protection measures more fully in our work.

At Oxfam, work continues on building a rights-based approach. This already includes aligned confederation-wide Data Protection Policies, along with the implementation of responsible data management policies, practices and other tools aligned with the Responsible Data Policy and European privacy law, including a responsible data training pack.

Planned and future work includes stronger governance, standardized baseline measures of privacy and information security, and communications, guidance and change management. This includes changes in evaluation protocols: how we assess risk to the people we work with, who gets access to the data, and how we ensure consent for the ways the data will be used.
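As a minimal, hypothetical sketch of the kind of access gate such a protocol implies (the roles, sensitivity tiers and field names here are illustrative assumptions, not Oxfam’s actual schema), consent and access checks might look like this:

```python
from dataclasses import dataclass

@dataclass
class Record:
    respondent_id: str
    consented_uses: set   # uses the respondent agreed to, e.g. {"evaluation"}
    sensitivity: str      # "low", "medium" or "high"

# Which requester roles may see each sensitivity tier (illustrative values).
ACCESS_TIERS = {
    "low": {"partner", "evaluator", "data_manager"},
    "medium": {"evaluator", "data_manager"},
    "high": {"data_manager"},
}

def can_release(record: Record, requester_role: str, intended_use: str) -> bool:
    """Release a record only if the requester's role clears the record's
    sensitivity tier AND the respondent consented to this specific use."""
    return (requester_role in ACCESS_TIERS[record.sensitivity]
            and intended_use in record.consented_uses)

# An evaluator asking to reuse a high-sensitivity record for communications
# is refused on both grounds: wrong tier and no matching consent.
r = Record("p-001", {"evaluation"}, "high")
print(can_release(r, "evaluator", "communications"))  # False
print(can_release(r, "data_manager", "evaluation"))   # True
```

The point is not the specific code, but that consent is checked per use rather than once at collection time.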

This is a start, but consistent implementation is hard. And if we know we are not competent at operating the controls within our own reach, it becomes harder to call others out when they cause harm by misusing theirs.

2. Harm prevention lens for evaluation

The discussion highlighted that evaluation has not often sought to understand the harm of practices or interventions. When it does, however, the results can powerfully shed new light on an issue. A case that starkly illustrates potential under-reporting is that of the UN Mission in Liberia (UNMIL). UNMIL was put in place with the aim “to consolidate peace, address insecurity and catalyze the broader development of Liberia”. Traditionally, we would evaluate against this objective. Taking a harm lens, we might instead evaluate the sexual exploitation and abuse related to the deployment. The official reporting system records low levels of abuse: 14 cases from 2007–2008 and 6 in 2015. A study by Beber, Gilligan, Guardado and Karim, however, estimated through a representative randomized survey that more than half of eighteen- to thirty-year-old women in greater Monrovia had engaged in transactional sex, and that most of them (more than three-quarters, or about 58,000 women) had done so with UN personnel, typically in exchange for money.

Changing evaluation practice should not just focus on harm within human systems; it should also provide insight into the broader ecosystem. Institutionally, there needs to be championing of the identification of harm within and through monitoring and evaluation practice, and of the changes in practice that follow.

3. Strengthening safeguarding and evaluation skills

We need to resource teams appropriately so they have the capacity to be responsive to harm and reflective about the potential for harm. This is about both tools and procedures and the conceptual frames that guide them.

Tools and procedures can include, for example:

  • Codes-of-conduct that create a safe environment for reporting issues
  • Transparent reporting lines to safeguarding/safe programming advisors
  • Training based on actual cases
  • Safe data protocols (see above)

All of these fall by the wayside, however, if the values and concepts that guide implementation are absent. At the workshop, Rodney Hopson, drawing on environmental policy and concepts of ecology, presented a frame for increasing evaluators’ usefulness in complex ecologies where safeguarding issues are prevalent. It emphasizes:

  • Relationships – the need to identify and relate to key interests, interactions, variables and stakeholders amid dynamic and complex issues in an honest manner that is based on building trust.
  • Responsibilities – acting with propriety, doing what is proper, fair, right and just in evaluation, measured against standards.
  • Relevance – being accurate and meaningful technically, culturally and contextually.

Safe monitoring and evaluation in the 21st Century does not just ask ‘what works?’; it must also be relentless in asking ‘how can we work differently?’. This includes understanding how harm connects human and environmental systems. The three areas noted here are the start of a conversation, and a challenge to institutions to think more about what it means to be safe in monitoring and evaluation practice.

Planning to attend the American Evaluation Association Conference this week? Join us for the session “Institutionalizing Doing no Harm in Monitoring and Evaluation” on Thursday, Nov 14, 2019, from 8:00 to 9:00 AM in Room CC M100 H.

Panelists will discuss ideas to better address harm with regard to: (i) harm identification and mitigation in evaluation practice; (ii) responsible data practice; (iii) understanding harm in an international development context; and (iv) evaluation in complex ecologies.

The panel will be chaired by Veronica M. Olazabal (Senior Advisor & Director, Measurement, Evaluation and Organizational Performance, The Rockefeller Foundation), with speakers Stephen Porter (Evaluation Strategy Advisor, World Bank), Linda Raftree (Independent Consultant, Organizer of MERL Tech), Dugan Fraser (Prof & Director, CLEAR-AA, University of the Witwatersrand, Johannesburg) and Rodney Hopson (Prof of Evaluation, Department of Ed Psych, University of Illinois Urbana-Champaign). View the full program here: https://lnkd.in/g-CHMEj

MERL Tech DC 2019 Feedback Report

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of the conference and related community are to:

  • Improve development, tech, data & MERL literacy
  • Help people find and use evidence & good practices
  • Promote ethical and appropriate use of technology
  • Build and strengthen a “MERL Tech community”
  • Spot trends and future-scope for the sector
  • Transform and modernize MERL in an intentionally responsible and inclusive way

Our sixth MERL Tech DC conference took place on September 5-6, 2019, preceded by four pre-workshops on September 4. Some 350 people from 194 organizations joined us for the two days, and another 100 people attended the pre-workshops. About 56% of participants attended for the first time, while 44% were returnees.

Who attended?

Attendees came from a wide range of organization types and professions.

Conference Themes

The theme for this year’s conference was “Taking Stock” and we had 4 sub-themes:

  1. Tech and Traditional MERL
  2. Data, Data, Data
  3. Emerging Approaches to MERL
  4. The Future of MERL

State of the Field Research

A small team shared their research on “The MERL Tech State of the Field,” organized into the above four themes. The research will be completed and shared on the MERL Tech site before the end of 2019. (We’ll also be presenting it at the South African Evaluation Association Conference in October and at the American Evaluation Association conference in November.)

As always, MERL Tech conference sessions covered technology for MERL, MERL on ICT4D and digital development programs, MERL of MERL Tech, data for decision-making, ethical and responsible data approaches, and cross-disciplinary community building. (See the full agenda here.)

We checked in with participants on the last day to see how the field had shifted since 2015, when our keynote speaker (Ben Ramalingam) gave some suggestions on how tech could improve MERL.

Images from that check-in captured Ben’s future vision, where MERL Tech 2019 sessions fell on the expired-tired-wired schematic, and what participants would add to the schematic to update it for 2019 and beyond.

Diversity and Inclusion

We have been making an effort to improve diversity and inclusion at the conference and in the MERL Tech space. An unofficial estimate of speaker racial and gender diversity is below. Compared to 2018, when we first began tracking, the share of women of color speakers increased by 5% and the share of men of color by 2%. The share of white female speakers decreased by 6%, and the share of white male speakers went down by 1%. Our gender balance remained fairly consistent.

Where we are failing on diversity and inclusion is in having speakers and participants from outside of North America and Europe. That likely has to do with costs and visas, which affect who can attend, and with whom organizations select to represent them at MERL Tech. We’re continuing to look for ways to collaborate with groups working on MERL Tech in different regions. We believe that new and/or historically marginalized voices should be more involved in shaping the future of the sector and of MERL Tech. (If you would like to support us on this or get involved, please contact Linda!)

Post Conference Feedback

Some 25% of participants filled in the post-conference survey and 85% rated their experience “good” or “awesome” (up from 70% in 2018). Answers did not significantly differ based on whether a participant had attended previously or not. Another 8.5% rated sessions via the “Sched” conference agenda app, with an average session satisfaction rating of 9.1 out of 10.

The top rated session was on “Decolonizing Data and Technology in MERL.” As one participant said, “It shook me out of my complacency. It is very easy to think of the tech side of the work we do as ‘value free’, but this is not the case. Being a development practitioner it is important for me to think about inequality in tech and data further than just through the implementation of the projects we run.” Another noted that “As a white, gay male who has a background in international and intercultural education, it was great to see other fields bringing to light the decolonizing mindset in an interactive way. The session was enlightening and brought up conversation that is typically talked about in small groups, but now it was highlighted in front of the entire audience.”

Sign up for MERL Tech News if you’d like to read more about this and other sessions. We’re posting a series of posts and session summaries.

Key suggestions for improving next time were similar to those we hear every year: less showcasing and pitching, ensuring that titles match what is actually delivered in the session, ensuring that presenters are well prepared, and making sessions relevant, practical and applicable.

Additionally, several people commented that the venue had some issues with noise from conversations in the common area spilling into breakout rooms and making it hard to focus. Participants also complained that there was a large amount of trash and waste produced, and suggested more eco-friendly catering for next time.

Access the full feedback report here.

Where/when should the conference happen?

As noted, we are interested in finding a model for MERL Tech that allows for more diversity of voices and experiences, so we asked participants how often and where they thought we should hold MERL Tech in the future. The largest group (44.3%) felt we should run MERL Tech in DC every two years, and somewhere else in the intervening years. Some 23% said to keep it in DC every year, and around 15% suggested multiple MERL Tech conferences each year, in DC and elsewhere. (We were pleased that no one selected the option of “stop doing MERL Tech altogether, it’s unnecessary.”)

Given this response, we will continue exploring options with partners who would like to provide financial and logistical support to enable MERL Tech to happen outside of DC. Please contact Linda if you’d like to be involved or have ideas on how to make this happen.

New ways to get involved!

Last year, the idea of having a GitHub repository was raised, and this year we were excited to have GitHub join us. They had come up with the idea of creating a MERL Tech Center on GitHub as well, so it was a perfect match! More info here.

We also had a request to create a MERL Tech Slack channel (which we have done). Please get in touch with Linda by email or via Slack if you’d like to join us there for ongoing conversations on data collection, open source, technology (or other channels you request!)

As always you can also follow us on Twitter and MERL Tech News.

Four Reflections on the 2019 MERL Tech Dashboards Competition

by Amanda Makulec, Excella Labs. This post first appeared here.

Data visualization (viz) has come a long way in our MERL Tech community. Four years ago, the conversation was around “so you think you want a dashboard?”, which then evolved into a debate on dashboards as the silver-bullet solution (spoiler: they’re not). Fast forward to 2019, when we held the first plenary competition of dashboard designs on the main stage!

Wayan Vota and Linda Raftree, MERL Tech Organizers, were kind enough to invite me to be a judge for the dashboard competition. Let me say: judging is far less stressful than presenting. Having spoken at MERL Tech every year on a data viz topic since 2015, it felt novel to not be frantically reviewing slides the morning of the conference.

The competition sparked some reflections on how we’ve grown and where we can continue to improve as we use data visualization as one item in our MERL toolbox.

1 – We’ve moved beyond conversations about ‘pretty’ and are talking about how people use our dashboards.

Thankfully, our judging criteria and final selection were not limited to which dashboard was the most beautiful. Instead, we focused on the goal, how the data was structured, why the design was chosen, and the impact it created.

One of the best stories from the stage came from DAI’s Carmen Tedesco (one of three competition winners), who demoed a highly visual interface that even included custom spatial displays of how safe girls felt in different locations throughout a school. When the team demoed the dashboard to their Chief of Party, he was underwhelmed… because he was colorblind and couldn’t make sense of many of the visuals. They pivoted, added more tabular, text-focused, grayscale views, and the team was thrilled.

Carmen Tedesco presents a dashboard used by a USAID-funded education project in Honduras. Image from Siobhan Green: https://twitter.com/siobhangreen/status/1169675846761758724
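That kind of pivot is easy to prototype. Here is a minimal sketch (with made-up numbers, not the project’s actual data) of redundant encoding: grayscale shades, hatching and direct labels that keep a bar chart legible without relying on color:

```python
import matplotlib.pyplot as plt

# Illustrative data only; not the Honduras project's actual metrics.
locations = ["Classroom", "Hallway", "Playground", "Restroom"]
pct_feel_safe = [82, 64, 71, 45]

fig, ax = plt.subplots()
# Redundant encodings: grayscale-distinguishable shades plus hatching,
# so the chart still reads correctly for colorblind viewers.
shades = ["0.25", "0.45", "0.65", "0.85"]
hatches = ["//", "..", "xx", "--"]
bars = ax.bar(locations, pct_feel_safe, color=shades, edgecolor="black")
for bar, hatch, value in zip(bars, hatches, pct_feel_safe):
    bar.set_hatch(hatch)
    # Direct value labels double as a text-focused fallback.
    ax.annotate(f"{value}%", (bar.get_x() + bar.get_width() / 2, value),
                ha="center", va="bottom")
ax.set_ylabel("% reporting they feel safe")
ax.set_title("Reported safety by school location (illustrative)")
plt.show()
```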

Having a competition judged on impact, not just display, matters. What gets measured gets done, right? We need to reward and encourage the design and development of data visualization that has a purpose and helps someone do something – whether it’s raising awareness, making a decision, or something else – not just creating charts for the sake of telling a donor that we have a dashboard.

2 – Our conversations about data visualization need to be anchored in larger dialogues about data culture and data literacy.

We need to continue to move beyond talking about what we’re building and focus on for whom, why, and what else is needed for the visualizations to be used.

Creating a “data culture” on a small project team is complicated. In a large global organization or a slow-to-change government agency, it can feel impossible. Making data visual, nurturing that skillset within a team, and building a culture of data visualization is one part of the puzzle, but we need champions outside of the data and M&E (monitoring and evaluation) teams who support that organizational change. A Thursday morning MERL Tech session dug into eight dimensions of data readiness, all of which are critical to having dashboards actually get used – learn more about this work here.

Village Enterprise’s winning dashboard was simple in design, constructed of various bar charts on enterprise performance, but was tailored to different user roles to create customized displays. By serving up the data filtered to a specific user level, they encourage adoption and use instead of requiring a heavy mental load from users to filter to what they need.
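Role-scoped defaults like this are straightforward to express in code. A minimal sketch (the role names, columns and data are hypothetical, not Village Enterprise’s actual schema) might look like:

```python
import pandas as pd

# Illustrative enterprise performance data.
df = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "business_id": ["b1", "b2", "b3", "b4"],
    "monthly_revenue": [120, 95, 160, 80],
})

# Each role maps to the slice of data it should see by default,
# so users land on a pre-filtered view instead of filtering themselves.
ROLE_SCOPES = {
    "field_officer_north": df["region"] == "North",
    "field_officer_south": df["region"] == "South",
    "program_manager": df["region"].notna(),  # full view
}

def dashboard_view(role: str) -> pd.DataFrame:
    """Return the default dashboard slice for a given user role."""
    return df[ROLE_SCOPES[role]]

print(dashboard_view("field_officer_north"))  # only the North rows
```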

3 – Our data dashboards look far more diverse in scope, purpose, and design than the cluttered widgets of early days.

The three winners we picked were diverse in their project goals and displays, including a complex map, a PowerBI project dashboard, and a simple interface of bar charts designed for various user levels on local enterprise success metrics.

One of the winners – Fraym – was a complex, interactive map display allowing users to zoom in to the square kilometer level. Layers for various metrics, from energy to health, can be turned on or off depending on the use case. Huge volumes of data had to be ingested, including both spatial and quantitative datasets, to make the UI possible.
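A map UI with toggleable metric layers can be sketched in a few lines with an open-source library like folium; the coordinates, values and layer names below are illustrative assumptions, not Fraym’s actual data or stack:

```python
import folium

m = folium.Map(location=[6.45, 3.40], zoom_start=11)

# One FeatureGroup per metric, so each can be switched on or off.
energy = folium.FeatureGroup(name="Energy access")
folium.CircleMarker([6.46, 3.38], radius=8,
                    popup="Grid coverage: 72%").add_to(energy)

health = folium.FeatureGroup(name="Health facilities")
folium.CircleMarker([6.44, 3.42], radius=8,
                    popup="Clinic density: 1.3 per km2").add_to(health)

energy.add_to(m)
health.add_to(m)
folium.LayerControl().add_to(m)  # renders the layer on/off toggles
m.save("layers_demo.html")       # open in a browser to explore
```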

In contrast, the People’s Choice winner wasn’t a quantitative dashboard of charts and maps. Matter of Focus’ OutNav tool instead makes visual the certainty around elements of a theory of change, using encodings of color, saturation and layout within a workflow, and helps organizations show where they’ve contributed to change.

Seeing the diversity of displays, I’m hopeful that we’re moving away from one-size-fits-all solutions or reliance on a single tech stack (whether Excel, Tableau, PowerBI or something else) and continuing to focus more on crafting products that solve problems for someone, which may require us to continue to expand our horizons regarding the tools and designs we use.

4 – Design still matters though, and data and design nerds should collaborate more often.

That said, there remain huge opportunities for more design in our data displays. Last year, I gave a MERL Tech lightning talk on why no one is using your dashboard, which focused on the need for more integration of design principles into our data visualization development; those principles still resonate today.

Principles from graphic design, UX, and other disciplines can take a specific visualization from good to great – the more data nerds and designers collaborate, the better (IMHO). Otherwise, we’ll continue the epidemic of dashboards, many of which are tools designed to do ALL THE THINGS without being tailored enough to be usable by their most important audiences.

An invitation to join the Data Viz Society

If you’re interested in more discourse around data viz, consider joining the Data Viz Society (DVS) to connect with more than 8,000 members from around the globe (it’s free!) who have joined since we launched in February.

DVS connects visualization enthusiasts across disciplines, tech stacks, and expertise, and aims to collect and establish best practices, fostering a community that supports members as they grow and develop data visualization skills.

We (I’m the volunteer Operations Director) have a vibrant Slack workspace packed with topic and location channels (you’ll get an invite when you join), two-week-long moderated Topics in DataViz conversations, data viz challenges, our journal (Nightingale), and more.

There is more on ways to get involved in this thread, including our data viz practitioner survey results challenge (closing 30 September 2019), which has some fabulous cash prizes for your data viz submissions!

We’re actively looking for more diversity in our geographic representation, and would particularly welcome voices from countries outside of North America. A recent conversation about data viz in LMICs (low and middle income countries) featured primarily voices from headquarters staff – we’d love to hear more from the field.

I can’t wait to see what the data viz conversations are at MERL Tech 2020!