All posts by Linda Raftree

About Linda Raftree

Linda Raftree supports strategy, program design, research, and technology in international development initiatives. She co-founded MERL Tech in 2014 and Kurante in 2013. Linda advises Girl Effect on digital safety, security, and privacy and supports the organization with research and strategy. She is involved in developing responsible data policies for both Catholic Relief Services and USAID. Since 2011, she has been advising The Rockefeller Foundation’s Evaluation Office on the use of ICTs in monitoring and evaluation. Prior to becoming an independent consultant, Linda worked for 16 years with Plan International. Linda runs Technology Salons in New York City and advocates for ethical approaches to using ICTs and digital data in the humanitarian and development space. She is the co-author of several publications on technology and development, including Emerging Opportunities: Monitoring and Evaluation in a Tech-Enabled World with Michael Bamberger. Linda blogs at Wait… What? and tweets as @meowtree. See Linda’s full bio on LinkedIn.

What’s Happening with Tech and MERL?

by Linda Raftree, Independent Consultant and MERL Tech organizer

Back in 2014, the humanitarian and development sectors were in the heyday of excitement over innovation and Information and Communication Technologies for Development (ICT4D). The role of ICTs specifically for monitoring, evaluation, research and learning (aka “MERL Tech”) had not been systematized (as far as I know), and it was unclear whether there actually was “a field.” I had the privilege of writing a discussion paper with Michael Bamberger to explore how and why new technologies were being tested and used in the different steps of a traditional planning, monitoring and evaluation cycle. (See graphic 1 below, from our paper).

The approaches highlighted in 2014 focused on mobile phones: for example, text messages (SMS), mobile data gathering, use of mobiles for photos and recording, and mapping with dedicated handheld global positioning system (GPS) devices or with GPS built into mobile phones. Promising technologies included tablets, which were only beginning to be used for M&E; “the cloud,” which enabled easier updating of software and applications; remote sensing and satellite imagery; dashboards; and online software that helped evaluators do their work more easily. Social media was also taking off in 2014: it was seen as a potential way to monitor discussions among program participants and gather their feedback, and as an underutilized tool for wider dissemination of evaluation results and learning. Real-time data, big data, and feedback loops were emerging as ways to improve program monitoring and enable quicker adaptation.

In our paper, we outlined five main challenges for the use of ICTs for M&E: selectivity bias; technology- or tool-driven M&E processes; over-reliance on digital data and remotely collected data; low institutional capacity and resistance to change; and privacy and protection. We also suggested key areas to consider when integrating ICTs into M&E: quality M&E planning; design validity; the value-add (or not) of ICTs; using the right combination of tools; adapting and testing new processes before roll-out; technology access and inclusion; motivation to use ICTs; privacy and protection; unintended consequences; local capacity; measuring what matters (not just what the tech allows you to measure); and effectively using and sharing M&E information and learning.

We concluded that:

  • The field of ICTs in M&E is emerging and activity is happening at multiple levels and with a wide range of tools and approaches and actors. 
  • The field needs more documentation on the utility and impact of ICTs for M&E. 
  • Pressure to show impact may open up space for testing new M&E approaches. 
  • A number of pitfalls need to be avoided when designing an evaluation plan that involves ICTs. 
  • Investment in the development, application and evaluation of new M&E methods could help evaluators and organizations adapt their approaches throughout the entire program cycle, making them more flexible and adjusted to the complex environments in which development initiatives and M&E take place.

Where are we now? MERL Tech in 2019

Much has happened globally over the past five years in the wider field of technology, communications, infrastructure, and society, and these changes have influenced the MERL Tech space. Our 2014 focus on basic mobile phones, SMS, mobile surveys, mapping, and crowdsourcing might now appear quaint, considering that worldwide access to smartphones and the Internet has expanded beyond the expectations of many. We know that access is not evenly distributed, but the fact that more and more people are getting online cannot be disputed. Some MERL practitioners are using advanced artificial intelligence, machine learning, biometrics, and sentiment analysis in their work. And as smartphone and Internet use continue to grow, more data will be produced by people around the world. The way that MERL practitioners access and use data will likely continue to shift, and the composition of MERL teams and their required skillsets will also change.

The excitement over innovation and new technologies seen in 2014 could also be seen as naive, however, considering some of the negative consequences that have since emerged: for example, social media-inspired violence (such as that in Myanmar), election and political interference through the Internet, misinformation and disinformation, and the race to the bottom driven by the online “gig economy.”

In this changing context, a team of MERL Tech practitioners (both enthusiasts and skeptics) embarked on a second round of research in order to try to provide an updated “State of the Field” for MERL Tech that looks at changes in the space between 2014 and 2019.

Based on MERL Tech conferences and wider conversations in the MERL Tech space, we identified three general waves of technology emergence in MERL:

  • First wave: Tech for Traditional MERL: Use of technology (including mobile phones, satellites, and increasingly sophisticated databases) to do ‘what we’ve always done,’ with a focus on digital data collection and management. For these uses of “MERL Tech” there is a growing evidence base. 
  • Second wave: Big Data: Exploration of big data and data science for MERL purposes. While plenty has been written about big data for other sectors, the literature on the use of big data and data science for MERL is somewhat limited, and it is more focused on potential than actual use. 
  • Third wave: Emerging approaches: Technologies and approaches that generate new sources and forms of data; offer different modalities of data collection; provide new ways to store and organize data; and provide new techniques for data processing and analysis. Their potential has been explored, but there seems to be little evidence base on their actual use for MERL. 

We’ll be doing a few sessions at the American Evaluation Association conference this week to share what we’ve been finding in our research. Please join us if you’ll be attending the conference!

Session Details:

Thursday, Nov 14, 2.45-3.30pm: Room CC101D

Friday, Nov 15, 3.30-4.15pm: Room CC101D

Saturday, Nov 16, 10.15-11am. Room CC200DE

Creating and Measuring Impact in Digital Social and Behavior Change Communication 

By Jana Melpolder

People are accessing the Internet, smartphones, and social media like never before, and the social and behavior change communication community is exploring the use of digital tools and social media for influencing behavior. The MERL Tech session, “Engaging for responsible change in a connected world: Good practices for measuring SBCC impact,” was put together by Linda Raftree, Khwezi Magwaza, and Yvonne MacPherson, and it set out to explore Digital Social and Behavior Change Communication (SBCC).

Linda is the MERL Tech Organizer, but she also works as an independent consultant. She has worked as an Advisor for Girl Effect on research and digital safeguarding in digital behavior change programs with adolescent girls. She also recently wrote a landscaping paper for iMedia on Digital SBCC. Linda opened the session by sharing lessons from the paper, complemented by learning drawn from research and practice at Girl Effect.

Linda shares good practices from a recent landscape report on digital SBCC.

Digital SBCC is expanding due to smartphone access. In the work with Girl Effect, it was clear that even when girls in lower income communities did not own smartphones they often borrowed them. Project leaders should consider several relevant theories on influencing human behavior, such as social cognitive theory, behavioral economics, and social norm theory. Additionally, an ethical issue in SBCC projects is whether there is transparency about the behavior change efforts an organization is carrying out, and whether people even want their behaviors to be challenged or changed.

When it comes to creating a SBCC project, Linda shared a few tips: 

  • Users are largely unaware of data risks when sharing personal information online
  • We need to understand peoples’ habits. Being in tune with local context is important, as is design for habits, preferences, and interests.
  • Avoid being fooled by vanity metrics. For example, even if something had a lot of clicks, how do you know an action was taken afterwards? 
  • Data can be sensitive to deal with. For some, just looking at information online, such as facts on contraception, can put them at risk. Be sure to be careful of this when developing content.
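The vanity-metrics point above can be made concrete with a minimal sketch. All of the numbers here are invented for illustration; the idea is simply that raw clicks say little on their own, and the downstream conversion rate is closer to the metric that matters.

```python
# Hypothetical campaign numbers -- invented for illustration, not from any real program.
clicks = 12_000   # raw clicks on a campaign message (a classic "vanity" metric)
actions = 240     # people who actually completed the target behavior afterwards

# The conversion rate, not the click count, tells you whether action was taken.
conversion_rate = actions / clicks
print(f"{clicks} clicks, but only {conversion_rate:.1%} converted to action")
```

A campaign with half the clicks but double the conversion rate would be the stronger result, which a clicks-only dashboard would hide.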

The session’s second presenter was Khwezi Magwaza, who has worked as a writer and as a radio, digital, and television producer. She was a content editor for Praekelt.org and also served as the Content Lead at Girl Effect. Khwezi currently advises an International Rescue Committee platform in Tanzania that aims to support improved gender integration in refugee settings. Her lessons from working in digital SBCC included:

  • Sex education can be taboo, and community healthcare workers are often people’s first touch point. 
  • There is a difference between social behavior change and, more precisely, individual behavior change. 
  • People and organizations working in SBCC need to think outside the box and learn how to measure behavior change in non-traditional ways. 
  • Just because something is free doesn’t mean people will like it. We need to aim for high quality, modern, engaging content when creating SBCC programs.
  • It’s also critical to hire the right staff. Khwezi suggested building up engineering capacity in house rather than relying entirely on external developers. Having a digital company hand something over to you that you’re stuck with is like inheriting a dinosaur. Organizations need to have a real working relationship with their tech supplier and to make sure the tech can grow and adapt as the program does.
Panelists discuss digital SBCC with participants.

The third panelist from the session was Yvonne MacPherson, the U.S. Director of BBC Media Action, the BBC’s international development charity, which uses media and communication to further development. Yvonne noted that:

  • Donors often want an app, but it’s important to push back on solely digital platforms. 
  • Face-to-face contact and personal connections are vital in programs, and social media should not be the only form of communication within SBCC programs.
  • There is a need to learn from social media outreach experiences across sectors, but the contexts in which INGOs and national NGOs work differ from the environments where most people with digital engagement skills have worked. More research is needed, and it’s critical to understand the local context and behaviors of the populations we want to engage.
  • Challenges are being seen with so-called “dark channels” (WhatsApp, Facebook Messenger), to which many people are moving and where it becomes difficult to track behaviors. Ethical issues with dark channels have also emerged: they offer rich content options, but researchers have yet to figure out how to obtain consent to use these channels for research without interrupting the dynamics within them.

I asked Yvonne if, in her experience and research, she thought Instagram or Facebook influencers (like celebrities) influenced young girls more than local community members could. She said there’s really no one answer for that one. Detailed ethnographic research is needed to understand the local context before making any decisions about the design of an SBCC campaign. It’s critical to understand the target group: what ages they are, where they come from, and other similar questions.

Resources for the Reader

To learn more about digital SBCC check out these resources, or get in touch with each of the speakers on Twitter:

5 tips for operationalizing Responsible Data policy

By Alexandra Robinson and Linda Raftree

MERL and development practitioners have long wrestled with complex ethical, regulatory, and technical aspects of adopting new data approaches and technologies. The topic of responsible data (RD) has gained traction over the past five years or so, and a handful of early adopters have developed and begun to operationalize institutional RD policies. Translating policy into practical action, however, can feel daunting to organizations. Constrained budgets, complex internal bureaucracies, and ever-evolving technology and regulatory landscapes make it hard to even know where to start.

The Principles for Digital Development provide helpful high level standards, and donor guidelines (such as USAID’s Responsible Data Considerations) offer additional framing. But there’s no one-size-fits-all policy or implementation plan that organizations can simply copy and paste in order to tick all the responsible data boxes. 

We don’t think organizations should do that anyway, given that each organization’s context and operating approach is different, and policy means nothing if it’s not rolled out through actual practice and behavior change!

In September, we hosted a MERL Tech pre-workshop on Operationalizing Responsible Data to discuss and share different ways of turning responsible data policy into practice. Below we’ve summarized some tips shared at the workshop. RD champions in organizations of any size can consider these when developing and implementing RD policy.

1. Understand Your Context & Extend Empathy

  • Before developing policy, conduct a non-punitive assessment (a.k.a. a landscape assessment, self-assessment, or staff research process) of existing data practices, norms, and decision-making structures. This should engage everyone who will be using, or be affected by, the new policies and practices. Help everyone relax and feel comfortable sharing how they’ve been managing data up to now so that the organization can then improve. (Hint: avoid the term ‘audit,’ which makes everyone nervous.)
  • Create ‘safe space’ to share and learn through the assessment process:
    • Allow staff to speak anonymously about their challenges and concerns whenever possible
    • Highlight and reinforce promising existing practices
    • Involve people in a ‘self-assessment’
    • Use participatory workshops (e.g., work with a team to map a project’s data flows or conduct a Privacy Impact Assessment or a Risk-Benefits Assessment). This allows everyone who participates to gain RD awareness and learn new practical tools, while highlighting any areas that need attention. The workshop lead or “RD champion” can also get a better sense of the wider organization’s knowledge, attitudes, and practices related to RD.
    • Acknowledge (and encourage institutional leaders to affirm) that most staff don’t have “RD expert” written into their job descriptions; reinforce that staff will not be ‘graded’ or evaluated on skills they weren’t hired for.
  • Identify organizational stakeholders likely to shape, implement, or own aspects of RD policy and tailor your engagement strategies to their perspectives, motivations, and concerns. Some may feel motivated financially (avoiding fines or the cost of a data breach); others may be motivated by human rights or ethics; whereas some others might be most concerned with RD with respect to reputation, trust, funding and PR.
  • Map organizational policies, major processes (like procurement, due diligence, grants management), and decision making structures to assess how RD policy can be integrated into these existing activities.

2. Consider Alternative Models to Develop RD Policy 

  • There is no ‘one size fits all’ approach to developing RD policy. As the (still small, but promising) number of organizations adopting policy grows, different approaches are emerging. Here are some that we’ve seen:
    • Top-down: An institutional-level policy is developed, normally at the request of someone on the leadership team/senior management. It is then adapted and applied across projects, offices, etc. 
      • Works best when there is strong leadership buy-in for RD policy and a focal point (e.g. an ‘Executive Sponsor’) coordinating policy formation and navigating stakeholders
    • Bottom-up: A group of staff are concerned about RD but do not have support or interest from senior leadership, so they ‘self-start’ the learning process and begin shaping their own practices, joining together, meeting, and communicating regularly until they have wider buy-in and can approach leadership with a use case and budget request for an organization-wide approach.
      • Good option if there is little buy-in at the top and you need to build a case for why RD matters.
    • Project- or Team-Generated: Development and application of RD policies are piloted within a targeted project or projects or on one team. Based on this smaller slice of the organization, the project or team documents its challenges, process, and lessons learned to build momentum for and inform the development of future organization-wide policy. 
      • Promising option when organizational awareness and buy-in for RD is still nascent and/or resources to support RD policy formation and adoption (staff, financial, etc.) are limited.
    • Hybrid approach: Organizational policy/policies are developed through pilot testing across a reasonably-representative sample of projects or contexts. For example, an organization with diverse programmatic and geographical scope develops and pilots policies in a select set of country offices that can offer different learning and experiences; e.g., a humanitarian-focused setting, a development-focused setting, and a mixed setting; a small office, medium sized office and large office; 3-4 offices in different regions; offices that are funded in various ways; etc.  
      • Promising option when an organization is highly decentralized and works across diverse country contexts and settings. Supports the development of approaches that are relevant and responsive to diverse capacities and data contexts.

3. Couple Policy with Practical Tools, and Pilot Tools Early and Often

  • In order to translate policy into action, couple it with practical tools that support existing organizational practices. 
  • Make sure tools and processes empower staff to make decisions and relate clearly to policy standards or components; for example:
    • If the RD policy includes a high-level standard such as, “We ensure that our partnerships with technology companies align with our RD values,” give staff tools and guidance to assess that alignment. 
  • When developing tools and processes, involve target users early and iteratively. Don’t worry if draft tools aren’t perfectly formatted. Design with users to ensure tools are actually useful before you sink time into tools that will sit on a shelf at best, and confuse or overburden staff at worst. 

4. Integrate and “Right-Size” Solutions 

  • As RD champions, it can be tempting to approach RD policy in a silo, forgetting it is one of many organizational priorities. Be careful to integrate RD into existing processes, align RD with decision-making structures and internal culture, and do not place unrealistic burdens on staff.
  • When building tools and processes, work with stakeholders to develop responsibility assignment charts (e.g. RACI, MOCHA) and determine decision makers.
  • When developing responsibility matrices, estimate the hours each stakeholder (including partners, vendors, and grantees) will dedicate to a particular tool or process. Work with anticipated end users to ensure that processes:
    • Can realistically be carried out within a normal workload
    • Will not excessively burden staff and partners
    • Are realistically proportionate to the size, complexity, and risk involved in a particular investment or project
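The workload estimation described above can be sketched in code. This is a hypothetical example: the stakeholders, RACI roles, hour estimates, and the 4-hour cap are all invented for illustration, not drawn from any real RD process or tool.

```python
# Hypothetical responsibility matrix for a single RD process. Each entry:
# (stakeholder, RACI role, estimated hours per month). All values are made up.
matrix = [
    ("program officer", "Responsible", 6.0),   # runs the process day to day
    ("country director", "Accountable", 1.0),  # signs off on decisions
    ("IT lead", "Consulted", 2.0),
    ("partner NGO", "Informed", 0.5),
]

HOURS_CAP = 4.0  # assumed monthly ceiling for "realistic within a normal workload"

def overloaded(matrix, cap=HOURS_CAP):
    """Return stakeholders whose estimated monthly RD hours exceed the cap."""
    totals = {}
    for who, _role, hours in matrix:
        totals[who] = totals.get(who, 0.0) + hours
    return {who: h for who, h in totals.items() if h > cap}

print(overloaded(matrix))  # flags the program officer's 6-hour estimate
```

Even a rough tally like this surfaces the conversation the tip recommends: if one role carries most of the burden, the process needs to be simplified or the responsibility redistributed before rollout.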

5. Bridge Policy and Behavior Change through Accompaniment & Capacity Building 

  • Integrating RD policy and practices requires behavior change and can feel technically intimidating to staff. Remember to reassure staff that no one (not even the best-resourced technology firms!) has responsible data mastered, and that perfection is not the goal.
  • In order to feel confident using new tools and approaches to make decisions, staff need knowledge to analyze information. Skills and knowledge required will be different according to role, so training should be adapted accordingly. While IT staff may need to know the ins and outs of network security, general program officers certainly do not. 
  • Accompany staff as they integrate RD processes into their work. Walk alongside them, answering questions along the way, but more importantly, helping staff build confidence to develop their own internal RD compass. That way the pool of RD champions will grow!

What approaches have you seen work in your organization?

MERL Tech DC 2019 Feedback Report

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of the conference and related community are to:

  • Improve development, tech, data & MERL literacy
  • Help people find and use evidence & good practices
  • Promote ethical and appropriate use of technology
  • Build and strengthen a “MERL Tech community”
  • Spot trends and future-scope for the sector
  • Transform and modernize MERL in an intentionally responsible and inclusive way

Our sixth MERL Tech DC conference took place on September 5-6, 2019, and we held four pre-workshops on September 4. Some 350 people from 194 organizations joined us for the two days, and another 100 people attended the pre-workshops. About 56% of participants attended for the first time, whereas 44% were returnees.

Who attended?

Attendees came from a wide range of organization types and professions.

Conference Themes

The theme for this year’s conference was “Taking Stock” and we had 4 sub-themes:

  1. Tech and Traditional MERL
  2. Data, Data, Data
  3. Emerging Approaches to MERL
  4. The Future of MERL

State of the Field Research

A small team shared their research on “The MERL Tech State of the Field,” organized into the above 4 themes. The research will be completed and shared on the MERL Tech site before the end of 2019. (We’ll be presenting it at the South African Evaluation Association conference in October and at the American Evaluation Association conference in November.)

As always, MERL Tech conference sessions were related to: technology for MERL, MERL on ICT4D and Digital Development programs, MERL of MERL Tech, data for decision-making, ethical and responsible data approaches, and cross-disciplinary community building. (See the full agenda here.)

We checked in with participants on the last day to see how the field had shifted since 2015, when our keynote speaker (Ben Ramalingam) gave some suggestions on how tech could improve MERL.

Ben’s future vision
Where MERL Tech 2019 sessions fell on the expired-tired-wired schematic.
What participants would add to the schematic to update it for 2019 and beyond.

Diversity and Inclusion

We have been making an effort to improve diversity and inclusion at the conference and in the MERL Tech space. An unofficial estimate of speaker racial and gender diversity is below. As compared to 2018, when we first began tracking, the number of women of color speakers increased by 5% and men of color by 2%. The number of white female speakers decreased by 6% and the number of white male speakers went down by 1%. Our gender balance remained fairly consistent.

Where we are failing on diversity and inclusion is at having speakers and participants from outside of North America and Europe – that likely has to do with cost and visas which affect who can attend. It also has to do with who organizations select to represent them at MERL Tech. We’re continuing to try to find ways to collaborate with groups working on MERL Tech in different regions. We believe that new and/or historically marginalized voices should be more involved in shaping the future of the sector and the future of MERL Tech. (If you would like to support us on this or get involved, please contact Linda!)

Post Conference Feedback

Some 25% of participants filled in the post-conference survey and 85% rated their experience “good” or “awesome” (up from 70% in 2018). Answers did not significantly differ based on whether a participant had attended previously or not. Another 8.5% rated sessions via the “Sched” conference agenda app, with an average session satisfaction rating of 9.1 out of 10.

The top rated session was on “Decolonizing Data and Technology in MERL.” As one participant said, “It shook me out of my complacency. It is very easy to think of the tech side of the work we do as ‘value free’, but this is not the case. Being a development practitioner it is important for me to think about inequality in tech and data further than just through the implementation of the projects we run.” Another noted that “As a white, gay male who has a background in international and intercultural education, it was great to see other fields bringing to light the decolonizing mindset in an interactive way. The session was enlightening and brought up conversation that is typically talked about in small groups, but now it was highlighted in front of the entire audience.”

Sign up for MERL Tech News if you’d like to read more about this and other sessions. We’re posting a series of posts and session summaries.

Key suggestions for improving next time were similar to those we hear every year: less showcasing and pitching, ensuring that titles match what is actually delivered at the session, ensuring that presenters are well-prepared, and making sessions relevant, practical, and applicable.

Additionally, several people commented that the venue had some issues with noise from conversations in the common area spilling into breakout rooms and making it hard to focus. Participants also complained that there was a large amount of trash and waste produced, and suggested more eco-friendly catering for next time.

Access the full feedback report here.

Where/when should the conference happen?

As noted, we are interested in finding a model for MERL Tech that allows for more diversity of voices and experiences, so we asked participants how often and where they thought we should do MERL Tech in the future. A plurality (44.3%) felt we should run MERL Tech in DC every 2 years and somewhere else in the year in between. Some 23% said to keep it in DC every year, and around 15% suggested multiple MERL Tech conferences each year in DC and elsewhere. (We were pleased that no one selected the option of “stop doing MERL Tech altogether, it’s unnecessary.”)

Given this response, we will continue exploring options for partners who would like to support financially and logistically to enable MERL Tech to happen outside of DC. Please contact Linda if you’d like to be involved or have ideas on how to make this happen.

New ways to get involved!

Last year, the idea of having a GitHub repository was raised, and this year we were excited to have GitHub join us. They had come up with the idea of creating a MERL Tech Center on GitHub as well, so it was a perfect match! More info here.

We also had a request to create a MERL Tech Slack channel (which we have done). Please get in touch with Linda by email or via Slack if you’d like to join us there for ongoing conversations on data collection, open source, technology (or other channels you request!)

As always you can also follow us on Twitter and MERL Tech News.

Wrapping up MERL Tech DC

On September 6, we wrapped up three days of learning, reflecting, debating, and sharing at MERL Tech DC. The conference kicked off with four pre-workshops on September 4: Big Data and Evaluation; Text Analytics; Spatial Statistics; and Responsible Data. Then, on September 5-6, we had our regular two-day conference, including opening talks from Tariq Khokhar, The Rockefeller Foundation, and Yvonne MacPherson, BBC Media Action; one-hour sessions; two-hour sessions; lightning talks; a dashboard contest; a plenary session; and two happy hour events.

This year’s theme was “The State of the Field” of MERL Tech and we aimed to explore what we as a field know about our work and what gaps remain in the evidence base. Conference strands included: Tech and Traditional MERL; Data, Data, Data; Emerging Approaches; and The Future of MERL.

Zach Tilton, Western Michigan University; Kerry Bruce, Clear Outcomes; and Alexandra Robinson, Moonshot Global; update the plenary on the State of the Field research that MERL Tech has undertaken over the past year. Photo by Christopher Neu.
Tariq Khokhar of The Rockefeller Foundation on “What Next for Data Science in Development?” Photo by Christopher Neu.
Participants checking out what session to attend next. Photo by Christopher Neu.
Silvia Salinas, Strategic Director FuturaLab; Veronica Olazabal, The Rockefeller Foundation; and Adeline Sibanda, South to South Evaluation Cooperation; talk in plenary about Decolonizing Data and Technology, whether we are designing evaluations within a colonial mindset, and the need to disrupt our own minds. Photo by Christopher Neu.
What is holding women back from embracing tech in development? Patricia Mechael, HealthEnabled; Carmen Tedesco, DAI; Jaclyn Carlsen, USAID; and Priyanka Pathak, Samaj Studio; speak at their “Confidence not Competence” panel on women in tech. Photo by Christopher Neu.
Reid Porter, DevResults; Vidya Mahadevan, Bluesquare; Christopher Robert, Dobility; and Sherri Haas, Management Sciences for Health; go beyond “Make versus Buy” in a discussion on how to bridge the MERL – Tech gap. Photo by Christopher Neu.
Participants had plenty of comments and questions as well. Photo by Christopher Neu.
Drones, machine learning, text analytics, and more. Ariel Frankel, Clear Outcomes, facilitates a group in the session on Emerging MERL approaches. Photo by Christopher Neu.
The Principles for Digital Development have been heavily adopted by the MERL Tech sector as a standard for Digital Development. Allana Nelson, DIAL, shares thoughts on how the Principles can be used as an evaluative tool. Photo by Christopher Neu.
Kate Scaife Diaz, TechnoServe, explains the “Marie Kondo” approach to MERL Tech in her Lightning Talk: “Does Your Tech Spark Joy? The Minimalist’s Approach to MERL Tech.” Photo by Christopher Neu.

In addition to learning and sharing, one of our main goals at MERL Tech is to create community. “I didn’t know there were other people working on the same thing as I am!” and “This MERL Tech conference is like therapy!” were some of the things we heard on Friday night as we closed down.

Stay tuned for blog posts about sessions and overall impressions, as well as our conference report once feedback surveys are in!

The MERL Tech DC agenda is ready!

Thanks to the hard work of our presenters, session leads, and Steering Committee, the 2019 MERL Tech DC agenda is ready, and we’re really excited about it!

On September 5-6, we’ll have our regular two days of lightning talks, break-out sessions, panels, Fail Fest, demo tables, and networking with folks from diverse sectors who all coincide at the intersection of MERL and Tech. Take a peek at the agenda here.

This year we are offering four pre-workshops on September 4 (separate registration required). Register at one of the links below. (Space is limited, so secure your spot now!)

Dashboard Contest!

You can also enter the MERL Tech Dashboard Contest and win a prize for the best dashboard (based on four criteria, and according to an esteemed panel of judges)! If you have a dashboard that you’d like to show off, sign up here to enter.

Fail Fest!

As usual, Wayan Vota will be organizing a “Fail Fest” during our Happy Hour Reception on Thursday, Sept 5th. More information coming shortly, but start thinking about what you may want to share….

Register Now!

Registration is open, and we normally sell out, so get your tickets soon! We also have a few Demo Tables left, and you can reserve one here.

Please get in touch with any questions, and we’re looking forward to seeing you there!  

MERL Tech DC Session Ideas are due Monday, Apr 22!

MERL Tech is coming up in September 2019, and there are only a few days left to get your session ideas in for consideration! We’re actively seeking practitioners in monitoring, evaluation, research, learning, data science, technology (and other related areas) to facilitate every session.

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. Submit your session ideas by midnight ET on April 22, 2019. You will hear back from us by May 20 and, if selected, you will be asked to submit the final session title, summary and outline by June 17.

Submit your session ideas here by April 22, midnight ET

This year’s conference theme is MERL Tech: Taking Stock

Conference strands include:

Tech and traditional MERL:  How is digital technology enabling us to do what we’ve always done, but better (consultation, design, community engagement, data collection and analysis, databases, feedback, knowledge management)? What case studies can be shared to help the wider sector learn and grow? What kinks do we still need to work out? What evidence base exists that can support us to identify good practices? What lessons have we learned? How can we share these lessons and/or skills with the wider community?

Data, data, and more data: How are new forms and sources of data allowing MERL practitioners to enhance their work? How are MERL Practitioners using online platforms, big data, digitized administrative data, artificial intelligence, machine learning, sensors, drones? What does that mean for the ways that we conduct MERL and for who conducts MERL? What concerns are there about how these new forms and sources of data are being used and how can we address them? What evidence shows that these new forms and sources of data are improving MERL (or not improving MERL)? What good practices can inform how we use new forms and sources of data? What skills can be strengthened and shared with the wider MERL community to achieve more with data?

Emerging tools and approaches: What can we do now that we’ve never done before? What new tools and approaches are enabling MERL practitioners to go the extra mile? Is there a use case for blockchain? What about facial recognition and sentiment analysis in MERL? What are the capabilities of these tools and approaches? What early cases or evidence is there to indicate their promise? What ideas are taking shape that should be tried and tested in the sector? What skills can be shared to enable others to explore these tools and approaches? What are the ethical implications of some of these emerging technological capabilities?

The Future of MERL: Where should we be going and what should the future of MERL look like? What does the state of the sector, of digital data, of technology, and of the world in which we live mean for an ideal future for the MERL sector? Where do we need to build stronger bridges for improved MERL? How should we partner and with whom? Where should investments be taking place to enhance MERL practices, skills and capacities? How will we continue to improve local ownership, diversity, inclusion and ethics in technology-enabled MERL? What wider changes need to happen in the sector to enable responsible, effective, inclusive and modern MERL?

Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines. Please consider these in your session proposals and in how you are choosing your speakers and facilitators.

Submit your session ideas now!

MERL Tech is dedicated to creating a safe, inclusive, welcoming and harassment-free experience for everyone. Please review our Code of Conduct. Session submissions are reviewed and selected by our steering committee.

Join us for MERL Tech DC, Sept 5-6th!

MERL Tech DC: Taking Stock

September 5-6, 2019

FHI 360 Academy Hall, 8th Floor
1825 Connecticut Avenue NW
Washington, DC 20009

We gathered at the first MERL Tech Conference in 2014 to discuss how technology was enabling the field of monitoring, evaluation, research and learning (MERL). Since then, rapid advances in technology and data have altered how most MERL practitioners conceive of and carry out their work. New media and ICTs have permeated the field to the point where most of us can’t imagine conducting MERL without the aid of digital devices and digital data.

The rosy picture of the digital data revolution and an expanded capacity for decision-making based on digital data and ICTs has been clouded, however, with legitimate questions about how new technologies, devices, and platforms — and the data they generate — can lead to unintended negative consequences or be used to harm individuals, groups and societies.

Join us in Washington, DC, on September 5-6 for this year’s MERL Tech Conference where we’ll be taking stock of changes in the space since 2014; showcasing promising technologies, ideas and case studies; sharing learning and challenges; debating ideas and approaches; and sketching out a vision for an ideal MERL future and the steps we need to take to get there.

Conference strands:

Tech and traditional MERL:  How is digital technology enabling us to do what we’ve always done, but better (consultation, design, community engagement, data collection and analysis, databases, feedback, knowledge management)? What case studies can be shared to help the wider sector learn and grow? What kinks do we still need to work out? What evidence base exists that can support us to identify good practices? What lessons have we learned? How can we share these lessons and/or skills with the wider community?

Data, data, and more data: How are new forms and sources of data allowing MERL practitioners to enhance their work? How are MERL Practitioners using online platforms, big data, digitized administrative data, artificial intelligence, machine learning, sensors, drones? What does that mean for the ways that we conduct MERL and for who conducts MERL? What concerns are there about how these new forms and sources of data are being used and how can we address them? What evidence shows that these new forms and sources of data are improving MERL (or not improving MERL)? What good practices can inform how we use new forms and sources of data? What skills can be strengthened and shared with the wider MERL community to achieve more with data?

Emerging tools and approaches: What can we do now that we’ve never done before? What new tools and approaches are enabling MERL practitioners to go the extra mile? Is there a use case for blockchain? What about facial recognition and sentiment analysis in MERL? What are the capabilities of these tools and approaches? What early cases or evidence is there to indicate their promise? What ideas are taking shape that should be tried and tested in the sector? What skills can be shared to enable others to explore these tools and approaches? What are the ethical implications of some of these emerging technological capabilities?

The Future of MERL: Where should we be going and what should the future of MERL look like? What does the state of the sector, of digital data, of technology, and of the world in which we live mean for an ideal future for the MERL sector? Where do we need to build stronger bridges for improved MERL? How should we partner and with whom? Where should investments be taking place to enhance MERL practices, skills and capacities? How will we continue to improve local ownership, diversity, inclusion and ethics in technology-enabled MERL? What wider changes need to happen in the sector to enable responsible, effective, inclusive and modern MERL?

Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines.

Submit your session ideas, register to attend the conference, or reserve a demo table for MERL Tech DC now!

You’ll join some of the brightest minds working on MERL across a wide range of disciplines – evaluators, development and humanitarian MERL practitioners, small and large non-profit organizations, government and foundations, data scientists and analysts, consulting firms and contractors, technology developers, and data ethicists – for 2 days of in-depth sharing and exploration of what’s been happening across this multidisciplinary field and where we should be heading.

Report back on MERL Tech DC

Day 1, MERL Tech DC 2018. Photo by Christopher Neu.

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of “MERL Tech” as an initiative are to:

  • Transform and modernize MERL in an intentionally responsible and inclusive way
  • Promote ethical and appropriate use of tech (for MERL and more broadly)
  • Encourage diversity & inclusion in the sector & its approaches
  • Improve development, tech, data & MERL literacy
  • Build/strengthen community, convene, help people talk to each other
  • Help people find and use evidence & good practices
  • Provide a platform for hard and honest talks about MERL and tech and the wider sector
  • Spot trends and future-scope for the sector

Our fifth MERL Tech DC conference took place on September 6-7, 2018, with a day of pre-workshops on September 5th. Some 300 people from 160 organizations joined us for the two days, and another 70 people attended the pre-workshops.

Attendees came from a wide diversity of professions and disciplines:

What professional backgrounds did we see at MERL Tech DC in 2018?

An unofficial estimate of speaker racial and gender diversity is here.

Gender balance on panels

At this year’s conference, we focused on 5 themes (See the full agenda here):

  1. Building bridges, connections, community, and capacity
  2. Sharing experiences, examples, challenges, and good practice
  3. Strengthening the evidence base on MERL Tech and ICT4D approaches
  4. Facing our challenges and shortcomings
  5. Exploring the future of MERL

As always, sessions were related to: technology for MERL, MERL of ICT4D and Digital Development programs, MERL of MERL Tech, digital data for adaptive decisions/management, ethical and responsible data approaches, and cross-disciplinary community building.

Big Data and Evaluation Session. Photo by Christopher Neu.

Sessions included plenaries, lightning talks and breakout sessions. You can find a list of sessions here, including any presentations that have been shared by speakers and session leads. (Go to the agenda and click on the session of interest. If we have received a copy of the presentation, there will be a link to it in the session description).

One topic that we explored more in-depth over the two days was the need to get better at measuring ourselves and understanding both the impact of technology on MERL (the MERL of MERL Tech) and the impact of technology overall on development and societies.

As Anahi Ayala Iacucci said in her opening talk — “let’s think less about what technology can do for development, and more about what technology does to development.” As another person put it, “We assume that access to tech is a good thing and immediately helps development outcomes — but do we have evidence of that?”

Feedback from participants

Some 17.5% of participants filled out our post-conference feedback survey, and 70% of them rated their experience either “awesome” or “good”. Another 7% of participants rated individual sessions through the “Sched” app, with an average session satisfaction rating of 8.8 out of 10.

Topics that survey respondents suggested for next time include: more basic and more advanced tracks, more sessions relating to ethics and responsible data, and a greater focus on accountability in the sector. Read the full Feedback Report here!

What’s next? State of the Field Research!

In order to arrive at an updated sense of where the field of technology-enabled MERL is, a small team of us is planning to conduct some research over the next year. At our opening session, we did a little crowdsourcing to gather input and ideas about what the most pressing questions are for the “MERL Tech” sector.

We’ll be keeping you informed here on the blog about this research and welcome any further input or support! We’ll also be sharing more about individual sessions here.

MERL Tech Jozi Feedback Report

MERL Tech Jozi took place on August 1-2, 2018. Below are some highlights from the post-conference survey that was sent to participants requesting feedback on their MERL Tech Jozi experience. Thirty-four percent of our attendees filled out the post-conference survey via Google Forms.

Overall Experience

Here’s how survey participants rated their overall experience:

Participants’ favorite sessions

The sessions most frequently mentioned as favorites, and some of the reasons why, included:

Conducting a Baseline of the ICT Ecosystem – Genesis Analytics and DIAL

  • …interactive session and felt practical. I could easily associate with what the team was saying. I really hope these learnings make it to implementation and start informing decision-making around funding! The presenters were also great.
  • …interesting and engaging, findings were really relevant to the space.
  • …shared lessons and insights resonated with my own professional experience. The discussions were fruitful and directly relevant to my line of work.
  • …incredibly useful.
  • The study confirmed a lot of my perceptions as an IT developer in the MERL space, but now I have some more solid backup. I will use this in my webinars and consulting on “IT for M&E.”

Datafication Discrimination — Media Monitoring Africa, Open Data Durban, Amandla.mobi and Oxfam South Africa

  • Linked both MERL and Tech to programme and focussed on the impact of MERL Tech in terms of sustainable, inclusive development.
  • Great panel, very knowledgeable, something different to the usual M&E. Interactive and diverse.
  • …probably most critical and informative in terms of understanding where the sector was at… the varied level of information across the audience and the panel was fascinating, if slightly worrying about how unclear we are as an M&E sector.

When WhatsApp Becomes About More Than Messaging – Genesis Analytics, Every1Mobile and Praekelt.org

  • As an evaluator, I have never thought of using WhatsApp as a way of communicating with potential beneficiaries. It made me think about different ways of getting in touch with beneficiaries of a programme, and getting them to participate in a survey.
  • The different case studies included examples, great media, a good Q&A session at the end, and I learnt new things. WhatsApp is only just reaching its potential in mHealth, so it was good to learn real-life lessons.
  • Hearing about the opportunities and challenges of applying a tool in different contexts and for different purposes gave good all-around insights.

Social Network Analysis – Data Innovators and Praekelt.org

  • I was already very familiar with SNA but had not had the opportunity to use it for a couple of years. Hearing this presentation with examples of how others have used it really inspired me, and I’ve since sketched out a new project using SNA on data we’re currently gathering for a new product! I came away feeling really inspired and excited about doing the analysis.

Least favorite sessions

Where participants rated sessions as their “least favorite,” it was because:

  • The link to technology was not clear
  • It felt like a sales pitch
  • It felt extractive
  • The speaker went on too long
  • Views on MERL or Tech seemed old-fashioned
Topics that need more focus in the future

Unpack the various parts of “M” “E” “R” “L”

  • Technology across MERL, not just monitoring. There was a lot of technology for data collection and tracking, but little for the E, R, and L in MERL.
  • More evaluation?
  • The focus was very much on evaluation (from the sessions I attended) and I feel like we did not talk about the monitoring, research and learning so much. This is huge for overall programme implementation and continuously learning from our data. Next time, I would like to talk a bit more about how organisations are actually USING data day-to-day to make decisions (monitoring) and learning from it to adapt programmes.
  • The R of MERL is hardly discussed at all. Target this for the next MERL Tech.

New digital approaches / data science

  • AI and how it can introduce biases, machine learning, Python
  • A data science-y stream could open new channels of communication and collaboration

Systems and interoperability

  • Technology for data management between organizations and teams.
  • Integrations between platforms.
  • Public health and education: think about how we discuss and bring more attention to the various systems out there, and ensure interoperability and systems that support the long-term visions of countries.
  • Different types of MERL systems. We focused a lot on data collection systems, but there is a range of monitoring systems that programme managers can use to make decisions.

Scale and sustainability

  • How to engage and educate governments on digital data collection systems.
  • The debate on open source: in the development sector it is pushed as the holy grail, whereas most other software worldwide is proprietary for a reason (safety, maintenance, continued support, custom solutions), and open source doesn’t mean free.
  • Business opportunities. MERL as a business tool. How MERL Tech has proved ROI in business and real market settings, even if those settings were in the NGO/NPO space. What is the business case behind MERL Tech and MERL Tech developments?
Ah ha! Moments

Learning about technology / tech approaches

  • I found the design workshops enlightening; as an evaluator, I did not realise how much time techies put into user testing.
  • I am a tech dinosaur – so everything I learned about a new technology and how it can be applied in evaluation was an ‘aha!’

New learning and skills

  • The SNA [social network analysis] inspiration that struck me was my big takeaway! I can’t wait to get back to the office and start working on it.
  • Really enjoyed learning about WhatsApp for SBCC.
  • The qualitative difference in engagement, structure, analysis and resource need between communicating via SMS versus IM. (And realising again how old school I am for a tech person!)

Data privacy, security, ethics

  • Ah ha moment was around how we could improve handling data
  • Data security
  • Our sector (including me) doesn’t really understand ‘big data,’ how it can discriminate, and what that might mean to our programmes.

Talking about failure

  • The fail fest was wonderful. We all theoretically know that it’s good to be honest about failure and to share what that was like, but this took honest reflection to a whole new level and set the tone for Day 2.

I’m not alone!

  • The challenges I am facing with introducing tech for MERL in my organisations aren’t unique to me.
  • There are other MERL Tech practitioners with a journalism/media background! This is exciting and makes me feel I am in the right place. The industry seems to want to gatekeep (academia, rigorous training), so this is interesting to consider going forward, but it also excites me to challenge this through mentorship opportunities and opening the space to others like me who were given a chance and gained experience along the way. Also had many aha moments for using WhatsApp and its highly engaging format.
  • Learning that many other practitioners support learning on your own.
  • There are people locally who are interested in connecting and learning from each other.
Recommendations for future MERL Tech events

More of most everything…

  • More technical sessions
  • More panel discussions
  • More workshops
  • More in-depth sessions!
  • More time for socializing and guided networking like the exercise with the coloured stickers on Day 1
  • More NGOs involved, especially small NGOs.
  • More and better marketing to attract more people
  • More demo tables, or have new people set up demo tables each day
  • More engagement: is there a way that MERL Tech could be used further to shape, drive and promote the agenda of using technology for better MERL? Maybe through a joint session where we identify important future topics to focus on? Just as something that gives those who want the opportunity to further engage with and contribute to MERL Tech and its agenda-setting?
  • The conversations generally were very ‘intellectual’. Too many conversations revolved around how the world had to move on to better appreciate the value of MERL, rather than how MERL was adapted, used and applied in the real world. [It was] too dominated by MERL early adopters and proponents, rather than MERL customers… Or am I missing the point, which may be that MERL (in South Africa) is still a subculture for academic minded researchers. Hope not.
  • More and better wine!

Kudos
  • For some reason this conference – as opposed to so many other conferences I have been to – actually worked. People were enthused, they were kind, willing to talk – and best of all by day 2 they hadn’t dropped out like flies (which is such an issue with conferences!). So whatever you did do it again next time!
  • Very interactive and group-focused! This was well balanced with informative sessions. I think creative group work is good but it wouldn’t be good to have the whole conference like this. However, this was the perfect amount of it and it was well led and organized.
  • I really had a great time at this conference. The sessions were really interesting and it was awesome to get so many different people in the same place to discuss such interesting topics and issues. Lunch was also really delicious!
  • Loved the lightning talks! The breakaway sessions were also great, and the coffee was amazing, thank you! Fail Fest is such a cool concept, and we’re looking to introduce this kind of thinking into our own organisation more. We all struggle with the same things, and it was good to be around like-minded professionals.
  • I really appreciated the fairly “waste-free” conference with no plastic bottles, unnecessary programmes and other things that I’ll just throw away afterwards. This was a highlight for me!
  • I really enjoyed this conference. Firstly the food was amazing (always a win). But most of all the size was perfect. It was really clever the way you forced us to sit in small lunch sizes and that way by the end of the conference I really had the confidence to speak to people. Linda was a great organiser – enthusiastic and punctual.
Who attended MERL Tech Jozi?

Who presented at MERL Tech Jozi?

 

If you’d like to experience MERL Tech, sign up now to attend in Washington, DC on September 5-7, 2018!