Key aspects coming out of the events were the need for 1) guidance on data governance and 2) orientation on responsible data practices. Both policy and practice need to be contextualized for the African context and aimed at supporting African monitoring, evaluation, research and learning (MERL) practitioners in their work.
As a follow-on activity, CLEAR Anglophone Africa is calling on M&E practitioners to sign up to be part of this responsible data project for African MERL practitioners. CLEAR Anglophone Africa and MERL Tech will be collaborating on the initiative.
A new wave of technologies and approaches has the potential to influence how monitoring, evaluation, research and learning (MERL) practitioners do their work. The growth in use of smartphones and the internet, digitization of existing data sets, and collection of digital data make data increasingly available for MERL activities. This changes how MERL is conducted and, in some cases, who conducts it.
We hypothesized that emerging technology is revolutionizing the types of data that can be collected and accessed and the ways that it can be processed and used for better MERL. However, improved research on and documentation of how these technologies are being used is required so the sector can better understand where, when, why, how, and for which populations and which types of MERL these emerging technologies would be appropriate.
The team reviewed the state of the field and found three key new areas of data that MERL practitioners should consider:
New kinds of data sources, such as application data, sensor data, data from drones and biometrics. These types of data are providing more access to information and larger volumes of data than ever before.
New types of systems for data storage. The most prominent of these were distributed ledger technologies (also known as blockchain), along with increasing use of cloud and edge computing. We discuss the implications of these technologies for MERL.
New ways of processing data, mainly from the field of machine learning, specifically supervised and unsupervised learning techniques that could help MERL practitioners manage large volumes of both quantitative and qualitative data.
These new technologies hold great promise for making MERL practices more precise, automated and timely. However, some challenges include:
A need to clearly define problems so the choice of data, tool, or technique is appropriate
Selection bias that produces non-representative samples
Reduced MERL practitioner or evaluator control
Change management needs as organizations adapt how they manage data
Rapid platform changes and difficulty with assessing the costs
A need for systems thinking which may involve stitching different technologies together
To address emerging challenges and make best use of the new data, tools, and approaches, we found a need for capacity strengthening for MERL practitioners, greater collaboration among social scientists and technologists, a need for increased documentation, and a need for the incorporation of more systems thinking among MERL practitioners.
Finally, there remains a need for greater attention to justice, ethics, and privacy in emerging technology.
The year 2020 is a compelling time to look back and pull together lessons from five years of convening hundreds of monitoring, evaluation, research, and learning and technology practitioners who have joined us as part of the MERL Tech community. The world is in the midst of the global COVID-19 pandemic, and there is an urgent need to know what is happening, where, and to what extent. Data is a critical piece of the COVID-19 response — it can mean the difference between life and death. And technology use is growing due to stay-at-home orders and a push for “remote monitoring” and data collection from a distance.
At the same time, we’re witnessing (and I hope, also joining in with) a global call for justice — perhaps a tipping point — in the wake of decades of racist and colonialist systems that operate at the level of nations, institutions, organizations, the global aid and development systems, and the tech sector. There is no denying that these power dynamics and systems have shaped the MERL space as a whole, and the MERL Tech space as well.
Moments of crisis tend to test a field, and we live in extreme times. The coming decade will demand a nimble, adaptive, fair, and just use of data for managing complexity and for gaining longer-term understanding of change and impact. Perhaps most importantly, in 2020 and beyond, we need meaningful involvement of stakeholders at every level and openness to a re-shaping of our sector and its relationships and power dynamics.
It is in this time of upheaval and change that we are releasing a set of four papers that aim to take stock of the field from 2014-2019 as a launchpad for shaping the future of MERL Tech. In September 2018, the papers’ authors began reviewing the past five years of MERL Tech events to identify lessons, trends, and issues in this rapidly changing field. They also reviewed the literature base in an effort to determine what we know about technology in MERL, what we still need to understand, and where the gaps in the formal literature lie. This is no longer a nascent field, yet it is one that is hard to keep up with, given that it is fast paced and constantly shifting with the advent of new technologies. We have learned many lessons over the past five years, but complex political, technical, and ethical questions remain.
The State of the Field series includes four papers:
What We Know About Traditional MERL Tech: Insights from a Scoping Review: Zach Tilton, Michael Harnar, and Michele Behr, Western Michigan University; Soham Banerji and Manon McGuigan, independent consultants; Paul Perrin, Gretchen Bruening, John Gordley, and Hannah Foster, University of Notre Dame; and Linda Raftree, independent consultant and MERL Tech Conference organizer.
Through these papers, we aim to describe the State of the Field up to 2019 and to offer a baseline point in time from which the wider MERL Tech community can take action to make the next phase of MERL Tech development effective, responsible, ethical, just, and equitable. We share these papers as conversation pieces and hope they will generate more discussion in the MERL Tech space about where to go from here.
We’d like to start or collaborate on a second round of research to delve into areas that were under-researched or less developed. Your thoughts are most welcome on topics that need more research, and if you are conducting research about MERL Tech, please get in touch and we’re happy to share here on MERL Tech News or to chat about how we could work together!
The four panelists described the kinds of administrative or “routine” data they are using in their work. For example, in Kenya, educational records, client information from financial institutions, and hospital records of patients and health outcomes are being used to plan and implement actions related to COVID-19 and to evaluate the impact of different COVID-related policies that governments have put in place or are considering. In Malawi, administrative data is combined with other sources, such as Google mobility data, to understand how migration might be affecting the virus’ spread. COVID-19 is putting a spotlight on weaknesses and gaps in existing administrative data systems.
Benefits of administrative data include that:
Data is generated through normal operations and does not require an additional survey to create it
It can be more relevant than a survey because it covers a large swath of the entire population
It is an existing data source during COVID when it’s difficult to collect new data
It can be used to create dashboards for decision-makers at various levels
Challenges of administrative data include that:
Data sits in silos and the systems are not designed to be interoperable
Administrative data may leave out those who are not participating in a government program
Data sets are time-bound to the life of the program
Some administrative data systems are outdated and have poor quality data that is not useful for decision-making or analysis
There is a demand for beautiful dashboards and maps, but insufficient attention to the underlying data processes needed to produce information that can actually be used
Real-time data is not possible when there is no Internet connectivity
There is insufficient attention to data privacy and protection, especially for sensitive data
Institutions may resist providing data if weaknesses are highlighted through the data or they think it will make them look bad
Recommendations for better use of administrative data in the public sector:
Understand the data needs of decision-makers and build capacity to understand and use data systems
Map the data that exists, assess its quality, and identify gaps
Design and enact policies, institutional arrangements, tools, and processes to make sure that data is organized and interoperable
Automate processes with digital tools to make them more seamless
Focus on enhancing underlying data collection processes to improve the quality of administrative data; this includes making the data useful for those who provide it, so that it is not yet another administrative burden with no local value
Assign accountability for data quality across the entire system
Learn from the private sector, but remember that the public sector has different incentives and goals
Rather than fund more research on administrative data, donors should put funds into training on data quality, data visualization, and other skills related to data use and data literacy at different levels of government
Determine how to improve the quality and use of existing administrative data systems rather than building new ones
Make administrative data useful to those who are inputting it to improve data quality
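As an illustration of the “map the data, assess its quality, and identify gaps” recommendation, here is a minimal sketch of a completeness check for an administrative dataset. The field names and records are entirely hypothetical, not drawn from any system mentioned above:

```python
# Hypothetical sketch: profile the completeness of an administrative dataset.
# Field names and records are illustrative only.

records = [
    {"facility_id": "F001", "district": "North", "patients_seen": 120},
    {"facility_id": "F002", "district": None, "patients_seen": 85},
    {"facility_id": "F003", "district": "South", "patients_seen": None},
]

def completeness(records, fields):
    """Return the share of non-missing values for each field."""
    report = {}
    for field in fields:
        filled = sum(1 for r in records if r.get(field) is not None)
        report[field] = filled / len(records)
    return report

report = completeness(records, ["facility_id", "district", "patients_seen"])
print(report)
```

A simple profile like this makes gaps visible (here, a third of records are missing the district and the patient count), which is often the first step before deciding whether an existing system is worth improving rather than replacing.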
Data is not always available, and it can be costly to produce. One challenge is generating data cheaply and quickly to meet the needs of decision-makers within the operational constraints that enumerators face. Another is ensuring that the process is high quality and human-centered, so that we are not simply extracting data. This can be a challenge where there is low connectivity, poor network capacity and access, and low smartphone access. Enumerator training is also difficult when it must be done remotely, especially if enumerators are new to technology and more accustomed to paper-based surveys.
Some recommendations arising from the session included:
Learn and experiment as you try new things. For example, track when and why people drop off a survey and find ways to improve the design and approach; drop-off might be related to the time of the call or the length of the survey.
It’s not only about phone surveys. There are other tools. For example, WhatsApp has been used successfully during COVID-19 for collecting health data.
Don’t just put your paper processes onto a digital device. Instead, consider how to take greater advantage of digital devices and tools to find better ways of monitoring. For example, could we incorporate sensors into the monitoring from the start? At the same time, be careful not to introduce technologies that are overly complex.
Think about exclusion and access. Who are we excluding when we move to remote monitoring? Children? Women? Elderly people? We might be introducing bias if we are going remote. We also cannot observe if vulnerable people are in a safe place to talk if we are doing remote monitoring. So, we might be exposing people to harm or they could be slipping through the cracks. Also, people self-select for phone surveys. Who is not answering the phone and thus left out of the survey?
Consider providing airtime but make sure this doesn’t create perverse incentives.
Ethics and doing no harm are key principles. If we are forced to deliver programs remotely, this involves experimentation. And we are experimenting with people’s lives during a health crisis. Consider including a complaints channel where people can report any issues.
Ensure data is providing value at the local level, and help teams see what the whole data process is and how their data feeds into it. That will help improve data quality and reduce the tendency to ‘tick the box’ for data collection or find workarounds.
Design systems for interoperability so that data can be integrated with other data for better insights or updated automatically. Data standards need to be established so that different systems capture data in the same way or the same format.
Create a well-designed change management program to bring people on board and support them. Role modeling by leaders can help to promote new behaviors.
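The drop-off tracking suggested above can be as simple as counting, for each question, how many respondents stopped answering after it. A minimal sketch, with hypothetical question IDs and responses:

```python
# Hypothetical sketch: find where respondents drop off a phone survey.
# Question IDs and responses are illustrative only.

QUESTIONS = ["q1", "q2", "q3", "q4"]

responses = [
    {"q1": "yes", "q2": "no", "q3": "maybe", "q4": "yes"},  # completed
    {"q1": "yes", "q2": "no"},                              # dropped after q2
    {"q1": "yes"},                                          # dropped after q1
]

def dropoff_counts(responses, questions):
    """Count how many respondents stopped after each question."""
    counts = {q: 0 for q in questions}
    for r in responses:
        answered = [q for q in questions if q in r]
        if answered and len(answered) < len(questions):
            counts[answered[-1]] += 1  # last question answered before dropping
    return counts

print(dropoff_counts(responses, QUESTIONS))  # {'q1': 1, 'q2': 1, 'q3': 0, 'q4': 0}
```

If one question consistently precedes drop-off, that is a signal to shorten the survey, reorder questions, or rephrase a question that respondents may find sensitive or confusing.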
Further questions to explore:
How can we design monitoring to be remote from the very start? What new gaps could we fill and what kinds of mixed methods could we use?
What two-way platforms are most useful and how can they be used effectively and ethically?
Can we create a simple overview of opportunities and threats of remote monitoring?
How can we collect qualitative data, e.g., focus groups and in-depth interviews?
How can we keep respondents safe? What are the repercussions of asking sensitive questions?
How can we create data continuity plans during the pandemic?
Over the past decade, monitoring, evaluation, research and learning (MERL) practices have become increasingly digitalized. The COVID-19 pandemic has made digitalization happen with even greater speed and urgency, due to travel restrictions, quarantines, and social distancing orders from governments desperate to slow the spread of the virus and lessen its impact.
Data is a necessary and critical part of COVID-19 prevention and response efforts to understand where the virus might appear next, who is most at risk, and where resources should be directed for prevention and response. However, we need to be sure that we are not putting people at risk of privacy violations or misuse of personal data, and that we are managing data responsibly so that we don’t unnecessarily create fear or panic.
MERL practitioners have clear responsibilities when sharing, presenting, consuming, and interpreting data. Individuals and institutions may use data to gain prestige or to justify government decisions, and this can allow bias to creep in. Data quality is critical for informing decisions, and information gaps create the risk of misinformation and flawed understanding. We need to embrace uncertainty and the limitations of the science, provide context and definitions so that our sources are clear, and ensure transparency around the numbers and the assumptions that underpin our work.
MERL Practitioners should provide contextual information and guidance on how to interpret the data so that people can make sense of it in the right way. We should avoid cherry picking data to prove a point, and we should be aware that data visualization carries power to sway opinions and decisions. It can also influence behavior change in individuals, so we need to take responsibility for that. We also need to find ways to visualize data for lay people and non-technical sectors.
Critical data is needed, yet it might be used in negative or harmful ways, for example, COVID-related stigmatization that can affect human dignity. We must not override ethical and legal principles in our rush to collect data. Transparency around data collection processes and use are also needed, as well as data minimization. Some might be taking advantage of the situation to amass large amounts of data for alternative purposes, which is unethical. Large amounts of data also bring increased risk of data breaches. When people are scared, such as in COVID times, they will be willing to hand over data. We need to ensure that we are providing oversight and keeping watch over government entities, health facilities, and third-party data processors to ensure data is protected and not misused.
MERL Practitioners are seeking more guidance and support on: aspects of consent and confidentiality; bias and interference in data collection by governments and community leaders; overcollection of data leading to fatigue; misuse of sensitive data such as location data; potential for re-identification of individuals; data integrity issues; lack of encryption; and some capacity issues.
Good practices and recommendations include ethical clearance of data and data assurance structures; rigorous methods to reduce bias; third party audits of data and data protection processes; localization and contextualization of data processes and interpretation; and “do no harm” framing.
Guest post from Janna Rous. Originally published here.
So, all of a sudden you’re stuck at home because of the new coronavirus. You’re looking at your M&E commitments and your program commitments. Do you put them all on hold and postpone them until the coronavirus threat has passed and everything goes back to normal? Or is there a way to still get things done? This article reviews eight ways you can adapt your M&E during the pandemic.
Here are a few ideas that you and your team might consider doing to make sure you can stay on track (and maybe even IMPROVE your MEAL practices) even if you might currently be in the middle of a lockdown, or if you think you might be going into a lockdown soon:
1. Phone Call Interviews instead of In-Person Interviews
Do you have any household assessments or baseline surveys or post-distribution monitoring that you had planned in the next 1 to 3 months? Is there a way that you can carry out these interviews by phone or WhatsApp calls? This is the easiest and most direct way to carry on with your current M&E plan. Instead of doing these interviews face-to-face, just get them on a call. I’ve created a checklist to help you prepare for doing phone call interviews – click here to get the “Humanitarian’s Phone Call Interview Checklist”. Here are a few things you need to think through to transition to a phone-call methodology:
You need phone numbers and names of people that need to be surveyed. Do you have these? Or is there a community leader who might be able to help you get these?
You also need to expect that a LOT of people may not answer their phone. So instead of “sampling” people for a survey, you might want to just plan on calling almost everyone on that list.
Just like for a face-to-face interview, you need to know what you’re going to say. So you need to have a script ready for how you introduce yourself and ask for consent to do a phone questionnaire. It’s best to have a structured interview questionnaire that you follow for every phone call, just like you would in a face-to-face assessment.
You also need to have a way to enter data as you ask the questions. This usually depends on what you’re most comfortable with – but I recommend preparing an ODK or KoboToolbox questionnaire, just like you would for an in-person survey, and filling it out as you do the interview over the phone. I find it easiest to enter the data into KoboToolbox “Webform” instead of the mobile app, because I can type information faster into my laptop rather than thumb-type it into a mobile device. But use what you have!
If you’re not comfortable in KoboToolbox, you could also prepare an Excel sheet for directly entering answers – but this will probably require a lot more data cleaning later on.
When you’re interviewing, it’s usually faster to type down the answers in the language you’re interviewing in. If you need your final data collection to be in English, go back and do the translation after you’ve hung up the phone.
If you want a record of the interview, ask if you can record the phone call. When the person says yes, then just record it so you can go back and double check an answer if you need to.
Very practically – if you’re doing lots of phone calls in a day, it is easier on your arm and your neck if you use a headset instead of holding your phone to your ear all day!
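Because many people won’t answer (see the note above about calling almost everyone on the list), it helps to keep a simple log of call attempts and outcomes so the team knows who still needs a follow-up call. Here is a minimal sketch; the phone numbers, outcome labels, and retry cap are all hypothetical choices, not a prescribed standard:

```python
# Hypothetical sketch: track phone-survey call attempts and outcomes
# so the team knows who still needs a follow-up call.
from collections import defaultdict

attempts = defaultdict(list)  # phone number -> list of outcome strings

def log_call(number, outcome):
    """Record one attempt: 'completed', 'no_answer', or 'refused'."""
    attempts[number].append(outcome)

def needs_retry(number, max_attempts=3):
    """Retry unless the interview completed, was refused, or hit the cap."""
    outcomes = attempts[number]
    if "completed" in outcomes or "refused" in outcomes:
        return False
    return len(outcomes) < max_attempts

log_call("+254700000001", "no_answer")
log_call("+254700000001", "completed")
log_call("+254700000002", "no_answer")

print(needs_retry("+254700000001"))  # False (interview completed)
print(needs_retry("+254700000002"))  # True (only one unanswered attempt)
```

Capping retries and respecting refusals matters ethically as well as practically: repeated calls to someone who has declined can feel coercive, especially during a crisis.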
2. Collect Videos & Photos Directly from Households and Communities
When you’re doing any in-person MEAL activities, you’re always able to observe evidence. You can look around and SEE impact rather than just hearing about it through an interview or group discussion. But when you’re doing M&E remotely, you can’t double-check what impact really looks like. So I recommend:
Connect with as many beneficiaries and team members as possible through WhatsApp or another communication app and collect photos and videos of evidence directly from them.
Video – Maybe someone has a story of impact they can share with you through video. Or if you’re overseeing a Primary Health Care clinic, perhaps you can have a staff member walk you through the clinic with a video so you can do a remote assessment.
Pictures – Maybe you can ask everyone to send you a picture of (for example) their “hand washing station with soap and water” (if you’re monitoring a WASH program). Or perhaps you want evidence that the local water point is functioning.
3. Programme Final Evaluation
It’s a good practice to do a final evaluation review when you reach the end of a program. If you have a program finishing in the next 1-3 months, and you want to do a final review to assess lessons learned overall, then you can also do this remotely!
Make a list of all the stakeholders that would be great to talk to: staff members, a few beneficiaries, government authorities (local and/or national), other NGOs, coordination groups, partner organizations, local community leaders.
Then go in search of either their phone numbers, their email addresses, their Skype accounts, or their WhatsApp numbers and get in touch.
It’s best if you can get on a video chat with as many of them as possible – because it’s much more personal and easy to communicate if you can see one another’s faces! But if you can just talk with audio – that’s okay too.
Prepare a semi-structured interview: a list of questions you want to talk through about the impact, what went well, and what could have gone better. If anything interesting comes up, don’t worry about adding new questions on the spot or skipping questions that don’t make sense in the context.
You can also gather together any monitoring reports/analysis that was done on the project throughout its implementation period, plus pictures of the interventions.
Use all this information to create a final “lessons learned” evaluation document. This is a fantastic way to continually improve the way you do humanitarian programming.
4. Adapt Your Focus Group Discussion Plan
If everyone is at home because your country has imposed a lockdown, it will be very difficult to do a focus group discussion because you can’t be in groups! So, decide with your team whether it might be better to switch your monitoring activity from collecting qualitative data in group discussions to one-on-one phone interviews with several people to collect the same information.
There are some dynamics that you will miss in one-to-one interviews, information that may only come out during group discussions. (Especially where you’re collecting sensitive or “taboo” data.) Identify what that type of information might be – and either skip those types of questions for now, or brainstorm how else you could collect the information through phone-calls.
5. Adapt Your Key Informant Interviews
If you normally carry out Key Informant Interviews, it would be a great idea to think about what “extra” questions you need to ask this month in the midst of the coronavirus pandemic.
If you normally ask questions around your program sector areas, think about collecting a few extra data points about feelings, needs, fears, and challenges that are a reality in light of COVID-19. Are people facing any additional pressures due to the pandemic? Are there any new humanitarian needs right now? Are there any upcoming needs that people are anticipating?
It goes without saying that if your Key Informant Interviews are normally in person, you’ll want to carry these out by phone for the foreseeable future!
6. What To Do About Third Party Monitoring
Some programs and donors use Third Party Monitors to assess their program results independently. If you normally hire third party monitors, and you’ve got some third party monitoring planned for the next 1-3 months, you need to get on the phone with this team and make a new plan. Here are a few things you might want to think through with your third party monitors:
First, think through whether it is worth having a third party monitor assess results remotely. Is it better to postpone their monitoring, or to carry on regardless?
What is the budget implication? If cars won’t be used, is there any cost-savings? Is there any additional budget they’ll need for air-time costs for their phones?
Make sure there is a plan to gather as much photo and video evidence as possible (see point 2 above!)
If they’re carrying out phone call interviews it would also be a good recommendation to record phone calls if possible and with consent, so you have the records if needed.
7. Manage Expectations – The Coronavirus Pandemic May Impact Your Program Results.
You probably didn’t predict that a global pandemic would occur in the middle of your project cycle and throw your entire plan off. Go easy on yourself and your team! It is most likely that the results you’d planned for might not end up being achieved this year. Your donors know this (because they’re probably also on lockdown). You can’t control the pandemic, but you can control your response. So proactively manage your own expectations, your manager’s expectations and your donor’s expectations.
Get on a Skype or Zoom call with the project managers and review each indicator of your M&E plan. In light of the pandemic, what indicator targets will most likely change?
Look through the baseline numbers in your M&E plan – is it possible that the results at the END of your project might be worse than even your baseline numbers? For example, if you have a livelihoods project, it is possible that income and livelihoods will be drastically reduced by a country-wide lockdown. Or are you running an education program? If schools have been closed, then will a comparison to the baseline be possible?
Once you’ve done a review of your M&E plan, create a very simple revised plan that can be talked through with your program donor.
8. Talk To Your Donors About What You Can Do Remotely
When you’re on the phone with your donors, don’t only talk about revised program indicators.
Also talk about a revised timeframe – is there any flexibility on the program timeframe, or deadlines for interim reporting on indicators? What are their expectations?
Also talk about what you CAN do remotely. Discuss with them the plan you have for carrying on everything possible that can be done remotely.
And don’t forget to discuss financial implications of changes to timeframe.
by Linda Raftree, Independent Consultant and MERL Tech organizer
Back in 2014, the humanitarian and development sectors were in the heyday of excitement over innovation and Information and Communication Technologies for Development (ICT4D). The role of ICTs specifically for monitoring, evaluation, research and learning (aka “MERL Tech“) had not been systematized (as far as I know), and it was unclear whether there actually was “a field.” I had the privilege of writing a discussion paper with Michael Bamberger to explore how and why new technologies were being tested and used in the different steps of a traditional planning, monitoring and evaluation cycle. (See graphic 1 below, from our paper).
The approaches highlighted in 2014 focused on mobile phones, for example: text messages (SMS), mobile data gathering, use of mobiles for photos and recording, and mapping with handheld global positioning system (GPS) devices or GPS installed in mobile phones. Promising technologies included tablets, which were only beginning to be used for M&E; “the cloud,” which enabled easier updating of software and applications; remote sensing and satellite imagery; dashboards; and online software that helped evaluators do their work more easily. Social media was also really taking off in 2014. It was seen as a potential way to monitor discussions among program participants and gather feedback from them, and it was considered an underutilized tool for wider dissemination of evaluation results and learning. Real-time data, big data, and feedback loops were emerging as ways to improve program monitoring and enable quicker adaptation.
In our paper, we outlined five main challenges for the use of ICTs for M&E: selectivity bias; technology- or tool-driven M&E processes; over-reliance on digital data and remotely collected data; low institutional capacity and resistance to change; and privacy and protection. We also suggested key areas to consider when integrating ICTs into M&E: quality M&E planning; design validity; value-add (or not) of ICTs; using the right combination of tools; adapting and testing new processes before roll-out; technology access and inclusion; motivation to use ICTs; privacy and protection; unintended consequences; local capacity; measuring what matters (not just what the tech allows you to measure); and effectively using and sharing M&E information and learning.
We concluded that:
The field of ICTs in M&E is emerging and activity is happening at multiple levels and with a wide range of tools and approaches and actors.
The field needs more documentation on the utility and impact of ICTs for M&E.
Pressure to show impact may open up space for testing new M&E approaches.
A number of pitfalls need to be avoided when designing an evaluation plan that involves ICTs.
Investment in the development, application and evaluation of new M&E methods could help evaluators and organizations adapt their approaches throughout the entire program cycle, making them more flexible and adjusted to the complex environments in which development initiatives and M&E take place.
Where are we now: MERL Tech in 2019
Much has happened globally over the past five years in the wider field of technology, communications, infrastructure, and society, and these changes have influenced the MERL Tech space. Our 2014 focus on basic mobile phones, SMS, mobile surveys, mapping, and crowdsourcing might now appear quaint, considering that worldwide access to smartphones and the Internet has expanded beyond the expectations of many. We know that access is not evenly distributed, but the fact that more and more people are getting online cannot be disputed. Some MERL practitioners are using advanced artificial intelligence, machine learning, biometrics, and sentiment analysis in their work. And as smartphone and Internet use continue to grow, more data will be produced by people around the world. The way that MERL practitioners access and use data will likely continue to shift, and the composition of MERL teams and their required skillsets will also change.
The excitement over innovation and new technologies seen in 2014 could also be seen as naive, however, considering some of the negative consequences that have emerged: for example, social media-inspired violence (such as that in Myanmar), election and political interference through the Internet, misinformation and disinformation, and the race to the bottom through the online “gig economy.”
In this changing context, a team of MERL Tech practitioners (both enthusiasts and skeptics) embarked on a second round of research in order to try to provide an updated “State of the Field” for MERL Tech that looks at changes in the space between 2014 and 2019.
Based on MERL Tech conferences and wider conversations in the MERL Tech space, we identified three general waves of technology emergence in MERL:
First wave: Tech for Traditional MERL: Use of technology (including mobile phones, satellites, and increasingly sophisticated databases) to do ‘what we’ve always done,’ with a focus on digital data collection and management. For these uses of “MERL Tech” there is a growing evidence base.
Second wave: Big Data. Exploration of big data and data science for MERL purposes. While plenty has been written about big data for other sectors, the literature on the use of big data and data science for MERL is somewhat limited, and it is more focused on potential than actual use.
Third wave: Emerging approaches. Technologies and approaches that generate new sources and forms of data; offer different modalities of data collection; provide ways to store and organize data; and provide new techniques for data processing and analysis. The potential of these has been explored, but there seems to be little evidence on their actual use for MERL.
We’ll be doing a few sessions at the American Evaluation Association conference this week to share what we’ve been finding in our research. Please join us if you’ll be attending the conference!
FHI 360 Academy Hall, 8th Floor 1825 Connecticut Avenue NW Washington, DC 20009
We gathered at the first MERL Tech Conference in 2014 to discuss how technology was enabling the field of monitoring, evaluation, research and learning (MERL). Since then, rapid advances in technology and data have altered how most MERL practitioners conceive of and carry out their work. New media and ICTs have permeated the field to the point where most of us can’t imagine conducting MERL without the aid of digital devices and digital data.
The rosy picture of the digital data revolution and an expanded capacity for decision-making based on digital data and ICTs has been clouded, however, with legitimate questions about how new technologies, devices, and platforms — and the data they generate — can lead to unintended negative consequences or be used to harm individuals, groups and societies.
Join us in Washington, DC, on September 5-6 for this year’s MERL Tech Conference where we’ll be taking stock of changes in the space since 2014; showcasing promising technologies, ideas and case studies; sharing learning and challenges; debating ideas and approaches; and sketching out a vision for an ideal MERL future and the steps we need to take to get there.
Tech and traditional MERL: How is digital technology enabling us to do what we’ve always done, but better (consultation, design, community engagement, data collection and analysis, databases, feedback, knowledge management)? What case studies can be shared to help the wider sector learn and grow? What kinks do we still need to work out? What evidence base exists that can support us to identify good practices? What lessons have we learned? How can we share these lessons and/or skills with the wider community?
Data, data, and more data: How are new forms and sources of data allowing MERL practitioners to enhance their work? How are MERL practitioners using online platforms, big data, digitized administrative data, artificial intelligence, machine learning, sensors, drones? What does that mean for the ways that we conduct MERL and for who conducts MERL? What concerns are there about how these new forms and sources of data are being used and how can we address them? What evidence shows that these new forms and sources of data are improving MERL (or not improving MERL)? What good practices can inform how we use new forms and sources of data? What skills can be strengthened and shared with the wider MERL community to achieve more with data?
Emerging tools and approaches: What can we do now that we’ve never done before? What new tools and approaches are enabling MERL practitioners to go the extra mile? Is there a use case for blockchain? What about facial recognition and sentiment analysis in MERL? What are the capabilities of these tools and approaches? What early cases or evidence is there to indicate their promise? What ideas are taking shape that should be tried and tested in the sector? What skills can be shared to enable others to explore these tools and approaches? What are the ethical implications of some of these emerging technological capabilities?
The Future of MERL: Where should we be going and what should the future of MERL look like? What does the state of the sector, of digital data, of technology, and of the world in which we live mean for an ideal future for the MERL sector? Where do we need to build stronger bridges for improved MERL? How should we partner and with whom? Where should investments be taking place to enhance MERL practices, skills and capacities? How will we continue to improve local ownership, diversity, inclusion and ethics in technology-enabled MERL? What wider changes need to happen in the sector to enable responsible, effective, inclusive and modern MERL?
Cross-cutting themes include diversity, inclusion, ethics and responsible data, and bridge-building across disciplines.
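The “emerging tools and approaches” theme above mentions sentiment analysis. As a toy illustration of what a lexicon-based sentiment pass over open-ended feedback might look like, here is a minimal sketch; the word lists and sample responses are hypothetical, and real MERL applications would use a validated lexicon or a trained model rather than hand-picked word sets:

```python
# Toy sketch of lexicon-based sentiment scoring for open-ended feedback.
# Word lists and responses are hypothetical, for illustration only.

POSITIVE = {"useful", "excellent", "improved", "helpful", "clear"}
NEGATIVE = {"confusing", "useless", "poor", "slow", "unclear"}

def score(text: str) -> int:
    """Return (# positive words - # negative words) in a response."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

responses = [
    "The dashboard was useful and the training was clear.",
    "Registration was slow and the instructions were confusing.",
]
for r in responses:
    s = score(r)
    label = "positive" if s > 0 else "negative" if s < 0 else "neutral"
    print(f"{label:8s} {s:+d}  {r}")
```

Even this crude version surfaces one of the ethical questions the conference raises: the choice of lexicon encodes assumptions about whose language counts as “positive” or “negative.”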
You’ll join some of the brightest minds working on MERL across a wide range of disciplines – evaluators, development and humanitarian MERL practitioners, small and large non-profit organizations, government and foundations, data scientists and analysts, consulting firms and contractors, technology developers, and data ethicists – for two days of in-depth sharing and exploration of what’s been happening across this multidisciplinary field and where we should be heading.
The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of “MERL Tech” as an initiative are to:
Transform and modernize MERL in an intentionally responsible and inclusive way
Promote ethical and appropriate use of tech (for MERL and more broadly)
Encourage diversity & inclusion in the sector & its approaches
Improve development, tech, data & MERL literacy
Build/strengthen community, convene, help people talk to each other
Help people find and use evidence & good practices
Provide a platform for hard and honest talks about MERL and tech and the wider sector
Spot trends and future-scope for the sector
Our fifth MERL Tech DC conference took place on September 6-7, 2018, with a day of pre-workshops on September 5th. Some 300 people from 160 organizations joined us for the two days, and another 70 people attended the pre-workshops.
Attendees came from a wide range of professions and disciplines.
An unofficial estimate on speaker racial and gender diversity is here.
Building bridges, connections, community, and capacity
Sharing experiences, examples, challenges, and good practice
Strengthening the evidence base on MERL Tech and ICT4D approaches
Facing our challenges and shortcomings
Exploring the future of MERL
As always, sessions related to technology for MERL, MERL of ICT4D and Digital Development programs, MERL of MERL Tech, digital data for adaptive decisions/management, ethical and responsible data approaches, and cross-disciplinary community building.
Sessions included plenaries, lightning talks and breakout sessions. You can find a list of sessions here, including any presentations that have been shared by speakers and session leads. (Go to the agenda and click on the session of interest. If we have received a copy of the presentation, there will be a link to it in the session description).
One topic that we explored more in-depth over the two days was the need to get better at measuring ourselves and understanding both the impact of technology on MERL (the MERL of MERL Tech) and the impact of technology overall on development and societies.
As Anahi Ayala Iacucci said in her opening talk — “let’s think less about what technology can do for development, and more about what technology does to development.” As another person put it, “We assume that access to tech is a good thing and immediately helps development outcomes — but do we have evidence of that?”
Some 17.5% of participants filled out our post-conference feedback survey, and 70% of them rated their experience either “awesome” or “good”. Another 7% of participants rated individual sessions through the “Sched” app, with an average session satisfaction rating of 8.8 out of 10.
Topics that survey respondents suggested for next time include: more basic and more advanced tracks, more sessions relating to ethics and responsible data, and a greater focus on accountability in the sector. Read the full Feedback Report here!
What’s next? State of the Field Research!
In order to arrive at an updated sense of where the field of technology-enabled MERL is, a small team of us is planning to conduct some research over the next year. At our opening session, we did a little crowdsourcing to gather input and ideas about what the most pressing questions are for the “MERL Tech” sector.
We’ll be keeping you informed here on the blog about this research and welcome any further input or support! We’ll also be sharing more about individual sessions here.