Indaba is an isiZulu and isiXhosa word for an important meeting held by leaders in South Africa to discuss critical matters. This past week, I’ve been listening in at the Africa Evaluation Indaba, organized by The University of the Witwatersrand and CLEAR Anglophone Africa. Critical matters have indeed been discussed!
At the upcoming session, we will discuss ways that M&E practitioners can improve data management and how they can play a role in improving data governance practices at the institutional and national levels. We will open the floor for discussion and consultation on priority areas and gaps in the practical aspects of responsible data management as well as in data governance processes that improve accountability.
Following the Indaba, we will draft a plan that lays out how we can best offer training, guidance, and support to the M&E community in relation to responsible data management and data governance. We also plan to develop a set of guidance documents on responsible data governance and M&E together with the RDiME Working Group, a group of experts with data governance, data protection, and evaluation-related expertise and experience. We hope the RDiME Alliance’s work will support government evaluation efforts as well as civil society organizations and evaluation firms.
What makes the Africa Evaluation Indaba conversations so exciting (for me, at least!) is that they are framed within a lens of decolonization and transformation. This past week, topics included:
“Transforming Evaluation: The Race, Power, Gender and Class Struggle,” with speakers covering questions like: how do we locate evaluation within the historical context of asymmetrical global power relations and aid dependency? What needs to be done to dismantle systems and structures so that evaluation does not become complicit in entrenching existing inequalities? (Monday 16 November)
The Made in Africa Evaluation (MAE) approach, which arose out of the quest for contextually relevant approaches and methods that emphasize the centrality of contextual relevance and place importance on indigenous knowledge systems. (Tuesday 17 November)
The launch of the Global Evaluation Initiative (GEI), which aims to offer better coordination of evaluation resources and to support local and international organizations working in the area of evaluation. (Wednesday 18 November)
(Recordings of these sessions will be available soon).
Registration for the 2020 GeOnG Forum, taking place from November 2 to 3, 2020, is now officially open! Please take two minutes to fill out the GeOnG registration form to attend this year’s edition.
For the first time ever, the forum will take place online. Expect a few twists: more participants, many live sessions alongside a video library of short presentations, and most of the content accessible for free!
We are very pleased to announce that Ben Parker, Senior Editor at The New Humanitarian, will be opening the event as keynote speaker. You can learn more about the organizations expected to attend here.
We’ve scheduled 10 roundtables and an experience-sharing session on the impact of the Covid-19 crisis on data practices in the aid sector.
The GeOnG team will also strive to offer a selection of 15+ online workshops as well as 20+ short presentations on a wide range of topics related to the theme of the 2020 edition: “People at the heart of Information Management: promoting responsible and inclusive practices”. Most sessions will be conducted in English, but we will have a few workshops in French as well.
Our biggest edition so far: 500 participants expected online!
Free access to the majority of the content of the forum
A wide diversity of stakeholders: NGOs, IOs, donors, Global South actors, universities and more!
Watch and engage with sessions live or on your own time
Schedule time to network with fellow attendees and GeOnG sponsors & partners
Be part of the conversation on responsible data and more inclusive data management practices!
The GeOnG – the Humanitarian & Development Data Forum – is organized every 2 years by CartONG. Learn more about our organization!
Created in 2006, CartONG is a French H2H NGO specialized in Information Management. Our goal is to put data at the service of humanitarian, development and social action projects. We are dedicated to improving the quality and accountability of field activities, in particular through better needs assessments and monitoring and evaluation. We act as a multidisciplinary resources and expertise center, accompanying our partners’ strategies and operations. Our staff and volunteers also support the community as a whole by producing documentation, building capacities and raising awareness on the technical, strategic and ethical challenges of digital technologies.
CartONG has just released a new study on “Program Data: The silver bullet of the humanitarian and aid sectors? Panorama of the practices and needs of francophone CSOs”.
What place for program data management in a sector in the throes of a digital revolution?
Mirroring our society, the Humanitarian Aid and International Development (HAID) sector is in the throes of a digital revolution. Whilst the latter is undeniably impacting day-to-day management of Civil Society Organisations (CSOs) – whether in their administrative duties or in those related to fundraising – it is also generating radical changes in actions being implemented for the benefit of populations.
Although it has become a key element in the coordination of operations, data management remains somewhat invisible from the perspective of the sector, in spite of its many ethical, financial and human implications, and above all its impact on project quality. In the field and at headquarters, project teams are therefore devoting an increasing amount of time to data management, often at the expense of other activities. Poorly trained and ill-equipped, these teams often underperform in these tasks, without the topic necessarily being treated as an operational issue by most CSOs.
Program data management – also known as Information Management (IM) – is both a topical issue and the source of numerous debates within francophone Humanitarian Aid and International Development CSOs.
A unique study in the world of French-speaking CSOs
To our knowledge, no equivalent study has yet been carried out that examines the practices of francophone CSOs as a whole or identifies their needs in terms of program data management. A number of analyses and articles do exist, yet these generally approach the subject either from a technical standpoint or as if these practices were still innovations for the sector, and thus with limited constructive hindsight.
The organisational dimension is moreover relatively unexplored and very little consolidated data at the inter-CSO level is available. Lastly, although CSOs have been handling large amounts of data for almost 20 years, there remains much debate: what level of attention and investment should data management be subject to? Does the activity require a dedicated person in-house and, if so, which profile should be given priority? In fact, where does the scope of data management begin and where does it end? Do CSOs working in humanitarian situations have different needs than those working in a development context? Do differences in approach exist between francophone and anglophone CSOs, the latter often deemed more advanced in the field?
Based on a survey of CSOs, a literature review and interviews with key stakeholders, this study designed by CartONG aims to explore and provide preliminary answers to these questions. It also aims to make a valuable contribution to the debate on data management. To this end, we have sought to synthesise and formalise often scattered and at times contradictory considerations.
Derived from the concept of Information Management (IM), program data management is a term whose scope of application continues to fluctuate and whose definition remains unclear. To make the concept easier to take on board, readers of this new study are given an accessible definition (synthesised in the diagram below) and a relatively narrow scope of application (see illustration below), at the juncture of Monitoring & Evaluation (M&E), Information and Communications Technologies for Development (ICT4D), information systems and knowledge management.
Main components of Information Management
Simplified diagram of the place of Information Management vis-à-vis related topics
Program data management & Francophone CSOs: an overview of the main stakes and of the existing relationships by categories of CSOs
Although studies on the link between program data management and project quality remain relatively sparse, the available evidence shows that good program data management makes for greater efficiency and transparency in organisations. The evidence gathered suggests, however, that program data management is widely used today for the benefit of upward accountability – towards decision-makers and financial backers – rather than for day-to-day project steering.
The reasons for this state of affairs are manifold, but chief amongst them appears to be a significant lack of maturity among francophone CSOs in matters relating to data and digital issues. Six main weaknesses and levers for action have thus been identified (see illustration):
insufficient data literacy within CSOs;
unduly fragile, siloed and insufficiently funded program data management strategies;
a lack of leadership and often overly vague responsibilities;
a technological environment that is neither controlled nor influenced by CSOs;
the use of approaches that foster information overload and neglect qualitative data; and
an under-estimation of the responsibilities carried by CSOs and of the ethical issues at stake with regard to the data they manipulate.
Confronted with these challenges, it appears that francophone CSOs are somewhat lagging behind – at least in terms of awareness and strategic positioning – compared to their anglophone counterparts. Moreover, program data management continues to be approached by the various CSOs in an inconsistent manner: the study therefore proposes a classification of CSOs and reflects on the main existing differences – between types, sectors and sizes – and in particular points out the difficulties encountered by the smallest organisations.
What types of IM support are expected by Francophone CSOs and on what priority themes?
This study was also an opportunity to identify both the types of support materials and the priority program data management themes on which francophone CSOs expect support (see below), especially to enable specialized organisations, including H2H/support CSOs such as CartONG, to better define their support priorities.
The study also reveals that CSOs are mainly looking for support on the following topics (in this order):
selection of solutions
responsible data management
data quality control
data sharing
and, for the smaller organisations, also: database design and simple map visualization.
What follow-up does CartONG intend to give to this study?
The study closes with a series of some fifteen recommendations to the various international aid and development actors, especially CSOs, who would benefit from being more proactive on the topic, as well as to donors and network heads who play a pivotal role to advance these issues.
By clarifying the various elements feeding the debate along with the issues at stake, we hope that this document – a first for CartONG – will help feed current discussions. Many of these discussions should be taken up again during the next GeOnG Forum, which will be held online from November 2-3, 2020.
Conducted as part of the project Strengthening program data management within francophone CSOs led by CartONG (and co-financed by the French Development Agency – AFD – over the 2020-2022 period), this study will be presented at face-to-face and remote events before the year is out. It will also be enriched in the coming months by the release of many other resources.
Do not hesitate to follow us on social media or to write to us to be added to the project mailing list to stay informed.
Update, 26 August 2020 – The new deadline to submit an RFP is now end of day, Sunday, 13 September 2020.
Many thanks to those who have submitted and those who have asked questions related to the MERL Tech and MERL Center Website request for proposals. Please find our responses below. You may resubmit without penalty if the answers below change your proposal.
Is WordPress a hard requirement for merltech.org and is GitHub Pages a hard requirement for the MERL Center?
Yes, WordPress is a hard requirement for merltech.org. We are open to other suggestions than GitHub Pages for the MERL Center, but the website must be editable from the (public) MERL Center GitHub repository.
My company is headquartered in the US, but my development team is in India. Can I still apply?
Yes, you can still apply, as long as we are able to have development and design discussions with a main point of contact throughout the project. We are based on the east coast of the US (Eastern Time).
Is merltech.org a free WordPress blog or a hosted website using the free WordPress CMS?
MerlTech.org is an independently hosted website (not a free WordPress.com blog) running the free WordPress CMS. The selected candidate would NOT be responsible for covering the cost of hosting, security certificates or plugins; Kurante / GitHub will cover those costs separately.
Are there specific WordPress plugins merltech.org relies on now? Will this change in the future?
MerlTech currently uses Akismet Anti-Spam, Classic Editor, Easy Updates Manager, Jetpack, and Wordfence Security. We are open to adding plugins.
Could you elaborate more on your understanding of the term “new information architecture”? There is a lot of information/content on merltech.org.
This part is flexible. A new information architecture could be as simple as reordering or renaming the current page structure. It could also be a full page structure redo with adding secondary headers, archiving or otherwise reorganizing content, and adding tags. The selected candidate will help Kurante determine the best option and migrate content, but is NOT responsible for editing or formatting pieces of content.
Will the MERL Tech Sched account be upgraded or will new Sched features for MERL Tech be added in the future?
Kurante pays for Sched to manage 2-3 of its conferences per year. Sched is integrated with Eventbrite. Kurante then links the agenda into merltech.org. No changes are envisioned to the current plan.
Your current Sched integration is mostly embedding and doesn’t use any API integrations. Will there be any future Sched APIs used on merltech.org?
Sched provides a free and simple HTML embed code that is added to merltech.org. There is no plan to use any of Sched’s APIs.
How will content be integrated across the two websites? Will the MERL Center GitHub Pages site need to pull content off the WordPress site, or vice versa?
There is no planned direct integration of merltech.org and the forthcoming merlcenter.org site. The merlcenter.org site will pull its code and content from a public GitHub repository, much like how https://opensource.guide works. The selected candidate will be asked to create unified branding (CSS style guides, page structures, related logos, etc.) between the two sites and recommend how and where to link the sites to each other.
Will MERL Center working group members need training on GitHub Pages and/or Markdown?
MERL Center working group members already use Markdown. 1-2 members will need to understand how the forthcoming GitHub Pages code is set up and how to make simple style modifications, page additions, etc. ~5-10 members may also require (basic) training on adding or modifying content on the GitHub Pages site.
Our first webinar in the series Emerging Data Landscapes in M&E, on Geospatial, location and big data: Where have we been and where can we go? was held on 28 July. We had a lively discussion on the use of these innovative technologies in the world of evaluation.
First, Estelle Raimondo, Senior Evaluation Officer at the World Bank Independent Evaluation Group, framed the discussion with her introduction on Evaluation and emerging data: what are we learning from early applications? She noted how COVID-19 has been an accelerator of change, pushing the evaluation community to explore new, innovative technologies to overcome today’s challenges, and set the stage for the ethical, conceptual and methodical considerations we now face.
Next came the Case Study: Integrating geospatial methods into evaluations: opportunities and lessons from Anupam Anand, Evaluation Officer at the Global Environmental Facility, Independent Evaluation Office, and Hur Hassnain, Senior Evaluation Advisor, European Commission DEVCO/ESS. After providing an overview of the advantages of using satellite and remote sensing data, particularly in fragile and conflict zones, the presenters gave the examples of their use in Syria and Sierra Leone.
The second Case Study: Observing from space when you cannot observe from the field, was presented by Joachim Vandercasteelen, Young Professional at World Bank Independent Evaluation Group. This example focused on using geospatial data for evaluating a biodiversity conservation project in Madagascar, as traveling to the field was not feasible. The presentation gave an overview on how to use such technology for both quantitative and qualitative assessments, but also the downsides to consider.
The full recording of the webinar, including the PowerPoint presentations and the Questions & Answers session at the end, is available on the EES’ YouTube page.
Over the next month, we will release dedicated blog posts for each of the presentations, in which the speakers will answer the questions participants raised during the webinar that were not already addressed during the Q&A and provide links to further reading on the subject. These will be publicly available on the EES Blog.
Update, 26 August 2020 – The new deadline to submit an RFP for the websites development is now end of day, Sunday, 13 September 2020. The application for the MERL Center stipends is now closed.
Post by Mala Kumar, GitHub Social impact, Tech for Social Good
Over the past year, some of you may have seen previous posts here on MERL Tech about a new, highly collaborative effort called the “MERL Center.” A joint effort of the GitHub Social Impact, Tech for Social Good (formerly Open Source for Good) program and MERL Tech, the MERL Center is creating case studies, beginner’s guides and more to help practitioners determine if, when and how to use open source software, tools and data as part of MERL solutions. Since launching at the MERL Tech DC conference in September 2019, we have grown to 30+ Working Group members.
We’re now actively recruiting for two PAID opportunities that will help take the MERL Center to the next level, as outlined below!
MERL Center Working Group Stipends
We are offering 15 stipends of US $500 to MERL Center Working Group members who are willing, able and qualified to commit to set deliverables in support of the MERL Center (for example, supporting with writing on case studies or developing guidance).
Existing Working Group members and new members are welcome to apply. Five stipends are reserved for new professionals with less than three (3) years of experience, including internships. Five stipends are reserved for applicants from low- and middle-income countries. The remaining five stipends are open to anyone. For now, the stipends are a one-time annual payment, though we hope to fund additional stipends next year.
Please note it is NOT required to apply for a stipend to join or continue with the MERL Center as a Working Group member. Email Mala if you wish to get involved without receiving a stipend. Stipends are funded by GitHub and will be distributed through MERL Tech (via Kurante).
Key aspects coming out of the events were the need for 1) guidance on data governance and 2) orientation on responsible data practices. Both policy and practice need to be contextualized for the African context and aimed at supporting African monitoring, evaluation, research and learning (MERL) practitioners in their work.
As a follow-on activity, CLEAR Anglophone Africa is calling on M&E practitioners to join this responsible data project for African MERL practitioners. CLEAR Anglophone Africa and MERL Tech will be collaborating on this responsible data initiative.
A new wave of technologies and approaches has the potential to influence how monitoring, evaluation, research and learning (MERL) practitioners do their work. The growth in use of smartphones and the internet, digitization of existing data sets, and collection of digital data make data increasingly available for MERL activities. This changes how MERL is conducted and, in some cases, who conducts it.
We hypothesized that emerging technology is revolutionizing the types of data that can be collected and accessed and the ways that it can be processed and used for better MERL. However, improved research on and documentation of how these technologies are being used is required so the sector can better understand where, when, why, how, and for which populations and which types of MERL these emerging technologies would be appropriate.
The team reviewed the state of the field and found there were three key new areas of data that MERL practitioners should consider:
New kinds of data sources, such as application data, sensor data, data from drones and biometrics. These types of data are providing more access to information and larger volumes of data than ever before.
New types of systems for data storage. The most prominent of these were distributed ledger technologies (also known as blockchain), along with increasing use of cloud and edge computing. We discuss the implications of these technologies for MERL.
New ways of processing data, mainly from the field of machine learning, specifically supervised and unsupervised learning techniques that could help MERL practitioners manage large volumes of both quantitative and qualitative data.
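As a toy illustration of the unsupervised techniques mentioned above (this is not a method from the report, and a real project would more likely use embedding models or a library such as scikit-learn), a minimal sketch of grouping short qualitative responses by word overlap might look like:

```python
from collections import Counter
from math import sqrt

def vectorize(text):
    # Bag-of-words vector for a short free-text response.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster(responses, threshold=0.3):
    # Greedy single-pass clustering: each response joins the first
    # existing cluster whose seed it resembles, else starts a new one.
    clusters = []  # list of (seed_vector, member_texts)
    for text in responses:
        vec = vectorize(text)
        for seed, members in clusters:
            if cosine(vec, seed) >= threshold:
                members.append(text)
                break
        else:
            clusters.append((vec, [text]))
    return [members for _, members in clusters]
```

For example, feedback comments about a broken water pump would land in one group and comments praising a training in another, letting a practitioner skim themes rather than every individual response.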
These new technologies hold great promise for making MERL practices more precise, automated and timely. However, some challenges include:
A need to clearly define problems so the choice of data, tool, or technique is appropriate
Non-representative selection bias when sampling
Reduced MERL practitioner or evaluator control
Change management needed to adapt how organizations manage data
Rapid platform changes and difficulty with assessing the costs
A need for systems thinking which may involve stitching different technologies together
To address emerging challenges and make best use of the new data, tools, and approaches, we found a need for capacity strengthening for MERL practitioners, greater collaboration among social scientists and technologists, a need for increased documentation, and a need for the incorporation of more systems thinking among MERL practitioners.
Finally there remains a need for greater attention to justice, ethics and privacy in emerging technology.
Guest Post by Zach Tilton, Doctoral Research Associate, Interdisciplinary Ph.D. in Evaluation (IDPE), Western Michigan University
Would I be revealing too much if I said we initially envisioned and even titled our knowledge synthesis as a ‘rapid’ scoping review? Hah! After over a year and a half of collaborative research with an amazing team we likely have just as many findings about how (and how not) to conduct a scoping review as we do about the content of our review on traditional MERL Tech. I console myself that the average Cochrane systematic review takes 30 months to complete (while recognizing that is a more disciplined knowledge synthesis).
Looking back, I could describe our hubris and emotions during the synthesis process as similar to the trajectory of the Gartner Hype Cycle, a concept we draw from in our broader MERL Tech State of the Field research to conceptualize the maturity and adoption of technology. Our triggering curiosities about the state of the field were followed by multiple peaks of inflated expectations and troughs of disillusionment until we settled onto the plateau of productivity (and publication). We uncovered much about the nature of what we termed traditional MERL Tech, or tech-enabled systematic inquiry that allows us to do what we have always done in the MERL space, only better or differently.
One of our findings was actually related to the possible relationship technologies have with the Gartner Hype Cycle. Based on a typology we developed as we started screening studies from our review, we found that the ratio of studies related to a specific MERL Tech versus the studies focused on that same MERL Tech provided an indirect measure of the trust researchers and practitioners had in that technology to deliver results, similar to the expectation variable on the Y axis of the Hype Cycle plane.
Briefly, in focused studies MERL Tech is under the magnifying glass; in related studies MERL Tech is the magnifying glass. When we observed specific technologies being regularly used to study other phenomena significantly more than they were themselves being studied, we inferred these technologies were trusted more than others to deliver results. Conversely, when we observed a higher proportion of technologies being investigated as opposed to facilitating investigations, we inferred these were less trusted to deliver results. In other words, coupled with higher reported frequency, the technologies with higher levels of trust could be viewed as farther along on the hype cycle than those with lower levels of trust. Online surveys, geographic information systems, and quantitative data analysis software were among the most trusted technologies, with dashboards, mobile tablets, and real-time technologies among the least trusted.
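As a rough sketch of this related-versus-focused ratio (the counts below are invented for illustration and are not figures from the review), the measure could be computed as:

```python
def trust_ratio(related_count, focused_count):
    # Studies that *use* a technology (related) divided by studies that
    # *examine* it (focused): higher values suggest more trust that the
    # technology simply delivers results.
    if focused_count == 0:
        return float(related_count)
    return related_count / focused_count

# Hypothetical study counts as (related, focused) pairs -- illustration only.
counts = {
    "online surveys": (40, 5),
    "dashboards": (6, 12),
}

# Rank technologies from most to least "trusted" under this measure.
ranked = sorted(counts, key=lambda tech: trust_ratio(*counts[tech]), reverse=True)
```

Under these made-up counts, online surveys (ratio 8.0) would rank as more trusted than dashboards (ratio 0.5), mirroring the pattern the review reports.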
To read a further explanation of this and other findings, conclusions, and recommendations from our MERL Tech State of the Field Scoping Review, download the white paper.
The year 2020 is a compelling time to look back and pull together lessons from five years of convening hundreds of monitoring, evaluation, research, and learning and technology practitioners who have joined us as part of the MERL Tech community. The world is in the midst of the global COVID-19 pandemic, and there is an urgent need to know what is happening, where, and to what extent. Data is a critical piece of the COVID-19 response — it can mean the difference between life and death. And technology use is growing due to stay-at-home orders and a push for “remote monitoring” and data collection from a distance.
At the same time, we’re witnessing (and I hope, also joining in with) a global call for justice — perhaps a tipping point — in the wake of decades of racist and colonialist systems that operate at the level of nations, institutions, organizations, the global aid and development systems, and the tech sector. There is no denying that these power dynamics and systems have shaped the MERL space as a whole, and the MERL Tech space as well.
Moments of crisis tend to test a field, and we live in extreme times. The coming decade will demand a nimble, adaptive, fair, and just use of data for managing complexity and for gaining longer-term understanding of change and impact. Perhaps most importantly, in 2020 and beyond, we need meaningful involvement of stakeholders at every level and openness to a re-shaping of our sector and its relationships and power dynamics.
It is in this time of upheaval and change that we are releasing a set of four papers that aim to take stock of the field from 2014-2019 as a launchpad for shaping the future of MERL Tech. In September 2018, the papers’ authors began reviewing the past five years of MERL Tech events to identify lessons, trends, and issues in this rapidly changing field. They also reviewed the literature base in an effort to determine what we know, what we yet need to understand about technology in MERL, and what the gaps in the formal literature are. No longer is this a nascent field, yet it is one that is hard to keep up with, given that it is fast paced and constantly shifting with the advent of new technologies. We have learned many lessons over the past five years, but complex political, technical, and ethical questions remain.
The State of the Field series includes four papers:
What We Know About Traditional MERL Tech: Insights from a Scoping Review: Zach Tilton, Michael Harnar, and Michele Behr, Western Michigan University; Soham Banerji and Manon McGuigan, independent consultants; and Paul Perrin, Gretchen Bruening, John Gordley and Hannah Foster, University of Notre Dame; Linda Raftree, independent consultant and MERL Tech Conference organizer.
Through these papers, we aim to describe the State of the Field up to 2019 and to offer a baseline point in time from which the wider MERL Tech community can take action to make the next phase of MERL Tech development effective, responsible, ethical, just, and equitable. We share these papers as conversation pieces and hope they will generate more discussion in the MERL Tech space about where to go from here.
We’d like to start or collaborate on a second round of research to delve into areas that were under-researched or less developed. Your thoughts are most welcome on topics that need more research, and if you are conducting research about MERL Tech, please get in touch and we’re happy to share here on MERL Tech News or to chat about how we could work together!