When seeking information for a project baseline, midline, endline, or anything in between, it has become second nature to budget for collecting (or commissioning) primary data ourselves.
Really, it would be more cost- and time-effective for all involved if we got better at asking peers in the space for existing reports or datasets. This is also an area where our donors, particularly those with large country portfolios, could help with introductions and matchmaking.
Consider the Public Option
Speaking of donors, here is a second point: why are we implementers responsible for collecting MERL-relevant data in the first place?
For example, one DFID Country Office we worked with noted that a lack of solid population and demographic data limited their ability to monitor all DFID country programming. As a result, DFID decided to co-fund the country’s first census in 30 years – which benefited DFID and non-DFID programs.
The term "country systems" can sound a bit esoteric, pretty OECD-like, but country systems really can be a cost-effective public good if properly resourced by governments (or donor agencies) and made widely available.
Flip the Paradigm
And finally, a third way to get more bang for our buck is – ready or not – Results Based Financing, or RBF. RBF is coming (and, for folks in health, it’s probably arrived). In an RBF program, payment is made only when pre-determined results have been achieved and verified.
But another way to think about RBF is as an extreme paradigm shift of putting M&E first in program design. RBF may be the shake-up we need, in order to move from monitoring what already happened, to monitoring events in real-time. And in some cases – based on evidence from World Bank and other programming – RBF can also incentivize data sharing and investment in country systems.
Ultimately, the goal of MERL should be using data to improve decisions today. Through better sharing, systems thinking, and (maybe) a paradigm shake-up, we stand to gain a lot more mileage with our 3%.
It didn't surprise me to learn that when Ministry of Finance officials conduct trainings on the Aid Management Platform for Village Chiefs, CSOs, and citizens throughout the districts of Malawi, they are almost immediately asked:
“What were the results of these projects? What were the outcomes?”
It didn't just matter what development organizations said they would do; it also mattered what they actually did.
We’ve heard the same question echoed by a number of agriculture practitioners interviewed as part of the Initiative for Open Ag Funding. When asked what information they need to make better decisions about where and how to implement their own projects, many replied:
“We want to know — if [others] were successful — what did they do? If they weren’t successful, what shouldn’t we do?”
This interest in understanding what went right (or wrong) came not from wanting to point fingers, but from a genuine desire to learn. In considering how to publish and share data, the importance of, and interest in, learning cannot be overstated.
At MERL Tech DC earlier this month, we decided to explore the International Aid Transparency Initiative (IATI) format, currently being used by organizations and governments globally for publishing aid and results data. For this hands-on exercise, we printed different types of projects from the D-Portal website, including any evaluation documents included in the publication. We then asked participants to answer the following questions about each project:
What were the successes of the project?
What could be replicated?
What are the pitfalls to be avoided?
Where did it fail?
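Before digging into the discussion, it helps to see what published results data actually looks like under the hood. Here is a minimal sketch of reading IATI-style result elements with Python's standard library. The XML fragment and its values are invented for illustration, though the element names (`result`, `indicator`, `period`, `period-start`, `target`, `actual`) follow the shape of the IATI 2.x activity standard:

```python
# Hedged sketch: pull indicator targets and actuals out of an
# IATI-style activity fragment. Values below are made up.
import xml.etree.ElementTree as ET

SNIPPET = """
<iati-activity>
  <result type="1">
    <indicator measure="1">
      <title><narrative>Households with improved irrigation</narrative></title>
      <period>
        <period-start iso-date="2015-01-01"/>
        <period-end iso-date="2015-12-31"/>
        <target value="500"/>
        <actual value="430"/>
      </period>
    </indicator>
  </result>
</iati-activity>
"""

activity = ET.fromstring(SNIPPET)
for indicator in activity.iter("indicator"):
    title = indicator.findtext("title/narrative")
    for period in indicator.iter("period"):
        start = period.find("period-start").get("iso-date")
        end = period.find("period-end").get("iso-date")
        target = period.find("target").get("value")
        actual = period.find("actual").get("value")
        print(f"{title}: {actual}/{target} ({start} to {end})")
```

Note what this machine-readable core gives you: numbers and dates, but no story about why the actual fell short of the target. That gap is exactly what our exercise surfaced.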
Taryn Davis leading participants through using IATI results data at MERL Tech DC
We then discussed whether participants were (or were not) able to answer these questions with the data provided. Here is the Good, the Bad, and the Ugly of what participants shared:
The Good

Many were impressed that this data, particularly the evaluation documents, was shared and made public at all, not hidden behind closed doors.
For those analyzing evaluation documents, the narrative was helpful for answering our four questions, versus having just the indicators without any context.
One attendee noted that this data would be helpful in planning project designs for business development purposes.
The Bad

There were challenges with data quality. For example, some data was missing units, making values hard to interpret: was the number "50" a percent, a dollar amount, or something else?
Some found the organizations' evaluation formats easier to understand than what was displayed on D-portal. Others were given evaluations with a more complex format, making it difficult to identify key takeaways. Overall, readability varied, and format matters: sometimes fewer columns are more readable. There is a fine line between too little information (missing units) and a fire hose of information (gigantic documents).
The indicators entered in the IATI standard weren't enough on their own; it was the attached documents, with their fuller narrative content, that made it possible to answer our four questions.
There were no visualizations for a quick takeaway on project success. A visual aid would help users grasp "successes" and "failures" more quickly, without having to spend as much time digging and comparing, freeing them up to look at specific cases and focus on the narrative.
Some data was missing time periods, making it hard to know how relevant it would be for those interested in using the data.
Data was often disorganized, and included spelling mistakes.
Reading data “felt like reading the SAT”: challenging to comprehend.
The Ugly

The data and documents weren't typically forthcoming about challenges and lessons learned.
Participants weren’t able to discern any real, tangible learning that could be practically applied to other projects.
Fortunately, the "Bad" elements can be addressed relatively easily. We've spent time reviewing organizations' results data published in IATI, providing feedback to improve data quality and to make the data cleaner and easier to understand.
The "Ugly" elements, however, are the real challenge for organizations that want to share their results data. To move beyond a "transparency gold star" and achieve shared learning and better development, organizations need to ask themselves:
“Are we publishing the right information, and are we publishing it in a usable format?”
As we noted earlier, it’s not just the indicators that data users are interested in, but how projects achieved (or didn’t achieve) those targets. Users want to engage in the “L” in Monitoring, Evaluation, and Learning (MERL). For organizations, this might be as simple as reporting “Citizens weren’t interested in adding quinoa to their diet so they didn’t sell as much as expected,” or “The Village Chief was well respected and supported the project, which really helped citizens gain trust and attend our trainings.”
This learning is important for organizations internally, enabling them to understand and learn from their own data; it's also important for the wider development community. In hindsight, what do you wish you had known about implementing an irrigation project in rural Tanzania before you started? That's what we should be sharing.
In order to do this, we must update our data publishing formats (and mindsets) so that we can answer questions like: "How did this project succeed? What can be replicated? What are the pitfalls to avoid? Where did it fail?" Answering these kinds of questions, and enabling actual learning, should be a key goal for all projects and programs; and it should not feel like an SAT exam every time we do so.