The Good, the Bad, and the Ugly of Using IATI Results Data
This is a cross-post from Taryn Davis of Development Gateway. The original was published here on September 19th, 2017. Taryn and Reid Porter led the “Making open data on results useful” session at MERL Tech DC.
It didn’t surprise me when I learned that — when Ministry of Finance officials conduct trainings on the Aid Management Platform for Village Chiefs, CSOs and citizens throughout the districts of Malawi — officials are almost immediately asked:
“What were the results of these projects? What were the outcomes?”
It didn’t just matter what development organizations said they would do — it also mattered what they actually did.
We’ve heard the same question echoed by a number of agriculture practitioners interviewed as part of the Initiative for Open Ag Funding. When asked what information they need to make better decisions about where and how to implement their own projects, many replied:
“We want to know — if [others] were successful — what did they do? If they weren’t successful, what shouldn’t we do?”
This interest in understanding what went right (or wrong) came not from wanting to point fingers, but from a genuine desire to learn. In considering how to publish and share data, the importance of (and interest in!) learning cannot be overstated.
At MERL Tech DC earlier this month, we decided to explore the International Aid Transparency Initiative (IATI) format, currently used by organizations and governments globally for publishing aid and results data. For this hands-on exercise, we printed out data on different types of projects from the D-Portal website, including any evaluation documents attached to the published data. We then asked participants to answer the following questions about each project:
- What were the successes of the project?
- What could be replicated?
- What are the pitfalls to be avoided?
- Where did it fail?
Taryn Davis leading participants through using IATI results data at MERL Tech DC
We then discussed whether participants were (or were not) able to answer these questions with the data provided. Here is the Good, the Bad, and the Ugly of what participants shared:
The Good
- Many were impressed that this data, particularly the evaluation documents, was shared and made public at all, not hidden behind closed doors.
- For those analyzing evaluation documents, the narrative was helpful for answering our four questions, versus having just the indicators without any context.
- One attendee noted that this data would be helpful in planning project designs for business development purposes.
The Bad
- There were challenges with data quality. For example, some data were missing units, making it difficult to tell whether the number “50” was a percentage, a dollar amount, or something else (see the sketch after this list).
- Some found the organizations’ own evaluation formats easier to understand than what was displayed on D-Portal, while others were given evaluations in more complex formats that made it difficult to identify key takeaways. Overall, readability varied, and format matters: sometimes fewer columns are more readable. There is a fine line between too little information (missing units) and a fire hose of information (gigantic documents).
- Because most of the useful content was in narrative form in the attachments, the indicators entered in the IATI standard were, on their own, less helpful for answering our four questions.
- There were no visualizations to give a quick takeaway on project success. A visual aid would help users spot “successes” and “failures” more quickly, without having to spend as much time digging and comparing, leaving more time to look at specific cases and focus on the narrative.
- Some data were missing time periods, making it hard to judge how relevant they would be to those interested in using them.
- Data were often disorganized and included spelling mistakes.
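Several of these “Bad” items are mechanical enough to catch before publishing. Below is a minimal sketch (not a full validator, and not part of any official IATI tooling) of that idea, assuming an IATI 2.0x activity file saved locally as activities.xml; the file name and the specific checks are illustrative.

```python
import xml.etree.ElementTree as ET

# Minimal sketch: flag two of the gaps participants ran into, namely
# indicators published without a measure (unit) code and result periods
# missing their start or end dates. Assumes a local IATI 2.0x file.
tree = ET.parse("activities.xml")

for activity in tree.getroot().findall("iati-activity"):
    activity_id = activity.findtext("iati-identifier", default="(no identifier)")
    for indicator in activity.findall("result/indicator"):
        # Without the 'measure' attribute, a value like "50" is ambiguous:
        # a unit count, a percentage, or something else entirely.
        if not indicator.get("measure"):
            print(f"{activity_id}: indicator has no measure (unit) code")
        for period in indicator.findall("period"):
            # Periods without dates make it hard to judge relevance.
            start = period.find("period-start")
            end = period.find("period-end")
            if start is None or end is None or not (
                start.get("iso-date") and end.get("iso-date")
            ):
                print(f"{activity_id}: result period is missing its dates")
```

A lightweight check like this, run before publishing, would have caught the ambiguous “50” that left participants guessing.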
The Ugly
- Reading the data “felt like reading the SAT”: challenging to comprehend.
- The data and documents weren’t typically forthcoming about challenges and lessons learned.
- Participants weren’t able to discern any real, tangible learning that could be practically applied to other projects.
Fortunately, the “Bad” elements can be addressed relatively easily. We’ve spent time reviewing organizations’ results data published in IATI and providing feedback to improve data quality and make the data cleaner and easier to understand.
However, the “Ugly” elements are really key for organizations that want to share their results data. To move beyond a “transparency gold star” and achieve shared learning and better development, organizations need to ask themselves:
“Are we publishing the right information, and are we publishing it in a usable format?”
As we noted earlier, it’s not just the indicators that data users are interested in, but how projects achieved (or didn’t achieve) those targets. Users want to engage with the “L” in Monitoring, Evaluation, Research, and Learning (MERL). For organizations, this might be as simple as reporting “Citizens weren’t interested in adding quinoa to their diet, so they didn’t sell as much as expected,” or “The Village Chief was well respected and supported the project, which really helped citizens gain trust and attend our trainings.”
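The IATI format already has a place for this kind of short lesson: a result indicator’s reported actual value can carry a comment narrative. Here is a minimal sketch of what that might look like, built with Python’s standard library; the element and attribute names follow the IATI 2.0x activity standard, while the indicator title, dates, and values are purely illustrative (borrowing the quinoa example above).

```python
import xml.etree.ElementTree as ET

# Build one result indicator whose reported actual value carries a short
# lessons-learned narrative. Element/attribute names follow the IATI 2.0x
# activity standard; the title, dates, and numbers are illustrative only.
indicator = ET.Element("indicator", measure="1")  # measure 1 = unit count
title = ET.SubElement(indicator, "title")
ET.SubElement(title, "narrative").text = "Quinoa sold by supported farmers (kg)"

period = ET.SubElement(indicator, "period")
ET.SubElement(period, "period-start", {"iso-date": "2016-01-01"})
ET.SubElement(period, "period-end", {"iso-date": "2016-12-31"})
ET.SubElement(period, "target", value="5000")

actual = ET.SubElement(period, "actual", value="2100")
comment = ET.SubElement(actual, "comment")
ET.SubElement(comment, "narrative").text = (
    "Citizens weren't interested in adding quinoa to their diet, "
    "so sales fell well short of the target."
)

print(ET.tostring(indicator, encoding="unicode"))
```

The point is less the XML mechanics than the habit: the one-sentence “why” travels in the same record as the number, so the next implementer sees both together.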
This learning is important for organizations internally, enabling them to understand and learn from their own data; it’s also important for the wider development community. In hindsight, what do you wish you had known about implementing an irrigation project in rural Tanzania before you started? That’s what we should be sharing.
In order to do this, we must update our data publishing formats (and mindsets) so that we can answer questions like, “How did this project succeed? What can be replicated? What are the pitfalls to avoid? Where did it fail?” Answering these kinds of questions, and enabling actual learning, should be a key goal for all projects and programs, and it should not feel like an SAT exam every time we do so.
Image Credit: Reid Porter, InterAction