by Emily Tomkys Valteri, the ICT in Program Accountability Project Manager at Oxfam GB. In her role, Emily drives Oxfam’s thinking on the use of Information and Communications Technologies (ICT) for accountability and supports staff with applications of ICTs within their work.
Every day the human race generates enough data to fill 10 million Blu-ray discs; stacked up, they would reach four times the height of the Eiffel Tower. Although the data we process at Oxfam is tiny in comparison, sometimes the journey towards being "data driven" feels like following the yellow brick road to the Emerald City. It seems like a grand ideal, but for anyone who knows the film, inflated expectations are set to be dashed. Does data actually help organisations like Oxfam better understand the needs of communities affected by disaster or poverty? Or do we need to pull back the curtain and manage our expectations about getting the basics right? When there are no ruby slippers, we need to understand what we can do with data and improve the way data is managed and analysed across countries and projects.
Oxfam works in over 90 countries using a variety of different data management and analysis tools that are developed or purchased in country. In the past, we have experimented with software licences and database expertise, but we have started aiming for a more joined-up approach. It's our belief that good systems which build in privacy by design can help us stay true to the values in our rights-based Responsible Program Data Policy and Information Systems Data Security guidelines – which are about treating the people the data is about with dignity and respect.
One of our most intractable challenges is that Oxfam's data is analysed in system silos. Data is usually collected and viewed through a project-level lens. Different formats and data standards make it difficult to compare across countries, regions or even globally. When data remains in source systems, analysing across them is slow and manual, meaning that any meta-analysis is rarely done. One of the key tenets of Responsible Data is to collect only data you can use, and to make the most of that information to effectively meet people's needs. Oxfam collects a lot of valuable data and we think we need to do more with it: analyse it more efficiently and effectively, at national level and beyond, to drive decision making in our programmes.
In response, Oxfam has begun creating the DataHub: a system which integrates programme data into a standard set of databases and presents it to a reporting layer for analysis. It bakes in principles of privacy and compliance with new data protection laws by design. Working with our in-house agile software development team, we conducted four tech sprints, each lasting two weeks. Now we have the foundations. Data from one of our standard data collection tools, SurveyCTO, is pushed via a webhook into our unstructured database, Azure Cosmos DB. Within this database, the data is organised into collections, currently set up by country. From here, the data can be queried using Power BI and presented to programme teams for analysis. Although we have only one source system feeding quantitative analysis for now, the bigger picture will have lots of source systems and a variety of analysis options available.
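To make the ingestion step concrete, here is a minimal Python sketch of the flow described above: a webhook payload arrives as JSON and is filed into a per-country collection, mirroring how the DataHub organises documents in Azure Cosmos DB. The field names (`submission_id`, `country`, `responses`) are illustrative assumptions, not SurveyCTO's actual export schema, and a plain dictionary stands in for the database.

```python
# Sketch of the DataHub ingestion step: a SurveyCTO-style webhook payload
# is routed into a per-country "collection". In the real system this
# would be a write to Azure Cosmos DB; here a dict of lists stands in
# for the per-country collections. All field names are assumed examples.
import json
from collections import defaultdict

collections = defaultdict(list)  # country code -> list of documents

def ingest(payload_json: str) -> str:
    """Parse a webhook payload and file it under its country collection."""
    doc = json.loads(payload_json)
    country = doc.get("country", "unknown")
    collections[country].append(doc)
    return country

# Example payload, as a webhook might deliver it:
payload = json.dumps({
    "submission_id": "uuid-001",
    "country": "KE",
    "responses": {"household_size": 5},
})
ingest(payload)
print(len(collections["KE"]))  # one document filed under "KE"
```

From a reporting tool's point of view, each collection is then a self-contained, queryable set of documents for that country, which is what makes the Power BI layer on top straightforward.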
To get to where we are today, Oxfam's ICT in Programme team worked closely with the Information Systems teams to develop a solution in line with strategy and future trends. Despite the technology being new to Oxfam, the solution is relatively simple, and we ensured good process and interoperability, and that the tools available to us were fit for purpose. This collaborative approach gave us the organisational support to prioritise these activities as well as the resources required to carry them out.
This journey wasn’t without its challenges, some of which are still being worked on. The EU General Data Protection Regulation (GDPR) came into force in May 2018, and Oxfam has had to design the DataHub with it in mind. At this stage, data is anonymised during integration, so no Personally Identifiable Information (PII) enters the DataHub, thanks to a series of configurations and processes we have put in place. Training and capacity are another challenge: we need to encourage a culture of valuing the data. The system will only benefit teams and the organisation if they make use of it, investing time and resources in learning it.
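The anonymisation step described above can be sketched as a small transformation that runs before a record enters the DataHub: PII fields are dropped, and the linking identifier is replaced with a one-way hash so records about the same respondent can still be connected without storing who they are. The PII field list, field names and salt below are hypothetical examples; the post does not describe Oxfam's actual configuration.

```python
# Sketch of an anonymisation pass applied during integration, before any
# record enters the DataHub. PII_FIELDS and the record layout are assumed
# examples, not Oxfam's real configuration.
import hashlib

PII_FIELDS = {"name", "phone_number", "gps_location"}  # illustrative list

def anonymise(record: dict, link_field: str = "respondent_id") -> dict:
    """Drop PII fields and pseudonymise the linking identifier."""
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    # Replace the identifier with a salted one-way hash: the same
    # respondent always maps to the same token, but the token cannot
    # be reversed to recover the original ID.
    if link_field in clean:
        digest = hashlib.sha256(
            ("example-salt:" + str(clean[link_field])).encode()
        ).hexdigest()
        clean[link_field] = digest[:16]
    return clean

record = {"respondent_id": "R42", "name": "Jane Doe",
          "phone_number": "+254700000000", "household_size": 5}
safe = anonymise(record)
print(sorted(safe))  # ['household_size', 'respondent_id']
```

Doing this at the integration boundary, rather than inside each analysis tool, is what lets everything downstream (Cosmos DB, Power BI reports) be treated as PII-free by construction.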
We are excited about the potential of the DataHub and the success we have already had in setting up the infrastructure to enable more efficient data analysis and more responsive programming, as well as saving resources. We are keen to work with others and share ideas. We know there is a lot of work ahead to foster a data-driven organisation, but we're starting to feel that, with the right balance of technology, process and culture, it's more realistic than we might first have hoped.