
MERL Tech Maturity Models

by Maliha Khan, a development practitioner in the fields of design, measurement, evaluation and learning, who led the Maturity Model sessions at MERL Tech DC, and Linda Raftree, independent consultant and lead organizer of MERL Tech.

MERL Tech is a platform for discussion, learning and collaboration around the intersection of digital technology and Monitoring, Evaluation, Research, and Learning (MERL) in the humanitarian and international development fields. The MERL Tech network is multidisciplinary and includes researchers, evaluators, development practitioners, aid workers, technology developers, data analysts and data scientists, funders, and other key stakeholders.

One key goal of the MERL Tech conference and platform is to bring people from diverse backgrounds and practices together to learn from each other and to coalesce MERL Tech into a more cohesive field in its own right — a field that draws from the experiences and expertise of these various disciplines. MERL Tech tends to bring together six broad communities:

  • traditional M&E practitioners, who are interested in technology as a tool to help them do their work faster and better;
  • development practitioners, who are running ICT4D programs and beginning to pay more attention to the digital data produced by these tools and platforms;
  • business development and strategy leads in organizations who want to focus more on impact and keep their organizations up to speed with the field;
  • tech people who are interested in the application of newly developed digital tools, platforms and services to the field of development, but may lack knowledge of the context and nuance of that application;
  • data people, who are focused on data analytics, big data, and predictive analytics, but similarly may lack a full grasp of the intricacies of the development field;
  • donors and funders who are interested in technology, impact measurement, and innovation.

Since our first series of Technology Salons on ICT and M&E in 2012 and the first MERL Tech conference in 2014, the aim has been to create stronger bridges between these diverse groups and to encourage the formation of a new field with an identity of its own. In other words, we want to move people beyond identifying as, say, an “evaluator who sometimes uses technology,” and towards identifying as members of the MERL Tech space (or field or discipline), with a clearer understanding of how these various elements work together and play off one another, and how they influence (and are influenced by) the shifts and changes happening in the wider ecosystem of international development.

By building and strengthening these divergent interests and disciplines into a field of their own, we hope that the community of practitioners can begin to better understand their own internal competencies and what they, as a unified field, offer to international development. This is a challenging prospect: beyond a shared use of technology to gather, analyze, and store data, and an interest in better understanding how, when, why, and where these tools work for MERL and for development/humanitarian programming, there aren’t many similarities between participants.

At the MERL Tech London and MERL Tech DC conferences in 2017, we made a concerted effort to get to the next level in the process of creating a field. In London in February, participants created a timeline of technology and MERL and identified key areas that the MERL Tech community could work on strengthening (such as data privacy and security frameworks and more technological tools for qualitative MERL efforts). At MERL Tech DC, we began trying to understand what a ‘maturity model’ for MERL Tech might look like.

What do we mean by a ‘maturity model’?

Broadly, maturity models seek to qualitatively assess people/culture, processes/structures, and objects/technology to craft a predictive path that an organization, field, or discipline can take in its development and improvement.

Initially, we considered constructing a “straw” maturity model for MERL Tech and presenting it at the conference. The idea was that our straw model’s potential flaws would spark debate and discussion among participants. In the end, however, we decided against this approach because (a) we were worried that our straw model would unduly influence people’s opinions, and (b) we were not very confident in our own ability to construct a good maturity model.

Instead, we opted to facilitate a creative space over three sessions to encourage discussion on what a maturity model might look like and what it might contain. Our vision for these sessions was to get participants to brainstorm in mixed groups containing different types of people; we didn’t want small subsets of participants to create models independently, without the input of others.

In the first session, “Developing a MERL Tech Maturity Model”, we invited participants to consider what a maturity model might look like. Could we begin to imagine a graphic model that would enable self-evaluation and allow informed choices about how to best develop competencies, change and adjust processes and align structures in organizations to optimize using technology for MERL or indeed other parts of the development field?

In the second session, “Where do you sit on the Maturity Model?”, we asked participants to use the ideas that emerged from the brainstorm in the first session to consider their own organizations and work and compare them against potential maturity models. We encouraged participants to assess themselves as green (young sapling), yellow (somewhere in the middle), or red (mature MERL Tech ninja!) and to strike up conversations with other people during the breaks about why they chose that color.

In our third session, “Something old, something new”, we consolidated and synthesized the various concepts participants had engaged with throughout the conference. Everyone was encouraged to reflect on their own learning, lessons for their work, and what new ideas or techniques they may have picked up on and might use in the future.

The Maturity Models

As can be expected when over 300 people take markers and crayons to paper, many a creative model emerged. We asked participants to do a gallery walk of the models during the breaks over the following day and to vote on their favorites.

We won’t go into detail about what all 24 models showed, but some common themes emerged from those that received the most votes: almost all of the maturity models included dimensions (elements, components), stages, and a depiction of potential progression from early stages to later stages across each dimension. They also showed who the key stakeholders or players were, and some included details on what might be expected of them at different stages of maturity.
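
Purely as an illustration of that shared shape (and not a reconstruction of any model presented at the conference), a maturity model can be thought of as a small data structure: a set of dimensions, each placed on an ordered scale of stages. The dimension names and stage labels below are invented for the sketch.

```python
# Hypothetical sketch of the generic "dimensions x stages" structure the models shared.
# Dimension names and stage labels are invented, not taken from any conference model.
from dataclasses import dataclass

STAGES = ["nascent", "emerging", "established", "mature"]  # ordered, early -> late


@dataclass
class Dimension:
    name: str
    stage: str  # one of STAGES

    def score(self) -> int:
        """Position of this dimension on the maturity scale (0 = earliest stage)."""
        return STAGES.index(self.stage)


# An imaginary organization's self-assessment across a few dimensions
assessment = [
    Dimension("data collection tools", "established"),
    Dimension("staff capacity", "emerging"),
    Dimension("data privacy & security", "nascent"),
]

for d in assessment:
    print(f"{d.name}: {d.stage} (stage {d.score() + 1} of {len(STAGES)})")
```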

Two of the models (MERLvana and the Data Appreciation Maturity Model, or DAMM) depicted the notion that reaching maturity is never really possible and that the process is an almost infinite loop. As the presenters explained MERLvana: “it’s impossible to reach the ideal state, but one must keep striving for it, in ever closer and tighter loops with fewer and fewer gains!”

MERLvana
Data Appreciation Maturity Model

“MERL-tropolis” had clearly defined categories (universal understanding, learning culture and awareness, common principles, and programmatic strategy) and the structures/ buildings needed for those (staff, funding, tools, standard operating procedures, skills).

MERL-tropolis

The most popular was “The Data Turnpike,” which showed the route from a starting point of “Implementation with no data” to a finish line of “Technology, capacity and interest in data and adaptive management,” with the pitfalls along the way (misuse, untimely data, low ethics, etc.) marked to warn of the dangers.

The Data Turnpike

As organizers of the sessions, we found the exercises both interesting and enlightening, and we hope they helped participants begin thinking about their own MERL Tech practice in a more structured way. Participant feedback was polarized. Some people loved the exercise and felt that it allowed them to step back and think about how they and their organizations were approaching MERL Tech and how they could move forward more systematically, building greater capacity and higher quality work. Some told us that they left with clear ideas on how they would work within their organizations to improve and enhance their MERL Tech practice, and that they had a better understanding of how to go about that. A few did not like being asked to “sit around drawing pictures,” and others felt the exercise was unclear and that we should have provided a model rather than asking people to create one. [Note: This is an ongoing challenge when bringing together so many types of participants from such diverse backgrounds and varied ways of thinking and approaching things!]

We’re curious if others have worked with “maturity models” and if they’ve been applied in this way or to the area of MERL Tech before. What do you think about the models we’ve shared? What is missing? How can we continue to think about this field and strengthen our theory and practice? What should we do at MERL Tech London in March 2018 and beyond to continue these conversations?

Technology in MERL: an approximate history

by Linda Raftree, MERL Tech co-organizer.

At MERL Tech London, Maliha Khan led us in an exercise to map out our shared history of MERL Tech. Following that, we did some prioritizing around potential next steps for the sector (which I’ll cover in a follow-up post).

She had us each write down 1) when we first got involved in something related to MERL Tech, and 2) what we would identify as a defining moment or key event, either in the wider field or in our own experience with MERL Tech.

The results were a crowdsourced MERL Tech Timeline on the wall.

 

An approximate history of tech in MERL 

We discussed the general flow of how technology had come to merge with MERL in humanitarian and development work over the past 20 years. The purpose was not to debate about exact dates, but to get a sense of how the field and community had emerged and how participants had experienced its ebbs and flows over time.

Some highlights:

  • 1996 digital photos being used in community-led research
  • 1998 mobile phones start to creep more and more into our work
  • 2000 the rise of SMS
  • 2001 spread of mobile phone use among development/aid workers, especially when disasters hit
  • 2003 Mobile Money comes onto the scene
  • 2004 enter smartphones; the Asian tsunami happens and illustrates the need for greater collaboration
  • 2005 increased focus on smartphones; enter Google Maps
  • 2008 IATI, Hans Rosling interactive data talk/data visualization
  • 2009 ODK, FrontlineSMS, more and more Mobile Money and smartphones, open data; global ICT4D conference
  • 2010 Haiti earthquake – health, GIS and infrastructure data collected at large scale; SMS reporting and mapping
  • 2011 FrontlineSMS’ data integrity guide
  • 2012 introduction and spread of cloud services in our work; more and more mapping/GIS in humanitarian and development work
  • 2013 more focus and funding from donors for tech-enabled work, more awareness and work on data standards and protocols, more use of tablets for data collection, bitcoin and blockchain enter the humanitarian/development scene; big data
  • 2014 landscape report on use of ICTs for M&E; MERL Tech conference starts to come together; Responsible Data Forum; U-Report and feedback loops; thinking about SDGs and Data revolution
  • 2015 Ebola crisis leads to different approach to data, big data concerns and ‘big data disasters’, awareness of the need for much improved coordination on tech and digital data; World Bank Digital Dividends report; Oxfam Responsible Data policy
  • 2016 real-time data and feedback loops are better unpacked and starting to be more integrated, adaptive management focus, greater awareness of need of interoperability, concerns about digital data privacy and security
  • 2017 MERL Tech London and the coming-together of the related community

What do you think? What’s missing? We’d love to have a more complete and accurate timeline at some point…. 

 

How to Develop and Implement Responsible Data Policies


A friend reminded me at the MERL Tech Conference that a few years ago when we brought up the need for greater attention to privacy, security and ethics when using ICTs and digital data in humanitarian and development contexts, people pointed us to Tor, encryption and specialized apps. “No, no, that’s not what we mean!” we kept saying. “This is bigger. It needs to be holistic. It’s not just more tools and tech.”

So, even if as a sector we are still struggling to understand and address all the different elements of what’s now referred to as “Responsible Data” (thanks to the great work of the Engine Room and key partners), at least we’ve come a long way towards framing and defining the areas we need to tackle. The issue is increasingly urgent: the volume of data in the world is growing exponentially, and the data in our sector is becoming more and more digital.

This year’s MERL Tech included several sessions on Responsible Data, including Responsible Data Policies, the Human Element of the Data Cycle, The Changing Nature of Informed Consent, Remote Monitoring in Fragile Environments and plenary talks that mentioned ethics, privacy and consent as integral pieces of any MERL Tech effort.

The session on Responsible Data Policies was a space to share with participants why, how, and what policies some organizations have put in place in an attempt to be more responsible. The presenters spoke about the different elements and processes their organizations have followed, and the reasoning behind the creation of these policies. They spoke about early results from the policies, though it is still early days when it comes to implementing them.

What do we mean by Responsible Data?

Responsible data is about more than just privacy or encryption. It’s a wider concept that includes attention to the data cycle at every step, and puts the rights of people reflected in the data first:

  • Clear planning and purposeful collection and use of data with the aim of improving humanitarian and development approaches and results for those we work with and for
  • Responsible treatment of the data and respectful and ethical engagement with people we collect data from, including privacy and security of data and careful attention to consent processes and/or duty of care
  • Clarity on data sharing – what data, from whom and with whom and under what circumstances and conditions
  • Attention to transparency and accountability efforts in all directions (upwards, downwards and horizontally)
  • Responsible maintenance, retention or destruction of data.

Existing documentation and areas to explore

There is a huge bucket of concepts, frameworks, laws and policies that already exist in various other sectors and that can be used, adapted and built on to develop responsible approaches to data in development and humanitarian work. Some of these are in conflict with one another, however, and those conflicts need to be worked out or at least recognized if we are to move forward as a sector and/or in our own organizations.

Some areas to explore when developing a Responsible Data policy include:

  • An organization’s existing policies and practices (IT and equipment; downloading; storing of official information; confidentiality; monitoring, evaluation and research; data collection and storage for program administration, finance and audit purposes; consent and storage for digital images and communications; social media policies).
  • Local and global laws that relate to collection, storage, use and destruction of data, such as: Freedom of information acts (FOIA); consumer protection laws; data storage and transfer regulations; laws related to data collection from minors; privacy regulations such as the latest from the EU.
  • Donor grant requirements related to data privacy and open data, such as USAID’s ADS Chapter 579 or International Aid Transparency Initiative (IATI) stipulations.

Experiences with Responsible Data Policies

At the MERL Tech Responsible Data Policy session, organizers and participants shared their experiences. The first step for everyone developing a policy was establishing wide agreement and buy-in for why their organizations should care about Responsible Data. This was done by developing Values and Principles that form the foundation for policies and guidance.

Oxfam’s Responsible Data policy has a focus on rights, since Oxfam is a rights-based organization. The organization’s existing values made it clear that ethical use and treatment of data was something the organization must consider to hold true to its ethos.

It took around six months to get all of the global affiliates to agree on the Responsible Program Data policy, a quick turnaround compared to other globally agreed documents because all the global executive directors recognized that this policy was critical.

A core point for Oxfam was the belief that digital identities and access will become increasingly important for inclusion in the future, and so the organization did not want to stand in the way of people being counted and heard. However, it wanted to be sure that this was done in a way that took privacy and security into consideration.

The policy is a short document that is now being operationalized in all the countries where Oxfam works. Because many of Oxfam’s affiliate headquarters reside in the European Union, it needs to consider the new EU regulations on data, which are extremely strict; for example, they require that everyone be given the option to withdraw consent.

This poses a challenge for development agencies, which normally do not keep the kind of detailed databases on ‘beneficiaries’ that they keep on private donors. Shifting thinking about ‘beneficiaries’ and treating them more as clients may be in order as one result of these new regulations. As Oxfam moves into implementation, challenges continue to arise.

For example, data protection in Yemen is different than data protection in Haiti. Knowing all the national level laws and frameworks and mapping these out alongside donor requirements and internal policies is extremely complicated, and providing guidance to country staff is difficult given that each country has different laws.

Girl Effect’s policy has a focus on privacy, security and safety of adolescent girls, who are the core constituency of the organization.

The policy became clearly necessary because although the organization had a strong girl safeguarding policy and practice, the effect of digital data had not previously been considered, and the number of programs that involve digital tools and data is increasing. The Girl Effect policy currently has four core chapters: privacy and security during design of a tool, service or platform; content considerations; partner vetting; and MEAL considerations.

Girl Effect looks at not only the privacy and security elements, but also aims to spur thinking about potential risks and unintended consequences for girls who access and use digital tools, platforms and content. One core goal is to stimulate implementers to think through a series of questions that help them to identify risks. Another is to establish accountability for decisions around digital data.

The policy has been in process of implementation with one team for a year and will be updated and adapted as the organization learns. It has proven to have good uptake so far from team members and partners, and has become core to how the teams and the wider organization think about digital programming. Cost and time for implementation increase with the incorporation of stricter policies, however, and it is challenging to find a good balance between privacy and security, the ability to safely collect and use data to adapt and improve tools and platforms, and user friendliness/ease of use.

Catholic Relief Services has an existing set of eight organizational principles: Sacredness and Dignity of the human person; Rights and responsibilities; Social Nature of Humanity; The Common Good; Subsidiarity; Solidarity; Option for the Poor; Stewardship.

It was a natural fit to see how these values, already embedded in the organization, could extend to the idea of Responsible Data. Data is an extension of the human person; therefore, it should be afforded the same respect as the individual. The principle of ‘common good’ easily extends to responsible data sharing.

The notion of subsidiarity says that decision-making should happen as close as possible to the place where the impact of the decision will be the strongest, and this is nicely linked with the idea of sharing data back with communities where CRS works and engaging them in decision-making. The option for the poor urges CRS to place a preferential value on privacy, security and safety of the data of the poor over the data demands of other entities.

The organization is at the initial phase of creating its Responsible Data Policy. The process includes the development of the values and principles, two country learning visits to understand the practices of country programs and their concerns about data, development of the policy, and a set of guidelines to support staff in following the policy.

USAID recently embarked on its process of developing practical Responsible Data guidance to pair with its efforts in the area of open data. (See ADS 579). More information will be available soon on this initiative.

Where are we now?

Though several organizations are moving towards the development of policies and guidelines, it was clear from the session that uncertainties are the order of the day, as Responsible Data is an ethical question, often relying on tradeoffs and decisions that are not hard and fast. Policies and guidelines generally aim to help implementers ask the right questions, sort through a range of possibilities and weigh potential risks and benefits.

Another critical aspect that was raised at the MERL Tech session was the financial and staff resources that can be required to be responsible about data. On the other hand, for those organizations receiving funds from the European Union or residing in the EU or the UK (where despite Brexit, organizations will likely need to comply with EU Privacy Regulations), the new regulations mean that NOT being responsible about data may result in hefty fines and potential legal action.

Going from policy to implementation is a challenge that involves both capacity strengthening in this new area as well as behavior change and a better understanding of emerging concepts and multiple legal frameworks. The nuances by country, organization and donor make the process difficult to get a handle on.

Because staff and management are already overburdened, the trick to developing and implementing Responsible Data Policies and Practice will be finding ways to strengthen staff capacity and to provide guidance in ways that do not feel overwhelmingly complex. Though each situation will be different, finding ongoing ways to share resources and experiences so that we can advance as a sector will be one key step for moving forward.

This post was written with input from Maliha Khan, Independent Consultant; Emily Tomkys, Oxfam GB; Siobhan Green, Sonjara and Zara Rahman, The Engine Room.