All posts by Linda Raftree

About Linda Raftree

Linda Raftree supports strategy, program design, research, and technology in international development initiatives. She co-founded MERL Tech in 2014 and Kurante in 2013. Linda advises Girl Effect on digital safety, security and privacy and supports the organization with research and strategy. She is involved in developing responsible data policies for both Catholic Relief Services and USAID. Since 2011, she has been advising The Rockefeller Foundation’s Evaluation Office on the use of ICTs in monitoring and evaluation. Prior to becoming an independent consultant, Linda worked for 16 years with Plan International. Linda runs Technology Salons in New York City and advocates for ethical approaches to using ICTs and digital data in the humanitarian and development space. She is the co-author of several publications on technology and development, including Emerging Opportunities: Monitoring and Evaluation in a Tech-Enabled World with Michael Bamberger. Linda blogs at Wait… What? and tweets as @meowtree. See Linda’s full bio on LinkedIn.

MERL Tech London session ideas due this Friday, Nov 10th!

MERL Tech London is coming up on March 19-20, 2018. Session ideas are due by Friday, November 10th, so be sure to get yours in this week!!

Submission Deadline: Friday, November 10, 2017.

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. You will hear back from us in early December and, if selected, you will be asked to submit an updated and final session title, summary and outline by January 19th, 2018.

Topics we’re looking for:

  • Case studies: Sharing end-to-end experiences/learning from a MERL Tech process
  • MERL Tech 101: How-to use a MERL Tech tool or approach
  • Methods & Frameworks: Sharing/developing/discussing methods and frameworks for MERL Tech
  • Data: Big, large, small, quant, qual, real-time, online-offline, approaches, quality, etc.
  • Innovations: Brand new, untested technologies or approaches and their application to MERL(Tech)
  • Debates: Lively discussions, big picture conundrums, thorny questions, contentious topics related to MERL Tech
  • Management: People, organizations, partners, capacity strengthening, adaptive management, change processes related to MERL Tech
  • Evaluating MERL Tech: comparisons or learnings about MERL Tech tools/approaches and technology in development processes
  • Failures: What hasn’t worked and why, and what can be learned from this?
  • Demo Tables: to share MERL Tech approaches, tools, and technologies
  • Other topics we may have missed!

To get you thinking — take a look at past agendas from MERL Tech London and MERL Tech DC, as well as MERL Tech News.

Submit your session idea now!

We’re actively seeking a diverse (in every way) set of MERL Tech practitioners to facilitate every session. We encourage organizations to open this opportunity to colleagues and partners working outside of headquarters and to support their participation. (And please, no all-male panels!)

MERL Tech is dedicated to creating a safe, inclusive, welcoming and harassment-free experience for everyone. Please review our Code of Conduct. Session submissions are reviewed by our steering committee.

Submit your session ideas by November 10th!

If you have any questions about your submission idea, please contact Linda Raftree.

(Registration is also open!)

MERL Tech Round Up | November 1, 2017

It’s time for our second MERL Tech Round Up, a monthly compilation of MERL Tech News!

On the MERL Tech Blog:

We’ve been posting session summaries from MERL Tech DC. Here are some posts you may have missed in October:

Stuff we’re reading/watching/bookmarking:

There’s quite a bit to learn both in our “MERL / Tech” sector and in related sectors whose experiences are relevant to MERL Tech. Some thought-provoking pieces here:

Events:

Jobs

Head over to ICT4DJobs for a ton of tech-related jobs. Here are some interesting ones for folks in the MERL Tech space:

If you’re not already signed up to the Pelican Initiative: Platform for Evidence-based Learning & Communication for Social Change, we recommend doing that. You will find all kinds of MERL and MERL Tech-related jobs and advice. (Note: the Platform is an extremely active forum, so you may want to adjust your settings to receive weekly compilations).

Tag us on Twitter using #MERLTech if you have resources, events, or other news you’d like us to include here!

Don’t forget to submit your session ideas for MERL Tech London by November 10th!

Submit your session ideas for MERL Tech London by Nov 10th!

MERL Tech London

Please submit a session idea, register to attend, or reserve a demo table for MERL Tech London, on March 19-20, 2018, for in-depth sharing and exploration of what’s happening across the multidisciplinary monitoring, evaluation, research and learning field.

Building on MERL Tech London 2017, we will engage 200 practitioners from across the development and technology ecosystems for a two-day conference seeking to turn the theories of MERL technology into effective practice that delivers real insight and learning in our sector.

MERL Tech London 2018

Digital data and new media and information technologies are changing MERL practices. The past five years have seen technology-enabled MERL growing by leaps and bounds, including:

  • Adaptive management and ‘developmental evaluation’
  • Faster, higher quality data collection
  • Remote data gathering through sensors and self-reporting by mobile
  • Big Data and social media analytics
  • Story-triggered methodologies

Alongside these new initiatives, we are seeing increasing documentation and assessment of technology-enabled MERL initiatives. Good practice guidelines and new frameworks are emerging and agency-level efforts are making new initiatives easier to start, build on and improve.

The swarm of ethical questions related to these new methods and approaches has spurred greater attention to areas such as responsible data practice and the development of policies, guidelines and minimum ethical frameworks and standards for digital data.

Please submit a session idea, register to attend, or reserve a demo table for MERL Tech London to discuss all this and more! You’ll have the chance to meet, learn from, and debate with 150-200 of your MERL Tech peers, and to see live demos of new tools and approaches to MERL.

Submit Your Session Ideas Now!

Like previous conferences, MERL Tech London will be a highly participatory, community-driven event and we’re actively seeking practitioners in monitoring, evaluation, research, learning, data science and technology to facilitate every session.

Please submit your session ideas now. We are particularly interested in:

  • Case studies: Sharing end-to-end experiences/learning from a MERL Tech process
  • MERL Tech 101: How-to use a MERL Tech tool or approach
  • Methods & Frameworks: Sharing/developing/discussing methods and frameworks for MERL Tech
  • Data: Big, large, small, quant, qual, real-time, online-offline, approaches, quality, etc.
  • Innovations: Brand new, untested technologies or approaches and their application to MERL(Tech)
  • Debates: Lively discussions, big picture conundrums, thorny questions, contentious topics related to MERL Tech
  • Management: People, organizations, partners, capacity strengthening, adaptive management, change processes related to MERL Tech
  • Evaluating MERL Tech: comparisons or learnings about MERL Tech tools/approaches and technology in development processes
  • Failures: What hasn’t worked and why, and what can be learned from this?
  • Demo Tables: to share MERL Tech approaches, tools, and technologies
  • Other topics we may have missed!

Session Submission Deadline: Friday, November 10, 2017.

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. You will hear back from us in early December and, if selected, you will be asked to submit an updated and final session title, summary and outline by Friday, January 19th, 2018.

Register Now!

Please register to attend, or reserve a demo table for MERL Tech London 2018 to examine these trends with an exciting mix of educational keynotes, lightning talks, and group breakouts, including an evening Fail Festival reception to foster needed networking across sectors.

We are charging a modest fee to better allocate seats and we expect to sell out quickly again this year, so buy your tickets or demo tables now. Event proceeds will be used to cover event costs and to offer travel stipends for select participants implementing MERL Tech activities in developing countries.

MERL Tech Round Up | October 2, 2017

We’ll be experimenting with a monthly round-up of MERL Tech related content (bi-weekly if there’s enough to fill a post). Let us know if it’s useful! We aim to keep it manageable and varied, rather than a laundry list of every possible thing. The format, categories, and topics will evolve as we see how it goes and what the appetite is.

If you have anything you’d like to share or see featured, feel free to send it on over or post on Twitter using the #MERLTech hashtag.

On the MERL Tech Blog:

Big Data in Evaluation – Michael Bamberger discusses the future of development evaluation in the age of Big Data and ways to build bridges between evaluators and Big Data analysis. Rick Davies (Monitoring and Evaluation News) raises some great points in the comments (and Michael replies).

Experiences with Mobile case management for multi-dimensional accountability from Oxfam and SurveyCTO.

Thoughts on MERL Tech Maturity Models & Next Generation Transparency & Accountability from Megan Colner (Open Society Foundations) and Alison Miranda (Transparency and Accountability Initiative).

The best learning at MERL Tech DC came from sharing failures, from Ambika Samarthya-Howard (Praekelt.org).

We’ll be posting more MERL Tech DC summaries and wrap-up posts over the next month or two. We’re also gearing up for MERL Tech London coming up in March 2018. Stay tuned for more information on that.

Stuff we’re reading / watching:

New research (Making All Voices Count research team) on ICT-mediated citizen engagement. What makes it transformative? 

Opportunities and risks in emerging technologies, including white papers on Artificial Intelligence; Algorithmic Accountability; and Control of Personal Data (The Web Foundation).

Research on Privacy, Security, and Digital Inequality: How Technology Experiences and Resources Vary by Socioeconomic Status, Race, and Ethnicity in the United States from Mary Madden (Data & Society).

Tools, frameworks and guidance we’re bookmarking:

A framework for evaluating inclusive technology, technology for social impact and ICT4D programming (SIMLab) and an example of its application. The framework is open source, so you can use and adapt it!

A survey tool and guidance for assessing women’s ICT access and use (FHI 360’s mSTAR project). (Webinar coming up on Oct 10th)

Series on data management (DAI) covering 1) planning and collecting data; 2) managing and storing data; and 3) getting value and use out of the data that’s collected through analysis and visualization.

Events and training:

Webinar on using ICT in monitoring and evaluation of education programming for refugee populations (USAID and INEE). Recording and presentations from the Sep 28th event here.

Webinar on assessing women’s ICT access and use, Oct 10th (NetHope, USAID and mSTAR/FHI 360).

Let us know of upcoming events we should feature.

Jobs:

Send us MERL Tech-related job vacancies, consultancy openings, and RFPs, and we’ll help spread the word.

MERL Tech DC Conference wrap up

Over 300 MERL Tech practitioners came together in Washington DC the first week of September for MERL Tech DC.

Kathy Newcomer, American Evaluation Association President, gives her opening talk on Day 2.
Blockchain was one of the most popular sessions.

Core topic areas included organizational change and capacity; evaluation of MERL Tech and ICT4D; big data, small data and data analytics; tech tools to support qualitative methods; new and emerging technologies with a potential role in MERL; inclusion and ways tech can support ‘downward’ accountability; practical sessions on tools and methods; and community building in the MERL Tech sector.

Check out InSTEDD’s fantastic recap of the event in pictures and Tweets.

What does “MERL Tech Maturity” look like?

In plenary, groups worked together to discuss “MERL Tech Maturity Models” – in other words, what are the characteristics of an organization that is fully mature when it comes to MERL Tech? People also spent some time thinking about where their organizations fit on the “MERL Tech Maturity” scale: from brand new or less experienced to fully mature. (We’ll share more about this in a future post).

The Data Turnpike was voted the best depiction of a Maturity Model.

As always, there was plenty of socializing with old and new friends and collaborators too!


Stay tuned for session summaries and more, coming up over the next several weeks here on MERL Tech News!

Buckets of data for MERL

by Linda Raftree, Independent Consultant and MERL Tech Organizer

It can be overwhelming to get your head around all the different kinds of data and the various approaches to collecting or finding data for development and humanitarian monitoring, evaluation, research and learning (MERL).

Though there are many ways of categorizing data, lately I find myself conceptually organizing data streams into four general buckets when thinking about MERL in the aid and development space:

  1. ‘Traditional’ data. How we’ve been doing things for (pretty much) ever. Researchers, evaluators and/or enumerators are in relative control of the process. They design a specific questionnaire or a data gathering process and go out and collect qualitative or quantitative data; they send out a survey and request feedback; they do focus group discussions or interviews; or they collect data on paper and eventually digitize the data for analysis and decision-making. Increasingly, we’re using digital tools for all of these processes, but they are still quite traditional approaches (and there is nothing wrong with traditional!).
  2. ‘Found’ data. The Internet, digital data and open data have made it lots easier to find, share, and re-use datasets collected by others, whether this is internally in our own organizations, with partners or just in general. These tend to be datasets collected in traditional ways, such as government or agency data sets. In cases where the datasets are digitized and have proper descriptions, clear provenance, consent has been obtained for use/re-use, and care has been taken to de-identify them, they can eliminate the need to collect the same data over again. Data hubs are springing up that aim to collect and organize these data sets to make them easier to find and use.
  3. ‘Seamless’ data. Development and humanitarian agencies are increasingly using digital applications and platforms in their work — whether bespoke or commercially available ones. Data generated by users of these platforms can provide insights that help answer specific questions about their behaviors, and the data is not limited to quantitative data. This data is normally used to improve applications and platform experiences, interfaces, content, etc. but it can also provide clues into a host of other online and offline behaviors, including knowledge, attitudes, and practices. One cautionary note is that because this data is collected seamlessly, users of these tools and platforms may not realize that they are generating data or understand the degree to which their behaviors are being tracked and used for MERL purposes (even if they’ve checked “I agree” to the terms and conditions). This has big implications for privacy that organizations should think about, especially as new regulations are being developed, such as the EU’s General Data Protection Regulation (GDPR). The commercial sector is great at this type of data analysis, but the development sector is only just starting to get more sophisticated at it.
  4. ‘Big’ data. In addition to data generated ‘seamlessly’ by platforms and applications, there are also ‘big data’ and data that exists on the Internet that can be ‘harvested’ if one only knows how. The term ‘big data’ describes the application of analytical techniques to search, aggregate, and cross-reference large data sets in order to develop intelligence and insights. (See this post for a good overview of big data and some of the associated challenges and concerns). Data harvesting is a term used for the process of finding and turning ‘unstructured’ content (message boards, a webpage, a PDF file, Tweets, videos, comments) into ‘semi-structured’ data so that it can then be analyzed. (Estimates are that 90 percent of the data on the Internet exists as unstructured content). A rough sketch of what this kind of harvesting can look like in practice follows this list. Currently, big data seems to be more apt for predictive modeling than for looking backward at how well a program performed or what impact it had. Development and humanitarian organizations (self included) are only just starting to better understand concepts around big data and how it might be used for MERL. (This is a useful primer).
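
To make the ‘harvesting’ idea in bucket 4 a bit more concrete, below is a minimal, hypothetical sketch (in Python) of what turning unstructured feedback into semi-structured records might look like. Everything in it is invented for illustration: the messages, the keyword lists, and the field names. A real effort would also need to handle consent, de-identification, multiple languages, and data quality far more carefully.

# Minimal, hypothetical sketch: turning 'unstructured' text snippets
# (e.g., SMS feedback or public social media comments) into
# 'semi-structured' records that can be counted, filtered, and analyzed.
# All messages, keywords, and field names are invented for illustration.
import csv
import re
from datetime import datetime

# Pretend these were pulled from a feedback channel or a public page.
RAW_ITEMS = [
    {"source": "sms", "received": "2017-09-02 14:05",
     "text": "The water point in our village has been broken for 2 weeks"},
    {"source": "facebook", "received": "2017-09-03 09:30",
     "text": "Thank you for the training, very useful for our clinic staff!"},
    {"source": "sms", "received": "2017-09-04 18:12",
     "text": "No teacher at the school again today"},
]

# Very rough topic keywords -- a real project would want a proper taxonomy
# or a trained classifier, plus human review.
TOPIC_KEYWORDS = {
    "water": ["water", "borehole", "pump"],
    "education": ["school", "teacher", "classroom"],
    "health": ["clinic", "nurse", "medicine"],
}

NEGATIVE_WORDS = ["broken", "no ", "not ", "never", "bad"]


def tag_topics(text):
    """Return the topics whose keywords appear in the text."""
    lowered = text.lower()
    return [topic for topic, words in TOPIC_KEYWORDS.items()
            if any(word in lowered for word in words)]


def rough_sentiment(text):
    """Crude negative/other flag based on keyword matching only."""
    lowered = text.lower()
    return "negative" if any(word in lowered for word in NEGATIVE_WORDS) else "neutral/positive"


def to_record(item):
    """Turn one raw item into a semi-structured row."""
    return {
        "received": datetime.strptime(item["received"], "%Y-%m-%d %H:%M").isoformat(),
        "source": item["source"],
        "topics": ";".join(tag_topics(item["text"])) or "untagged",
        "sentiment": rough_sentiment(item["text"]),
        "word_count": len(re.findall(r"\w+", item["text"])),
    }


if __name__ == "__main__":
    records = [to_record(item) for item in RAW_ITEMS]
    with open("harvested_feedback.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(records[0].keys()))
        writer.writeheader()
        writer.writerows(records)
    print("Wrote", len(records), "semi-structured records")

Rows like these can then be counted, filtered by source or topic, and combined with other program data; that is where the actual analysis starts, and where questions about consent and privacy need to be settled before anything else.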

Thinking about these four buckets of data can help MERL practitioners to identify data sources and how they might complement one another in a MERL plan. Categorizing them as such can also help to map out how the different kinds of data will be responsibly collected/found/harvested, stored, shared, used, and maintained/retained/destroyed. Each type of data also has certain implications in terms of privacy, consent and use/re-use and how it is stored and protected. Planning for the use of different data sources and types can also help organizations choose the data management systems needed and identify the resources, capacities and skill sets required (or needing to be acquired) for modern MERL.

Organizations and evaluators are increasingly comfortable using mobile and/or tablets to do traditional data gathering, but they often are not using ‘found’ datasets. This may be because these datasets are not very ‘find-able,’ because organizations are not creating them, re-using data is not a common practice for them, the data are of questionable quality/integrity, there are no descriptors, or a variety of other reasons.

The use of ‘seamless’ data is something that development and humanitarian agencies might want to get better at. Even though large swaths of the populations that we work with are not yet online, this is changing. And if we are using digital tools and applications in our work, we shouldn’t let that data go to waste if it can help us improve our services or better understand the impact and value of the programs we are implementing. (At the very least, we had better understand what seamless data the tools, applications and platforms we’re using are collecting so that we can manage our users’ data privacy and security and ensure these are not violated by third parties!)

Big data is also new to the development sector, and there may be good reason it is not yet widely used. Many of the populations we are working with are not producing much data — though this is also changing as digital financial services and mobile phone use have become almost universal and the use of smartphones is on the rise. Normally organizations require new knowledge, skills, partnerships and tools to access and use existing big data sets or to do any data harvesting. Some say that big data along with ‘seamless’ data will one day replace our current form of MERL. As artificial intelligence and machine learning advance, who knows… (and it’s not only MERL practitioners who will be out of a job – but that’s a conversation for another time!)

Not every organization needs to be using all four of these kinds of data, but we should at least be aware that they are out there and consider whether they are of use to our MERL efforts, depending on what our programs look like, who we are working with, and what kind of MERL we are tasked with.

I’m curious how other people conceptualize their buckets of data, and where I’ve missed something or defined these buckets erroneously…. Thoughts?

Better or different or both?

by Linda Raftree, Independent Consultant and MERL Tech Organizer

As we delve into why, when, where, if, and how to incorporate various types of technology and digital data tools and approaches into monitoring, evaluation, research and learning (MERL), it can be helpful to think about MERL technologies from two angles:

  1. Doing our work better:  How can new technologies and approaches help us do what we’ve always done — the things that we know are working and having an impact — but do them better? (E.g., faster, with higher quality, more efficiently, less expensively, with greater reach or more inclusion of different voices)
  2. Doing our work differently:  What brand new, previously unthinkable things can be done because of new technologies and approaches? How might these totally new ideas contribute positively to our work or push us to work in an entirely different way?

Sometimes these two things happen simultaneously and sometimes they do not. Some organizations are better at Thing 1, and others are set up well to explore Thing 2. Not all organizations need to feel pressured into doing Thing 2, however, and sometimes it can be a distraction from Thing 1. Some organizations may be better off letting early adopters focus on Thing 2 and investing their own budgets and energy in Thing 1 until innovations have been tried and tested by the early adopters. Organizations may also have staff members or teams working on both Thing 1 and Thing 2 separately. Others may conceptualize this as a process or pathway moving from Thing 2 to Thing 1, where Thing 2 (once tested and evaluated) is a pipeline into Thing 1.

Here are some potentially useful past discussions on the topic of innovations within development organizations that flesh out some of these thoughts:

Many of the new tools and approaches that were considered experimental 10 years ago have moved from being “brand new and innovative” to simply “helping us do what we’ve always done.” Some of these earlier “innovations” are related to digital data and data collection and processing, and they help us do better monitoring, evaluation and research.

On the flip side, monitoring, evaluation and research have played a key role in helping organizations and the sector overall learn more about how, where, when, why and in what contexts these different tools and approaches (including digital data for MERL) can be adopted. MERL on ICT4D and Digital Development approaches can help calibrate the “hype cycle,” weed out the shiny new tools and approaches that are actually not very effective or useful to the sector, and highlight those that cause harm or put people at risk.

There are always going to be new tools and approaches that emerge. Humanitarian and development organizations, then, need to think strategically about what kind of organization they are (or want to be) and where they fit on the MERL Tech continuum between Thing 1 and Thing 2.

What capacities does an organization have for working on Thing 2 (brand new and different)? When and for how long should an organization focus on Thing 1, building on what it knows is working or could work, while keeping an eye on the early adopters who are working on Thing 2? When does an organization have enough “proof” to start adopting new tools and approaches that seem to add value? How are these new tools and approaches being monitored, evaluated and researched to improve our use of them?

It’s difficult for widespread adoption to happen in the development space, where there is normally limited time and capacity for failure or for experimentation, without solid MERL. And even with “solid MERL” it can be difficult for organizations to adapt and change due to a multitude of factors, both internal and external.

I’m looking forward to September’s MERL Tech Conference in DC where we have some sessions that explore “the MERL on ICT4MERL?” and others that examine aspects of organizational change related to adopting newer MERL Tech tools and approaches.

(Register here if you haven’t already!)

MERL Tech DC: Session ideas due by May 12th!

Don’t forget to sign up to present, register to attend, or reserve a demo table for MERL Tech DC on September 7-8, 2017 at FHI 360 in Washington, DC.

Submit Your Session Ideas by Friday, May 12th!

Like previous conferences, MERL Tech DC will be a highly participatory, community-driven event and we’re actively seeking practitioners in monitoring, evaluation, research, learning, data science and technology to facilitate every session.

Please submit your session ideas now. We are particularly interested in:

  • Discussions around good practice and evidence-based review
  • Workshops with practical, hands-on exercises
  • Discussion and sharing on how to address methodological aspects such as rigor, bias, and construct validity in MERL Tech approaches
  • Future-focused thought provoking ideas and examples
  • Conversations about ethics, inclusion and responsible policy and practice in MERL Tech

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. You will hear back from us in early June and, if selected, you will be asked to submit the final session title, summary and outline by June 30.

If you have questions or are unsure about a submission idea, please get in touch with Linda Raftree.

Submit your ideas here! 

Six priorities for the MERL Tech community

by Linda Raftree, MERL Tech Co-organizer

Participants at the London MERL Tech conference in February 2017 crowdsourced a MERL Tech History timeline (which I’ve shared in this post). Building on that, we projected out our hopes for a bright MERL Tech Future. Then we prioritized our top goals as a group (see below). We’ll aim to continue building on these as a sector going forward and would love more thoughts on them.

  1. Figure out how to be responsible with digital data and not put people, communities, and vulnerable groups at risk. Subtopics included: share data with others responsibly without harming anyone; agree on a minimum ethical standard for MERL and data collection; agree on principles for minimizing the data we collect so that only essential data is captured; develop duty of care principles for MERL Tech and digital data; develop ethical data practices and policies at organization levels; shift the power balance so that digital data convenience costs are paid by orgs, not affected populations; develop a set of quality standards for evaluation using tech.
  2. Increase data literacy across the sector, at individual level and within the various communities where we are working.
  3. Overcome the extraction challenge and move towards true downward accountability. Do good user/human centered design and planning together, be ‘leaner’ and more user-focused at all stages of planning and MERL. Subtopics included: development of more participatory MERL methods; bringing consensus decision-making to participatory MERL; realizing the potential of tech to shift power and knowledge hierarchies; greater use of appreciative inquiry in participatory MERL; more relevant use of tech in MERL — less data, more empowering, less extractive, more used.
  4. Integrate MERL into our daily operations to avoid the thinking that it is something ‘separate’; move it to the core of operations management and make sure we have the necessary funds to do so; demystify it and make it normal! Subtopics included: we’ve stopped calling “MERL” a “thing” and the norm is to talk about monitoring as part of operations; data use is enabling real-time coordination; no more paper-based surveys.
  5. Improve coordination and interoperability as related to data and tools, both between organizations and within organizations. Subtopics included: more interoperability; more data-sharing platforms; all data with suitable anonymization is open; universal exchange of machine readable M&E Data (e.g., standards? IATI? a platform?); sector-wide IATI compliance; tech solutions that enable sharing of qualitative and quantitative data; systems of use across agencies; e.g., to refer feedback; coordination; organizations sharing more data; interoperability of tools. It was emphasized that donors should incentivize this and ensure that there are resources to manage it.
  6. Enhance user-driven and accessible tech that supports impact and increases efficiency, that is open source and can be built on, and that allows for interoperability and consistent systems of measurement and evaluation approaches.

In order to move on these priorities, participants felt we needed better coordination and sharing of tools and lessons among the NGO community. This could be through a platform where different innovations and tools are appropriately documented so that donors and organizations can more easily find good practice, useful tools and get a sense of ‘what’s out there’ and what it’s being used for. This might help us to focus on implementing what is working where, when, why and how in M&E (based on a particular kind of context) rather than re-inventing the wheel and endlessly pushing for new tools.

Participants also wanted to see MERL Tech as a community that is collaborating to shape the field and to ensure that we are a sector that listens, learns, and adopts good practices. They suggested hosting MERL Tech events and conferences in ‘the South’ and building out the MERL Tech community to include greater representation of users and developers in order to achieve optimal tools and management processes.

What do you think – have we covered it all? What’s missing?

Technology in MERL: an approximate history

by Linda Raftree, MERL Tech co-organizer.

At MERL Tech London, Maliha Khan led us in an exercise to map out our shared history of MERL Tech. Following that, we did some prioritizing around potential next steps for the sector (which I’ll cover in a follow-up post).

She had us each write down 1) when we first got involved in something related to MERL Tech, and 2) what we would identify as a defining moment or key event, either in the wider field or in terms of our own experiences with MERL Tech.

The results were a crowdsourced MERL Tech Timeline on the wall.

An approximate history of tech in MERL 

We discussed the general flow of how technology had come to merge with MERL in humanitarian and development work over the past 20 years. The purpose was not to debate about exact dates, but to get a sense of how the field and community had emerged and how participants had experienced its ebbs and flows over time.

Some highlights:

  • 1996 digital photos being used in community-led research
  • 1998 mobile phones start to creep more and more into our work
  • 2000 the rise of SMS
  • 2001 spread of mobile phone use among development/aid workers, especially when disasters hit
  • 2003 Mobile Money comes onto the scene
  • 2004 enter smart phones; Asian tsunami happens and illustrates need for greater collaboration
  • 2005 increased focus on smartphones; enter Google Maps
  • 2008 IATI, Hans Rosling interactive data talk/data visualization
  • 2009 ODK, FrontlineSMS, more and more Mobile Money and smart phones, open data; global ICT4D conference
  • 2010 Haiti earthquake – health, GIS and infrastructure data collected at large scale, SMS reporting and mapping
  • 2011 FrontlineSMS’ data integrity guide
  • 2012 introduction and spread of cloud services in our work; more and more mapping/GIS in humanitarian and development work
  • 2013 more focus and funding from donors for tech-enabled work, more awareness and work on data standards and protocols, more use of tablets for data collection, bitcoin and blockchain enter the humanitarian/development scene; big data
  • 2014 landscape report on use of ICTs for M&E; MERL Tech conference starts to come together; Responsible Data Forum; U-Report and feedback loops; thinking about SDGs and Data revolution
  • 2015 Ebola crisis leads to a different approach to data, big data concerns and ‘big data disasters’, awareness of the need for much improved coordination on tech and digital data; World Bank Digital Dividends report; Oxfam Responsible Data policy
  • 2016 real-time data and feedback loops are better unpacked and starting to be more integrated, adaptive management focus, greater awareness of need of interoperability, concerns about digital data privacy and security
  • 2017 MERL Tech London and the coming-together of the related community

What do you think? What’s missing? We’d love to have a more complete and accurate timeline at some point….