By Mala Kumar, GitHub Social Impact, Open Source for Good
I lead a program on the GitHub Social Impact team called Open Source for Good — detailed in a previous MERL Tech post and (back when mass gatherings in large rooms were routine) at a lightning talk at the MERL Tech DC conference last year.
Before joining GitHub, I spent a decade wandering around the world designing, managing, implementing, and deploying tech for international development (ICT4D) software products. In my career, I found open source in ICT4D tends to be a polarizing topic, and often devoid of specific arguments. To advance conversations on the challenges, barriers, and opportunities of open source for social good, my program at GitHub led a year-long research project and produced a culminating report, which you can download here.
One of the hypotheses I posed at the MERL Tech conference last year, and that our research subsequently confirmed, is that IT departments and ICT4D practitioners in the social sector* have relatively less budgetary decision-making power than their counterparts at corporate IT companies. This makes it hard for IT and ICT4D staff to justify the use of open source in their work.
In the past year, Open Source for Good has solidified its strategy around helping the social sector more effectively engage with open source. To that aim, we started the MERL Center, which brings together open source experts and MERL practitioners to create resources to help medium and large social sector organizations understand if, how, and when to use open source in their MERL solutions.**
With the world heading into unprecedented economic and social change and uncertainty, we’re more committed than ever at GitHub Social Impact to helping the social sector effectively use open source and to building on the digital ecosystem that already exists.
Thanks to our wonderful working group members, the MERL Center has identified its target audiences, fleshed out the goals of the Center, set up a basic content production process, and is working on a few initial contributions to its two working groups: Case Studies and Beginner’s Guides. I’ll announce more details in the coming months, but I am also excited to announce that we’re committing funds to get a MERL Center public-facing website live to properly showcase the materials the MERL Center produces and how open source can support technology-enabled MERL activities and approaches.
As we ramp up, we’re now inviting more people to join the MERL Center working groups! If you are a MERL practitioner with an interest in or knowledge of open source, or you’re an open source expert with an interest in and knowledge of MERL, we’d love to have you! Please feel free to reach out to me with a brief introduction to yourself and your work, and I’ll help you get on-boarded. We’re excited to have you work with us!
*We define the “social sector” as any organization or company that primarily focuses on social good causes.
Just about everyone I know in the ICT4D and MERL communities has interacted with, presented, or created a chart, dashboard, infographic, or other data visualization. We’ve also all seen charts that mislead, confuse, or otherwise fall short of making information more accessible.
The goal of the Data Visualization Society is to collect and establish best practices in data viz, fostering a community that supports members as they grow and develop data visualization skills. With more than 11.5K members from 123 countries on our first birthday, the society has grown faster than any of the founders imagined.
There are three reasons you should join the Data Visualization Society to improve your data visualizations in international development.
Self-service data visualization tools are everywhere, but that doesn’t mean we’re always building usable charts and graphs.
Just about anyone can make a chart if they have a table of data, thanks to the wide range of tools out there (Flourish, RAWgraphs, Datawrapper, Tableau, PowerBI…to name a few). Without a knowledge of data viz fundamentals though, it’s easy to use these tools to create confusing and misleading graphs.
A recent study on user-designed dashboards in DHIS2 (a commonly used data management and analysis platform in global health) found that “while the technical flexibility of [DHIS2] has been taken advantage of by providing platform customization training…the quality of the dashboards created face numerous challenges.” (Aprisa & Sebo, 2020).
The researchers used a framework from Stephen Few to evaluate the frequency of five different kinds of ‘dashboard problems’ on 80 user-designed sample dashboards. The five problem ‘types’ included: context, dashboard layout, visualization technique, logical, and data quality.
Of the 80 dashboards evaluated, 69 (83.1%) had at least one visualization technique problem (Aprisa & Sebo, 2020). Many of the examples shared in the paper could be easily addressed, like transforming the pie chart made of slices representing points in time into a line graph.
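To make the fix concrete, here is a minimal sketch in Python with matplotlib of the transformation described above: the same values rendered first as a pie chart of time points (misleading) and then as a line graph (clear). The months and visit counts are made-up illustrative data, not figures from the study.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
clinic_visits = [120, 135, 128, 150, 162, 158]  # hypothetical values

fig, (ax_pie, ax_line) = plt.subplots(1, 2, figsize=(10, 4))

# The problematic original: points in time shown as pie slices,
# which hides the trend the reader actually needs to see
ax_pie.pie(clinic_visits, labels=months)
ax_pie.set_title("Misleading: time as pie slices")

# The easy fix: the same values plotted as a line over time
ax_line.plot(months, clinic_visits, marker="o")
ax_line.set_title("Clearer: a line graph")
ax_line.set_ylabel("Clinic visits")

plt.tight_layout()
plt.savefig("pie_vs_line.png")
```

The point is not the tool: the same one-minute change is possible in Excel, Tableau, or DHIS2 itself once you recognize that slices representing time points belong on a time axis.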
With so many tools at our fingertips, how can we use them to develop meaningful, impactful charts and interactive dashboards? Learning the fundamentals of data visualization is an excellent place to start, and DVS offers a free-to-join professional home to learn those fundamentals.
Many of the communities that exist around data visualization are focused on specific tools, which may not be relevant or accessible for your organization.
In ICT4D, we often have to be scrappy and flexible. That means learning how to work with open source tools, hack charts in Excel, and often make decisions about what tool to use driven as much by resource availability as functionality.
There are many great tool-specific communities out there: TUGs, PUGs, RLadies, Stack Overflow, and more. DVS emerged out of a need to connect people looking to share best practices across the many disciplines doing data viz: journalists, evaluators, developers, graphic designers, and more. That means not being limited to one tool or platform, so we can look for what fits a given project or audience.
After joining DVS, you’ll receive an invite to the Society’s Slack, a community “workspace” with channels on different topics and for connecting different groups of people within the community. You can ask questions about any data viz tool in the #topic-tools channel, and explore emerging and established platforms with honest feedback on how other members have used them in their work.
Data visualization training often means one-off workshops. Attendees leave enthusiastic, but then don’t have colleagues to rely on when they run into new questions or get stuck.
Data visualization isn’t consistently taught as a foundational skill for public health or development professionals.
In university, there may be a few modules within a statistics or evaluation class, but seldom are there dedicated, semester-long classes on visualization; those are reserved for computer science and analytics programs (though this seems to be slowly changing!). Continuing education in data viz usually means short workshops, not long-term mentoring relationships.
So what happens when people are asked to “figure it out” on the job? Or attend a two-day workshop and come away as the resident data viz expert?
Within DVS, our leadership and our members step up to answer questions and be that coach for people at all stages of learning data visualization. We even have a dedicated feedback space within Slack to share examples of data viz work in progress and get feedback.
DVS also enables informal connections on a wide range of topics. Go to #share-critique to post work-in-progress visualizations and seek feedback from the community. We also host quarterly challenges where you can get hands-on practice with provided data sets to develop your data viz skills, and we have plans to launch a formal mentorship program in 2020.
Join DVS today to get its benefits – members from Africa, Asia, and other underrepresented areas are especially encouraged to join us now!
Have any questions? Or ideas on ways DVS can support our global membership base? Find me on Twitter – my DMs are open.
The panel “Technology Adoption and Innovation in the Industry: How to Bridge the International Development Industry with Technology Solutions” proved to be an engaging conversation between four technology and international development practitioners. Admittedly, as someone who comes from more of a development background, some parts of this conversation were hard to follow. However, three takeaways stuck out to me after hearing the insights and experiences of Aasit Nanavati, a Director of DevResults, Joel Selanikio, CEO and Co-Founder of Magpi, Nancy Hawa, a Software Engineer from DevResults, and Mike Klein, a Director from IMC Worldwide and the panel moderator.
“Innovation isn’t always creation.”
The fact that organizations often think about innovation and creation as synonymous actually creates barriers to entry for tech in the development market. When asked to speak about these barriers, all three panelists mentioned that clients oftentimes want highly customized tools when they could achieve their goals with what already exists in the market. Nanavati (whose quote titles this section) followed his point about innovation not always requiring creation by asserting that innovation is sometimes just a matter of implementing existing tools really well. Hawa added to this idea by arguing that sometimes development practitioners and organizations should settle for something that’s close enough to what they want in order to save money and resources. When facing clients’ unrealistic expectations about original creation, consultancies should explain that the super-customized system the client asks for may actually be unusable because of the level of complexity this customization would introduce. While this may be hard to admit, communicating with candor is better than the alternative — selling a bad product for the sake of expanding business.
An audience member asked how one could convince development clients to accept the non-customized software. In response, Hawa suggested that consultancies talk about software in a way that non-tech clients understand. Say something along the lines of, “Why recreate Microsoft Excel or Gmail?” Later in the discussion, Selanikio offered another viewpoint. He never tries to persuade clients to use Magpi. Rather, he does business with those who see the value of Magpi for their needs. This method may be effective in avoiding a tense situation between the service provider and client when the former is unable to meet the unrealistic demands of the latter.
We need to close the gap in understanding between the tech and development fields.
Although not explicitly stated, one main conclusion that can be drawn from the panel is that a critical barrier keeping technology from effectively penetrating development is miscommunication and misunderstanding between actors from the two fields. By learning how to communicate better about the technology’s true capacity, clients’ unrealistic expectations, and the failed initiatives that often result from the mismatch between the two, future failures-in-the-making can be mitigated. Interestingly, all three panelists are, in themselves, bridges between these two fields, as they were once development implementers before turning to the tech field. Nanavati and Selanikio used to work in the public health sphere in epidemiology, and Hawa was a special education teacher. Since the panelists were once in their clients’ positions, they better understand the problems their clients face and reflect this understanding in the useful tech they develop. Not all of us have expertise in both fields. However, we must strive to understand and accept the viewpoints of each other to effectively incorporate technology in development.
Grant funding has its limitations.
This is not to say that you cannot produce good tech outputs with grant funding. However, using donations and grants to fund the research and development of your product may result in something that caters to the funders’ desires rather than the needs of the clients you aim to work with. Selanikio, while very grateful to the initial funders of Magpi, found that once the company began to grow, grants as a means of funding no longer worked for the direction that he wanted to go. As actors in the international development sphere, the majority of us are mission-driven, so when the funding streams hinder you from following that mission, then it may be worth considering other options. For Magpi, this involved having both a free and paid version of its platform. Oftentimes, clients transition from the free to paid version and are willing to pay the fee when Magpi proves to be the software that they need. Creative tech solutions require creative ways to fund them in order to keep their integrity.
Technology can greatly aid development practitioners to make a positive impact in the field. However, using it effectively requires that all those involved speak candidly about the capacity of the tech the practitioner wants to employ and set realistic expectations. Each panelist offered level-headed advice on how to navigate these relationships but remained optimistic about the role of tech in development.
Throughout my life, I’ve heard women grumble about using technology—from my mom, from friends in school, and from work colleagues—yet these are highly educated, often extremely logical thinkers who excel at, well, Excel!
The irony of the situation has been troubling me in the past few months. Why? Because there is a clear contrast between the attention paid to the benefits of empowering women and girls through technology in low- and middle-income countries and the attention paid to empowering women and girls through technology in high-income environments.
As an international development community, we spend a lot of resources promoting the use of technology among women and girls within the communities where we work—with good results. And yet, as a community of women development practitioners, we are failing to embrace technology ourselves. The gender gap in science, technology, engineering, and mathematics (STEM) exists around the world, and society continues to fail women and girls by not expecting them to know much about technical matters. This plays out in our day-to-day work in the monitoring, evaluation, research, and learning (MERL) sector. Whether it’s learning new software to improve our results monitoring or using new mobile tools in the field, there seems to be a hesitance, and lack of confidence, often accompanied by a self-deprecation that our male counterparts lack.
What is holding back women from embracing technology in our own work, even as we tout it for others in the field? These questions motivated me to take the topic to a broader audience at the recent MERLTech Conference in Washington, D.C.
Panelists discuss their own experiences as women working in the tech space. From left to right Dr. Patty Mecheal (Co-founder and Policy Lead, HealthEnabled), Carmen Tedesco (author), Jaclyn Carlsen (Policy Advisor, Development Informatics team, USAID), Priyanka Pathak (Principal, Samaj Studio).
But first, a bit of history.
How Did We Get Here?
In her article from the Center for Media Literacy, Margaret Benston explains: “In our society, boys and men are expected to learn about machines, tools and how things work. In addition, they absorb, ideally, a ‘technological world view’ that grew up along with industrial society. Such a world view emphasizes objectivity, rationality, control over nature, and distance from human emotions. Conversely, girls and women are not expected to know much about technical matters. Instead, they are to be good at interpersonal relationships and to focus on people and emotion.”
She goes on to outline how those differences play out when technology is seen as a language, and one in which women “are silenced.” She writes: “It is very difficult for women to discuss technical problems, particularly experimental ones, with male peers—they either condescend or they want to perform whatever task is at issue themselves. In either case, asking a question or raising a problem in discussion is proof (if any is needed) that women don’t know what they are doing. Male students, needless to say, do not get this treatment.” An interesting literature review of gender differences in technology usage highlights a 2003 study that details how women are more anxious than men with IT utilization, which reduces their self-effectiveness and increases the perception that IT requires more effort.
I organized a panel at MERLTech, where we discussed our experiences as women in tech working in monitoring, evaluation, and learning (MEL), some of the data behind the gender gap in STEM, and why women struggle to embrace technology.
So many conference attendees echoed the above findings, mentioning that tech savvy is seen as smart, but smart is not seen as feminine. Many women hold misconceptions about what technology is. “Imposter syndrome,” or a fear of failure, has a real impact on women, and men’s reactions to women’s discomfort with tech—often mocking or dismissal—make many women even more hesitant to engage.
How Can We Fix This?
The Global Fund for Women states, “Access to technology, control of it, and the ability to create and shape it, is a fundamental issue of women’s human rights.” The Fund does this by, “help[ing] end the gender technology gap and empower[ing] women and girls to create innovative solutions to advance equality in their communities.”
Based on our discussion, here are five tips to help bridge the technology gender divide within our own field.
Be, or find, a mentor. Women will benefit from mentors and allies in this space, whether you plan to go into a tech field, or just want to ask a question without fear of looking uninformed.
Become a role model where you can. Find allies, men and women to help you build confidence.
Increase representation. When women can be brought to the table in discussions of tech, they should be. Slowly, this will permeate the culture of the organization. Having more women involved in the process of explaining and building tech in our companies will normalize the use of tech and take away some of the gendered dynamics that exist now.
Confront bias head-on. Addressing gender assumptions when they occur can be hard but pointing out the bias is not enough. Countering the action with a specific recommendation for course correction works best.
Build confidence. Personal development can play a role in building confidence, as can many of the points listed above. Confidence is the foundation for competence.
Both men and women should be aware of the history and social context behind women’s hesitation in the technology space. It is in all our best interests to be aware of this bias and find ways to help correct it. In taking some of these small steps, we can pave the way for increased confidence in the tech space for women.
Imagine this picture of data literacy at all levels of a programme:
You’ve got a “donor visit” to your programme. The country director and a project officer accompany the donor on a field trip, and they all visit a household within one of the project communities. Sitting around a cup of tea, they start a discussion about data. The household members explain what data had been collected and why. The country director explains what had surprised him or her in the data. And the donor describes how they made the decision to fund the programme based on the data. What if no one was surprised by the discussion, or by how the data was used, because they’d ALL seen and understood the data process?
Data literacy can mean lots of different things depending on who you are. It could mean knowing how to:
collect, analyze, and use data;
make sense of data and use it for management;
validate data and be critical of it;
tell good data from bad and know how credible it is;
ensure everyone is confident talking about data.
IS “IMPROVING DATA LITERACY FOR ALL LEVELS” A TOP PRIORITY FOR THE HUMANITARIAN SECTOR?
“YES” data literacy is a priority! Poor data literacy is still a huge stumbling block for many people in the sector and needs to be improved at ALL levels – from community households to field workers to senior management to donors. However, there are a few challenges in how this priority is worded.
IS “LITERACY” THE RIGHT WORD?
Suggesting someone is “illiterate” when it comes to data – that doesn’t sit well with most people. Many people in the sector – from senior HQ staff right down to beneficiaries of a humanitarian programme – are well-educated and successful. Not only are they literate, but most speak two or more languages! So to insinuate “illiteracy” doesn’t feel right.
Illiteracy is insulting…
Many of these same people are not super-comfortable with “data”, but to ask them if they “struggle” with data, or to suggest they “don’t understand” by claiming they are “data illiterate” is insulting (even if you think it’s true!).
Leadership is enticing…
The language you use is extremely important here. Instead of “literacy”, should you be talking about “leadership”? What if you framed it as: Improving data leadership. Could you harness the desirability of that skill – leadership – so that workshop and training titles played into people’s egos, instead of attacking their egos?
WHAT CAN YOU DO TO IMPROVE DATA LITERACY (LEADERSHIP) WITHIN YOUR OWN ORGANIZATION?
You might be directly involved with helping to improve data literacy within your own organization. Here are a few ideas on how to improve general data literacy/leadership:
Training and courses around data literacy.
While courses exist that focus on data analysis using programming languages such as R or Python, it might be better to focus skills development on more widely used software (such as Excel), which is more sustainable. Given the high turnover of staff within the sector, complex data analysis cannot normally be sustained once an advanced analyst leaves the field.
Donor funding to promote data use and the use of technology.
While the sector should not only rely on donors for pushing the agenda of data literacy forward, money is powerful. If NGOs and agencies are required to show data literacy in order to receive funding, this will drive a paradigm shift in becoming more data-driven as a sector. There are still big questions on how to fund interoperable tech systems in the sector to maximize the value of that funding in collaboration between multiple agencies. However, donors who can provide structures and settings for collaboration will be able to promote data literacy across the sector.
Capitalize on “trendy” knowledge – what do people want to know about because it makes them look intelligent?
In 2015/16, everyone wanted to know “how to collect digital data”. A couple of years later, most people had shifted – they wanted to know “how to analyze data” and “make a dashboard”. Now in 2018, GDPR, “Responsible Data”, and “Blockchain” are trending – people want to know about them so they can talk about them. While “trends” aren’t all we should be focusing on, they can often be the hook that gets people at all levels of our sector interested in taking their first steps forward in data literacy.
DATA LITERACY MEANS SOMETHING DIFFERENT FOR EACH PERSON
Data literacy means something completely different depending on who you are, your perspective within a programme, and what you use data for.
To the beneficiary of a programme…
data literacy might just mean understanding why data is being collected and what it is being used for. It means having the knowledge and power to give and withhold consent appropriately.
To a project manager…
data literacy might mean understanding indicator targets, progress, and the calculations behind those numbers, in addition to how different datasets relate to one another in a complex setting. Managers need to understand how data is coming together so that they can ask intelligent questions about their programme dashboards.
To an M&E officer…
data literacy might mean an understanding of statistical methods, random selection methodologies, how significant a result may be, and how to interpret results of indicator calculations. They may need to understand uncertainty within their data and be able to explain this easily to others.
To the Information Management team…
data literacy might mean understanding how to translate programme calculations into computer code. They may need to create data collection or data analysis or data visualization tools with an easy-to-understand user-interface. They may ultimately be relied upon to ensure the correctness of the final “number” or the final “product”.
To the data scientist…
data literacy might mean understanding some very complex statistical calculations, using computer languages and statistical packages to find trends, insights, and predictive capabilities within datasets.
To the management team…
data literacy might mean being able to use data results (graphs, charts, dashboards) to explain needs, results, and impact in order to convince and persuade. Using data in proposals to give a good basis for why a programme should exist or using data to explain progress to the board of directors, or even as a basis for why a new programme should start up….or close down.
To the donor…
data literacy might mean an understanding of a “good” needs assessment vs. a “poor” one in evaluating a project proposal, how to prioritize areas and amounts of funding, how to ask tough questions of an individual partner, how to be suspicious of numbers that may be too good to be true, how to evaluate quality vs. quantity, or how to see areas of collaboration between multiple partners. They need to use data to communicate international priorities to their own wider government, board, or citizens.
Use more precise wording
Data literacy means something different to everyone. So this priority can be interpreted in many different ways depending on who you are. Within your organization, frame this priority with a more precise wording. Here are some examples:
Improve everyone’s ability to raise important questions based on data.
Let’s get better at discussing our data results.
Improve our leadership in communicating the meaning behind data.
Develop our skills in analyzing and using data to create an impact.
Improve our use of data to inform our decisions.
This blog article was based on a recent session at MERL Tech UK 2018. Thanks to the many voices who contributed ideas. I’ve put my own spin on them to create this article – so if you disagree, the ideas are mine. And if you agree – kudos to the brilliant people at the conference!
by Christopher Robert, CEO of Dobility (Survey CTO). This post was originally published on March 15, 2018, on the Survey CTO blog.
Needs, markets, and innovation combine to produce technological change. This is as true in the international development sector as it is anywhere else. And within that sector, it’s as true in the broad category of MERL (monitoring and evaluation, research, and learning) technologies as it is in the narrower sub-category of digital data collection technologies. Here, I’ll consider the recent history of digital data collection technology as an example of MERL technology maturation – and as an example, more broadly, of the importance of market structure in shaping the evolution of a technology.
My basic observation is that, as digital data collection technology has matured, the same stakeholders have been involved – but the market structure has changed their relative power and influence over time. And it has been these very changes in power and influence that have changed the cost and nature of the technology itself.
First, when it comes to digital data collection in the development context, who are the stakeholders?
Donors. These are the primary actors who fund development work, evaluation of development policies and programs, and related research. There are mega-actors like USAID, Gates, and the UN agencies, but also many other charities, philanthropies, and public or nonprofit actors, from Catholic Charities to the U.S. Centers for Disease Control and Prevention.
Developers. These are the designers and software engineers involved in producing technology in the space. Some are students or university faculty, some are consultants, many work full-time for nonprofits or businesses in the space. (While some work on open-source initiatives in a voluntary capacity, that seems quite uncommon in practice. The vast majority of developers working on open-source projects in the space get paid for that work.)
Consultants and consulting agencies. These are the technologists and other specialists who help research and program teams use technology in the space. For example, they might help to set up servers and program digital survey instruments.
Researchers. These are the folks who do the more rigorous research or impact evaluations, generally applying social-science training in public health, economics, agriculture, or other related fields.
M&E professionals. These are the people responsible for program monitoring and evaluation. They are most often part of an implementing program team, but it’s also not uncommon to share more centralized (and specialized) M&E teams across programs or to conduct outside evaluations that more fully separate some M&E activities from the implementing program team.
IT professionals. These are the people responsible for information technology within those organizations implementing international development programs and/or carrying out MERL activities.
Program beneficiaries. These are the end beneficiaries meant to be aided by international development policies and programs. The vast majority of MERL activities are ultimately concerned with learning about these beneficiaries.
These different stakeholders have different needs and preferences, and the market for digital data collection technologies has changed over time – privileging different stakeholders in different ways. Two distinct stages seem clear, and a third is coming into focus:
The early days of donor-driven pilots and open source. These were the days of one-offs, building-your-own, and “pilotitis,” where donors and developers were effectively in charge and there was a costly additional layer of technical consultants between the donors/developers and the researchers and M&E professionals who had actual needs in the field. Costs were high, and some combination of donor and developer preferences reigned supreme.
Intensifying competition in program-adopted professional products. Over time, professional products emerged that began to directly market to – and serve – researchers and M&E professionals. Costs fell with economies of scale, and the preferences of actual users in the field suddenly started to matter in a more direct, tangible, and meaningful way.
Intensifying competition in IT-adopted professional products. Now that use of affordable, accessible, and effective data-collection technology has become ubiquitous, it’s natural for IT organizations to begin viewing it as a kind of core organizational infrastructure, to be adopted, supported, and managed by IT. This means that IT’s particular preferences and needs – like scale, standardization, integration, and compliance – start to become more central, and costs unfortunately rise.
While I still consider us to be in the glory days of the middle stage, where costs are low and end-users matter most, there are still plenty of projects and organizations living in that first stage of more costly pilots, open source projects, and one-offs. And I think that the writing’s very much on the wall when it comes to our progression toward the third stage, where IT comes to drive the space, innovation slows, and end-user needs are no longer dominant.
Full disclosure: I myself have long been a proponent of the middle phase, and I am proud that my social enterprise has been able to help graduate thousands of users from that costly first phase. So my enthusiasm for the middle phase began many years ago and in fact helped to launch Dobility.
THE EARLY DAYS OF DONOR-DRIVEN PILOTS AND OPEN SOURCE
In the beginning, there were pioneering developers, patient donors, and program or research teams all willing to take risks and invest in a better way to collect data from the field. They took cutting-edge technologies and found ways to fit them into some of the world’s most difficult, least-cutting-edge settings.
In these early days, it mattered a lot what could excite donors enough to open their checkbooks – and what would keep them excited enough to keep the checks coming. So the vital need for large and ongoing capital injections gave donors a lot of influence over what got done.
Developers also had a lot of sway. Donors couldn’t do anything without them, and they also didn’t really know how to actively manage them. If a developer said “no, that would be too hard or expensive” or even “that wouldn’t work,” what could the donor really say or do? They could cut off funding, but that kind of leverage only worked for the big stuff, the major milestones and the primary objectives. For that stuff, donors were definitely in charge. But for the hundreds or thousands of day-to-day decisions that go into any technology solution, it was the developers effectively in charge.
Actual end-users in the field – the researchers and M&E professionals who were piloting or even trying to use these solutions – might have had some solid ideas about how to guide the technology development, but they had essentially no levers of control. In practice, the solutions being built by the developers were often so technically-complex to configure and use that there was an additional layer of consultants (technical specialists) sitting between the developers and the end-users. But even if there wasn’t, the developers’ inevitable “no, sorry, that’s not feasible,” “we can’t realistically fit that into this release,” or simple silence was typically the end of the story for users in the field. What could they do?
Unfortunately, without meaning any harm, most developers react by pushing back on whatever is contrary to their own preferences (I say this as a lifelong developer myself). Something might seem like a hassle, or architecturally unclean, and so a developer will push back, say it’s a bad idea, drag their heels, even play out the clock. In the past five years of Dobility, there have been hundreds of cases where a developer has said something to the effect of “no, that’s too hard” or “that’s a bad idea” to things that have turned out to (a) take as little as an hour to actually complete and (b) provide massive amounts of benefit to end-users. There’s absolutely no malice involved, it’s just the way most of them/us are.
This stage lasted a long time – too long, in my view! – and an entire industry of technical consultants and paid open-source contributors grew up around an approach to digital data collection that didn’t quite embrace economies of scale and never quite privileged the needs or preferences of actual users in the field. Costs were high and complaints about “pilotitis” grew louder.
INTENSIFYING COMPETITION IN PROGRAM-ADOPTED PROFESSIONAL PRODUCTS
But ultimately, the protagonists of the early days succeeded in establishing and honing the core technologies, and in the process they helped to reveal just how much was common across projects of different kinds, even across sectors. Some of those protagonists also had the foresight and courage to release their technologies with the kinds of permissive open-source licenses that would allow professionalization and experimentation in service and support models. A new breed of professional products directly serving research, program, and M&E teams was born – in no small part out of a single, tremendously-successful open-source project, Open Data Kit (ODK).
These products tended to be sold directly to end-users, and were increasingly intended for those end-users to be able to use themselves, without the help of technical staff or consultants. For traditionalists of the first stage, this was a kind of heresy: it was considered gauche at best and morally wrong at worst to charge money for technology, and it was seen as some combination of impossible and naive to think that end-users could effectively deploy and manage these technologies without technical assistance.
In fact, the new class of professional products was not designed to be used entirely without assistance. But these products were designed to require as little assistance as possible, and the assistance came with the product instead of being provided by a separate (and separately-compensated) internal or external team.
A particularly successful breed of products, SurveyCTO among them, came to use a “Software as a Service” (SaaS) model that streamlined both product delivery and support, ramping up economies of scale and driving down costs in the process. When such products offered technical support free-of-charge as part of the purchase or subscription price, there was a built-in incentive to improve the product: since tech support was so costly to deliver, improving the product so that it required less support became one of the strongest incentives driving product development. Those who adopted the SaaS model not only had to earn every dollar of revenue from end-users, but they had to keep earning that revenue month in, month out, year in, year out, in order to retain business and therefore the revenue needed to pay the bills. (Read about other SaaS benefits for M&E in this recent DevResults post.)
It would be difficult to overstate the importance of these incentives to improve the product and earn revenue from end-users. They are nothing short of transformative. Particularly once there is active competition among vendors, users are squarely in charge. They control the money, their decisions make or break vendors, and so their preferences and needs are finally at the center.
Now, in addition to the “it’s heresy to charge money or think that end-users can wield this kind of technology” complaints that used to be more common, there started to be a different kind of complaint: there are too many solutions! The sheer number of digital data collection options is overwhelming. Some go so far as to decry the duplication of effort, to claim that the free market is inefficient or failing; they suggest that donors, consultants, or experts be put back in charge of resource allocation, to re-impose some semblance of sanity to the space.
But meanwhile, we’ve experienced a kind of golden age in terms of who can afford digital data collection technology, who can wield it effectively, and in what kinds of settings. There are a dizzying number of solutions – but most of them cater to a particular type of need, or have optimized their business model in a particular sort of way. Some, like us, rely nearly 100% on subscription revenues, others fund themselves primarily from service provision, and still others are trying interesting ways to cross-subsidize from bigger, richer users so that they can offer free or low-cost options to smaller, poorer ones. We’ve overcome pilotitis, economies of scale are finally kicking in, and I think that the social benefits have been tremendous.
INTENSIFYING COMPETITION IN IT-ADOPTED PROFESSIONAL PRODUCTS
It was the success of the first stage that laid the foundation for the second stage, and so too it has been the success of the second stage that has laid the foundation for the third: precisely because digital data collection technology has become so affordable, accessible, and ubiquitous, organizations are increasingly thinking that it should be IT departments that procure and manage that technology.
Part of the motivation is the very proliferation of options that I mentioned above. While economics and the historical success of capitalism have taught us that a marketplace thriving with competition is most often a very good thing, it’s less clear that a wide variety of options is good within any single organization. At the very least, there are very good reasons to want to standardize some software and processes, so that different people and teams can more effortlessly share knowledge and collaborate, and so that there can be some economies of scale in training, support, and compliance.
Imagine if every team used its own product and file format for writing documents, for example. It would be a total disaster! The frictions across and between teams would be enormous. And as data becomes more and more core to the operations of more organizations – the way that digital documents became core many years ago – it makes sense to want to standardize and scale data systems, to streamline integrations, just for efficiency purposes.
Growing compliance needs only up the ante. The arrival of the EU’s General Data Protection Regulation (GDPR) this year, for example, raises the stakes for EU-based (or even EU-touching) organizations considerably, imposing stiff new data privacy requirements and steep penalties for violations. Coming into compliance with GDPR and other data-security regulations will be effectively impossible if IT can’t play a more active role in the procurement, configuration, and ongoing management of data systems; and it will be impractical for IT to play such a role for a vast array of constantly-shifting technologies. After all, IT will require some degree of stability and scale.
But if IT takes over digital data collection technology, what changes? Does the golden age come to an end?
Potentially. And there are certainly very good reasons to worry.
First, changing who controls the dollars – who’s in charge of procurement – threatens to entirely up-end the current regime, where end-users are directly in charge and their needs and preferences are catered to by a growing body of vendors eager to earn their business.
It starts with the procurement process itself. When IT is in charge, procurement processes are long, intensive, and tend to result in a “winner take all” contract. After all, it makes sense that IT departments would want to take their time and choose carefully; they tend to be choosing solutions for the organization as a whole (or at least for some large class of users within the organization), and they most often intend to choose a solution, invest heavily in it, and have it work for as long as possible.
This very natural and appropriate method that IT uses to procure is radically different from the method used by research, program, and M&E teams. And it creates a radically different dynamic for vendors.
Vendors first have to buy into the idea of investing heavily in these procurement processes – which some may simply choose not to do. Then they have to ask themselves, “what do these IT folks care most about?” In order to win these procurements, they need to understand the core concerns driving the purchasing decision. As in the old saying “nobody ever got fired for choosing IBM,” safety, stability, and reputation are likely to be very important. Compliance issues are likely to matter a lot too, including the vendor’s established ability to meet new and evolving standards. Integrations with corporate systems are likely to count for a lot as well (e.g., integrating with internal data and identity-management systems).
Does it still matter how well the vendor meets the needs of end-users within the organization? Of course. But note the very important shift in the dynamic: vendors now have to get the IT folks to “yes” and so would be quite right to prioritize meeting their particular needs. Nobody will disagree that end-users ultimately matter, but meanwhile the focus will be on the decision-makers. The vendors that meet the decision-makers’ needs will live, the others will die. That’s simply one aspect of how a free market works.
Note also the subtle change in dynamic once a vendor wins a contract: the SaaS model where vendors had to re-earn every customer’s revenue month in, month out, is largely gone now. Even if the contract is formally structured as a subscription or has lots of exit options, the IT model for technology adoption is inherently stickier. There is a lot more lock-in in practice. Solutions are adopted, they’re invested in at large scale, and nobody wants to walk away from that investment. Innovation can easily slow, and nobody wants to repeat the pain of procurement and adoption in order to switch solutions.
And speaking of the pain of the procurement process: costs have been rising. After all, the procurement process itself is extremely costly to the vendor – especially when it loses, but even when it wins. So that’s got to get priced in somewhere. And then all of the compliance requirements, all of the integrations with corporate systems, all of that stuff’s really expensive too. What had been an inexpensive, flexible, off-the-shelf product can easily become far more expensive and far less flexible as it works itself through IT and compliance processes.
What had started out on a very positive note (“let’s standardize and scale, and comply with evolving data regulations”) has turned in a decidedly dystopian direction. It’s sounding pretty bad now, and you wouldn’t be wrong to think “wait, is this why a bunch of the products I use for work are so much more frustrating than the products I use as a consumer?” or “if Microsoft had to re-earn every user’s revenue for Excel, every month, how much better would it be?”
While I don’t think there’s anything wrong with the instinct for IT to take increasing control over digital data collection technologies, I do think that there’s plenty of reason to worry. There’s considerable risk that we lose the deep user orientation that has just been picking up momentum in the space.
WHERE WE’RE HEADED: STRIKING A BALANCE
If we don’t want to lose the benefits of a deep user orientation in this particular technology space, we will need to work pretty hard – and be awfully clever – to keep them. People will say “oh, but IT just needs to consult research, program, and M&E teams, include them in the process,” but that’s hogwash. Or rather, it’s woefully inadequate. The natural power of those controlling resources to bend the world to their preferences and needs is just too strong for mere consultation or inclusion to overcome.
And the thing is: what IT wants and needs is good. So the solution isn’t just “let’s not let them anywhere near this, let’s keep the end-users in charge.” No, that approach collapses under its own weight eventually, and certainly it can’t meet rising compliance requirements. It has its own weaknesses and inefficiencies.
What we need is an approach – a market structure – that allows the needs of IT and the needs of end-users both to matter to appropriate degrees.
With SurveyCTO, we’re currently in an interesting place: we’re becoming split between serving end-users and serving IT organizations. And I suppose as long as we’re split, with large parts of our revenue coming from each type of decision-maker, we remain incentivized to keep meeting everybody’s needs. But I see trouble on the horizon: the IT organizations can pay more, and more organizations are shifting in that direction… so once a large-enough proportion of our revenue starts coming from big, winner-take-all IT contracts, I fear that our incentives will be forever changed. In the language of economics, I think that we’re currently living in an unstable equilibrium. And I really want the next equilibrium to serve end-users as well as the last one!