
Confidence Not Competence: What’s Holding Women Back from Embracing Tech in Development

by Carmen Tedesco, DAI. Originally published here.

Throughout my life, I’ve heard women grumble about using technology—from my mom, from friends in school, and from work colleagues—yet these are highly educated, often extremely logical thinkers who excel at, well, Excel!

The irony of this situation has been troubling me in recent months. Why? Because there is a clear contrast between the attention paid to empowering women and girls through technology in low- and middle-income countries and the attention paid to the same in high-income environments.

As an international development community, we spend a lot of resources promoting the use of technology among women and girls within the communities where we work—with good results. And yet, as a community of women development practitioners, we are failing to embrace technology ourselves. The gender gap in science, technology, engineering, and mathematics (STEM) exists around the world, and society continues to fail women and girls by not expecting them to know much about technical matters. This plays out in our day-to-day work in the monitoring, evaluation, research, and learning (MERL) sector. Whether it’s learning new software to improve our results monitoring or using new mobile tools in the field, there seems to be a hesitance and lack of confidence, often accompanied by a self-deprecation that our male counterparts lack.

What is holding women back from embracing technology in our own work, even as we tout it for others in the field? These questions motivated me to take the topic to a broader audience at the recent MERL Tech Conference in Washington, D.C.


Panelists discuss their own experiences as women working in the tech space. From left to right: Dr. Patty Mechael (Co-founder and Policy Lead, HealthEnabled), Carmen Tedesco (author), Jaclyn Carlsen (Policy Advisor, Development Informatics team, USAID), and Priyanka Pathak (Principal, Samaj Studio).

But first, a bit of history.

How Did We Get Here?

In her article for the Center for Media Literacy, Margaret Benston explains: “In our society, boys and men are expected to learn about machines, tools and how things work. In addition, they absorb, ideally, a ‘technological world view’ that grew up along with industrial society. Such a world view emphasizes objectivity, rationality, control over nature, and distance from human emotions. Conversely, girls and women are not expected to know much about technical matters. Instead, they are to be good at interpersonal relationships and to focus on people and emotion.”

She goes on to outline how those differences play out when technology is seen as a language, one in which women “are silenced.” She writes: “It is very difficult for women to discuss technical problems, particularly experimental ones, with male peers—they either condescend or they want to perform whatever task is at issue themselves. In either case, asking a question or raising a problem in discussion is proof (if any is needed) that women don’t know what they are doing. Male students, needless to say, do not get this treatment.” An interesting literature review of gender differences in technology usage highlights a 2003 study finding that women experience more anxiety than men around IT use, which reduces their self-efficacy and increases their perception that IT requires more effort.

I organized a panel at MERL Tech where we discussed our experiences as women in tech working in monitoring, evaluation, and learning (MEL), some of the data behind the gender gap in STEM, and why women struggle to embrace technology.

Many conference attendees echoed the above findings, mentioning that tech savvy is seen as smart, but smart is not seen as feminine. Many women also hold misconceptions about what technology is. “Imposter syndrome,” or a fear of failure, has a real impact on the women in our lives, and men often compound women’s discomfort with tech by mocking or dismissing it, making many women even more hesitant to engage.

How Can We Fix This?

The Global Fund for Women states, “Access to technology, control of it, and the ability to create and shape it, is a fundamental issue of women’s human rights.” The Fund addresses this by “help[ing] end the gender technology gap and empower[ing] women and girls to create innovative solutions to advance equality in their communities.”

Based on our discussion, here are five tips to help bridge the technology gender divide within our own field.

  1. Be, or find, a mentor. You will benefit from mentors and allies in this space, whether you plan to go into a tech field or just want to ask a question without fear of looking uninformed.
  2. Become a role model where you can. Find allies, both men and women, who can help you build confidence.
  3. Increase representation. When women can be brought to the table in discussions of tech, they should be. Slowly, this will permeate the culture of the organization. Having more women involved in the process of explaining and building tech in our companies will normalize the use of tech and take away some of the gendered dynamics that exist now.
  4. Confront bias head-on. Addressing gender assumptions when they occur can be hard, but pointing out the bias is not enough. Countering the action with a specific recommendation for course correction works best.
  5. Build confidence. Personal development can play a role in building confidence, as can many of the points listed above. Confidence is the foundation for competence.

Both men and women should be aware of the history and social context behind women’s hesitation in the technology space. It is in all our best interests to be aware of this bias and find ways to help correct it. In taking some of these small steps, we can pave the way for increased confidence in the tech space for women.

Data quality in the age of lean data

by Daniel Ramirez-Raftree, MERL Tech support team.

Evolving data collection methods call for evolving quality assurance methods. In their session titled Data Quality in the Age of Lean Data, Sam Schueth of Intermedia, Woubedle Alemayehu of Oxford Policy Management, Julie Peachey of the Progress out of Poverty Index, and Christina Villella of MEASURE Evaluation discussed problems, solutions, and ethics related to digital data collection methods. [Bios and background materials here]

Sam opened the conversation by comparing the quality assurance and control challenges of paper-assisted personal interviewing (PAPI) with those of digitally assisted personal interviewing (DAPI). Across both methods, the fundamental problem is that the delivered data is a black box: it comes in, it’s turned into numbers, and it’s disseminated, but this process alone reveals little about what actually happened on the ground.

During the age of PAPI, this was dealt with by sending independent quality control teams to the field to review the paper questionnaire that was administered and perform spot checks by visiting random homes to validate data accuracy. Under DAPI, the quality control process becomes remote. Survey administrators can now schedule survey sessions to be recorded automatically and without the interviewer’s knowledge, thus effectively gathering a random sample of interviews that can give them a sense of how well the sessions were conducted. Additionally, it is now possible to use GPS to track the interviewers’ movements and verify the range of households visited. The key point here is that with some creativity, new technological capacities can be used to ensure higher data quality.
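To make the GPS idea concrete, here is a minimal sketch, not from the session itself, of how one might flag interviews captured suspiciously far from the sampled household. The record fields and the half-kilometer threshold are hypothetical.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def flag_out_of_range(interviews, max_km=0.5):
    """Return (interview id, distance) pairs for interviews recorded
    farther than max_km from the sampled household's coordinates.

    Each interview is a dict with hypothetical keys: 'id',
    'device_lat', 'device_lon', 'household_lat', 'household_lon'.
    """
    flagged = []
    for iv in interviews:
        d = haversine_km(iv["device_lat"], iv["device_lon"],
                         iv["household_lat"], iv["household_lon"])
        if d > max_km:
            flagged.append((iv["id"], round(d, 2)))
    return flagged
```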

Woubedle presented next and elaborated on the theme of quality control for DAPI. She made the point that data quality checks can be automated, but that this requires pre-implementation decisions about which indicators to monitor and how to manage the data. The amount of work put into this upfront programming and design directly affects the ultimate data quality.

One useful tool is a progress indicator. Here, one collects information on trends such as the number of surveys attempted compared to those completed. Processing this data could lead to further questions about whether there is a pattern in the populations that did or did not complete the survey, thus alerting researchers to potential bias. Additionally, one can calculate the average time taken to complete a survey and use it to identify outliers that took too little or too long to finish. Another good practice is to embed consistency checks in the survey itself; for example, making certain questions required or including two questions that, if answered in a particular way, would be logically contradictory, thus signaling a problem in either the question design or the survey responses. One more practice could be to apply constraints to the survey, depending on the households one is working with.
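As a rough sketch of the indicators described above, the snippet below computes a completion rate, flags duration outliers, and applies one embedded consistency check. All field names and the two-standard-deviation cutoff are hypothetical, not from the session.

```python
from statistics import mean, stdev

def progress_indicators(surveys):
    """Compute simple quality indicators over a list of survey records.

    Each record is a dict with hypothetical keys:
    'status' ('completed' or 'attempted') and 'duration_min'.
    """
    completed = [s for s in surveys if s["status"] == "completed"]
    completion_rate = len(completed) / len(surveys)

    durations = [s["duration_min"] for s in completed]
    mu, sigma = mean(durations), stdev(durations)
    # Sessions more than two standard deviations from the mean duration
    # are suspiciously fast or slow and warrant review.
    outliers = [s for s in completed if abs(s["duration_min"] - mu) > 2 * sigma]

    return {"completion_rate": completion_rate, "duration_outliers": outliers}

def consistency_flags(response):
    """One embedded consistency check: a household reporting zero members
    cannot also report school-age children (hypothetical fields)."""
    flags = []
    if response.get("household_size") == 0 and response.get("children_in_school", 0) > 0:
        flags.append("household_size contradicts children_in_school")
    return flags
```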

After this discussion, Julie spoke about research that was done to assess the quality of different methods for measuring the Progress out of Poverty Index (PPI). She began by explaining that the PPI is a household level poverty measurement tool unique to each country. To create it, the answers to 10 questions about a household’s characteristics and asset ownership are scored to compute the likelihood that the household is living below the poverty line. It is a simple, yet effective method to evaluate household level poverty. The research project Julie described set out to determine if the process of collecting data to create the PPI could be made less expensive by using SMS, IVR or phone calls.
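To illustrate the mechanics of scorecard-style scoring, here is a deliberately simplified sketch. A real PPI uses 10 country-specific questions with empirically derived points and published lookup tables; the questions, points, and likelihoods below are invented for illustration only.

```python
# Hypothetical scorecard: each answer maps to a number of points.
SCORECARD = {
    "roof_material": {"thatch": 0, "iron_sheet": 6, "tile": 9},
    "owns_phone": {"no": 0, "yes": 8},
    "cooking_fuel": {"firewood": 0, "charcoal": 4, "gas_or_electric": 11},
}

# Hypothetical lookup table: minimum total score -> likelihood that the
# household lives below the national poverty line.
POVERTY_LIKELIHOOD = [(0, 0.82), (10, 0.55), (20, 0.28), (30, 0.09)]

def ppi_score(answers):
    """Sum the points earned for each answered question."""
    return sum(SCORECARD[question][answer] for question, answer in answers.items())

def poverty_likelihood(score):
    """Map a total score to a poverty likelihood via the lookup table."""
    likelihood = POVERTY_LIKELIHOOD[0][1]
    for threshold, p in POVERTY_LIKELIHOOD:
        if score >= threshold:
            likelihood = p
    return likelihood

household = {"roof_material": "iron_sheet", "owns_phone": "yes", "cooking_fuel": "charcoal"}
score = ppi_score(household)      # 18 points
print(poverty_likelihood(score))  # 0.55
```

The scoring itself is trivial; the statistical work behind a real PPI lies in deriving the points and the score-to-likelihood table from national household survey data.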

Grameen Foundation conducted the study and tested four survey methods for gathering data: 1) in-person and at home, 2) in-person and away from home, 3) in-person and over the phone, and 4) automated and over the phone. Further, it randomized key aspects of the study, including the interview method and the enumerator.

Ultimately, Grameen Foundation determined that the interview method does affect completion rates, responses to questions, and the resulting estimated poverty rates. However, the differences in estimated poverty rates were likely due not to the method itself but to the completion rates (which were affected by the method). Thus, as long as completion rates don’t differ significantly, neither will the results. Given that the in-person at-home and in-person away-from-home surveys had similar completion rates (84% and 91%, respectively), either could feasibly be used with little deviation in output. On the other hand, in-person over-the-phone surveys had a 60% completion rate and automated over-the-phone surveys had a 12% completion rate, making both methods fairly problematic. With this understanding, developers of the PPI have an evidence-based sense of the quality of their data.

This case study illustrates the possibility of testing data quality before any changes are made to collection methods, which is a powerful strategy for minimizing the use of low-quality data.

Christina closed the session with a presentation on ethics in data collection. She spoke about digital health data ethics in particular, which is the intersection of public health ethics, clinical ethics, and information systems security. She grounded her discussion in MEASURE Evaluation’s experience thinking through ethical problems, which include: the vulnerability of devices where data is collected and stored, the privacy and confidentiality of the data on these devices, the effect of interoperability on privacy, data loss if the device is damaged, and the possibility of wastefully collecting unnecessary data.

To explore these issues, MEASURE conducted a landscape assessment in Kenya and Tanzania and analyzed peer-reviewed research to identify key ethical themes. Five themes emerged: 1) legal frameworks and the need for laws, 2) institutional structures to oversee implementation and enforcement, 3) information systems security knowledge (especially for countries that may lack the expertise), 4) knowledge of the context and users (are clients comfortable with their data being used?), and 5) incorporating tools and standard operating procedures.

Based on this framework, MEASURE has made progress toward rolling out tools that can help institute a stronger ethics infrastructure. They’ve been developing guidelines that countries can use to develop policies, building health informatics capacity through a university course, and working with countries to strengthen their health information systems governance structures.

Finally, Christina explained her take on how ethics are related to data quality. In her view, it comes down to trust. If a device is lost, this may lead to incomplete data. If the clients are mistrustful, this could lead to inaccurate data. If a health worker is unable to check or clean data, this could create a lack of confidence. Each of these risks can lead to the erosion of data integrity.

Register for MERL Tech London, March 19-20, 2018! Session ideas due November 10th.