Protection and Safeguarding, meet Trust and Safety


Image (ChatGPT 4o): A 15-year-old girl in a middle-income country looking intently at her phone.

In late July, as part of my work with iMedia, I attended TrustCon, an annual conference that brings together Trust & Safety (T&S) professionals — the people who work to keep digital services and platforms (both large and small) safe for those who use them by protecting them from harmful and unwanted experiences.

Platforms have been blamed for multiple ills: the crumbling of democracy, genocide, mental health issues in children, an increase in online child sexual abuse material, and a general breakdown of societal norms. T&S teams work to prevent and address these challenges — from protecting election integrity and curbing the spread of mis- and disinformation and extremism; to privacy and human rights; to addressing the offline risks that people who use Lyft, Airbnb or Rover might encounter; to detecting and reporting all kinds of online abuse, including hate speech, gender-based violence, grooming, and child sexual abuse and exploitation.

My own work with development and humanitarian organizations often overlaps with trust and safety topics, but in our sector, we refer to our work as online safety, online protection and/or digital safeguarding. While it should have been obvious, somehow I hadn’t realized that there were entire teams within platforms working on online protection, digital safeguarding and online safety.

The notorious unwillingness of big tech to put people’s safety and societal well-being over their own profits and growth had always left me with a negative impression. This wasn’t helped by my experiences with platform representatives at social sector events – I’d only been exposed to corporate PR people keeping to tight, pre-established talking points. So, I was pleasantly surprised to find so many of ‘my people’ (the T&S teams) working within platforms and fighting similar battles. What an opportunity for learning!

Female-coded safety work

Like protection and safeguarding in our sector, T&S is underfunded, undervalued, and largely ‘female-coded’. Australia’s eSafety Commissioner Julie Inman-Grant wrote about this after being recognized as a role model for women in cybersecurity. She highlighted the importance of valuing the work of women who combat tech-facilitated gender-based violence (and other harms that disproportionately affect women and girls online) as part of the cyber profession. She described cybersecurity as a three-legged stool comprising security, privacy and safety, yet noted that safety is often overlooked.

Image: Hunsberger’s Venn diagram of security, privacy and safety.

Inman-Grant went on to say that “Security is often framed as warfare — for example, defending against attacks and exploiting weaknesses — while Trust & Safety is more oriented around justice; the focus is on fairness, empathy and inclusivity…. T&S has distinctly female-coded ideals, and the skills needed to thrive in the profession tend to be female-coded as well: skills like emotional intelligence, compromise, cultural sensitivity, and communication.”

Along these same lines, Alice Hunsberger notes in Everything in Moderation that mature companies generally have a Chief Security Officer and/or Chief Privacy Officer, yet it’s rare to see T&S represented in the C-suite, despite the fact that “all three industries focus on protecting users and data, require a deep understanding of technology, and have complex risk frameworks and regulatory oversight.” Hunsberger points out that cybersecurity and privacy overlap with T&S, yet T&S is not afforded the same respect and resources, as is common for fields coded as ‘female’.

The development and humanitarian sectors are only beginning to understand that as our programming becomes more digital, we need to invest in all three legs of the stool. While cybersecurity is understood as a critical capacity, very few organizations have gone as far as appointing data protection or data privacy officers, and many are still working to integrate digital requirements into their safeguarding and protection roles and their organizational policies overall, and to train up their safeguarding and protection teams in these areas.

Decolonizing T&S

It’s no secret that platforms de-emphasize investments in T&S for less lucrative languages and locations, leaving some people and societies at greater risk of platform-related harms. The continued expansion of US-based social media and AI platforms and services to global majority countries makes it imperative that platforms’ T&S leads (based largely in the US), budgets (focused largely on the US), and governance (with decision-making largely happening in the US) account for global majority contexts and realities.

Yet power and decision-making within most big tech companies remain firmly in Silicon Valley, despite users of the largest platforms being based all over the world and a good portion of content moderation and other tasks being outsourced to low-paid workers in the majority world. As one TrustCon panelist pointed out, there is a missed opportunity to seek feedback and guidance from these workers, who, because they moderate platform content every day, have a deep understanding of emerging harms in local contexts. Global majority civil society organizations also expressed difficulties in accessing and meaningfully influencing how digital platforms and applications do business in their countries, even when those platforms were the sites of serious societal harms.

A small group of researchers discussed decolonizing T&S in a session I attended on research in global majority countries. Drawing on a study she was involved in, Farhana Shahid lays out in more detail how scholars in the majority world struggle with colonialism in content moderation research:

“The NLP researchers working in Tamil, Kiswahili, and Quechua stated that the biggest roadblock in addressing online harms is the lack of high-quality digital data. This stems from the colonial legacy in NLP, which favors digital inclusion of English and a handful of European languages, while neglecting linguistic diversity in the Majority World…. African NLP researchers working on Kiswahili complained that tech companies often denied them access to data if they did not have prior publications, which is difficult for these researchers since they lack funding to support and publish their work on low-resource languages.

“Things worsened when tech companies began charging exorbitant fees for data access, axed researchers’ access to existing data, and blocked open-source tools that independent researchers used to scrape online content…. This manifestation of the resource gap is deeply rooted in the colonial legacy, which prioritizes strengthening Western institutions as knowledge-producers and solution-makers to global problems, rather than building local research capacity. Tech companies, many of which are based in the West, exacerbate this gap further by gatekeeping and monetizing user-generated content in low-resource languages.”

Despite the huge spread of big tech platforms globally, there remains a dearth of research about platforms (and the harms they can cause) in global majority countries. Topic areas where participants in the session wanted to see (and do!) more research included:

  • How to hold platforms accountable for prioritizing the safety of users in the majority world, including the effectiveness of regulation for holding platforms accountable
  • How to decolonize Trust and Safety
  • What does it look like to work with influencers in the global majority? What are the benefits and challenges?
  • How can we better understand local pathways from uptake of general misinformation to uptake and engagement with political disinformation in the majority world?
  • What do user personas look like in the majority world and how do we understand them if there is intermittent access to the Internet and no platform data to work with?
  • What are some ways to manage mis- and disinformation when it requires face-to-face, off-platform work and when it’s in non-English languages?
  • How do socioeconomic inequities amplify harm and abuse for global majority platform users?
  • How can we balance granular cultural contexts and local nuances with scalability when it comes to T&S?
  • How to more effectively target social and behavior change messaging and activities on digital platforms to users who do not behave the same as users in the US and Europe

This group of researchers is working to support online moderators to meet and share expertise, experiences, and stories, and to advocate together for research that centers their needs; to create space for academic, industry, and civil society researchers to collaborate; and to encourage online moderators, researchers, and civil society experts to discuss the kinds of research that moderators would like to see conducted about them and their work.

Collaboration across T&S, Safeguarding, Protection and Online Safety

Image (ChatGPT 4o): A diverse group of trust and safety and NGO staff members sitting around a table discussing protection, safety and safeguarding.

The presence of quite a few consulting firms at TrustCon sparked speculation that regulation is finally on the horizon in the US. The T&S profession may need to professionalize to meet these regulations, which could go further than cybersecurity and privacy requirements.

It remains to be seen how platforms will manage stricter regulation considering that over the past few years, many downsized their T&S teams and/or terminated large numbers of their T&S staff working on core issues like election integrity, human rights, and prevention of online child sexual abuse and exploitation. (For an excellent play-by-play of how Elon Musk decimated Twitter’s T&S team, I recommend reading Extremely Hardcore: Inside Elon Musk’s Twitter by Zoë Schiffer).

Additionally, the growth and use of Generative AI raises big questions about how new harms might manifest globally. Some areas where GenAI is already enabling harm include:

  • non-consensual pornography and child sexual abuse material (CSAM), the most common types of harmful synthetic media;
  • proliferation of spam and low-quality websites that give harmful advice;
  • targeted scams and spear phishing (for example, scraping someone’s voice from social media and using it to scam others out of money);
  • creation and spread of decontextualized and misleading information.

If these trends continue, with greater levels of harmful content and conduct online and reduced T&S teams at platforms, the burden may fall on the shoulders of safeguarding and protection staff in civil society organizations. These teams will continue to see their roles expand to include data protection, online safety, responding to tech-facilitated gender-based violence, and online safeguarding and protection. It will be important for civil society organizations to continue investing in training and skill-building around digital harm, and to engage more with majority world T&S researchers and current and former T&S staff. Many global and local human rights organizations, child- and gender-focused organizations, and humanitarian organizations are deep into this work and will need increased funding as platforms continue to step back while the challenges keep growing.

While TrustCon is still very much an ‘industry’ conference, over the past three years the number of local and global civil society organizations in attendance has grown. The increasing digitalization of social sector and nonprofit work signals that organizations should be paying more attention to T&S. The shrinking of T&S teams at platforms may place responsibility for digital protection and safeguarding in the hands of civil society organizations. Working more closely together could help us learn from one another and strengthen the third leg of the proverbial security, privacy and safety stool.
