The Online Safety Report 2024 | TrustLab



Foreword by Tom Siegel


Tom Siegel

Co-Founder & CEO
As online safety threats continue to grow at a galloping pace, the fight against harmful content is more important than ever.
Trust & Safety teams are at the forefront of that fight.
The past 3-5 years have been turbulent for the Trust & Safety industry. Economic uncertainties led to widespread layoffs, while emerging online regulations increased pressure on teams, among other challenges.
Despite these setbacks, the commitment and resilience of Trust & Safety professionals have never wavered. They continue to innovate and adapt, employing advanced technologies to detect and mitigate threats more effectively.
At TrustLab, we believe in empowering these professionals by giving them the insights and tools they need to navigate this rapidly evolving landscape. To further explore these issues and provide valuable insights to the Trust & Safety community, we are proud to present our Online Safety Report.
This study draws from hundreds of responses and in-depth interviews with over 100 Trust & Safety professionals across various industries such as social media, dating, gaming, and marketplaces, shining a light on their most pressing challenges, emerging threats, and the innovative ways these teams combat online harm.
As we unpack these findings and cast light on where the industry is headed next, we invite you to join us in shaping a safer digital future for all.

About the Online Safety Survey

A Blend of Video Surveys & In-depth Interviews
This report includes results from a VideoAsk survey circulated within the Trust & Safety community, along with in-depth discussions with Trust & Safety professionals across various industries such as social media, dating, gaming, marketplaces, and more.
Through a blend of quantitative data and qualitative insights, the report seeks to equip Trust & Safety professionals with valuable knowledge and practical tools to help them navigate and address emerging threats and issues in the Online Safety space.
💡 In total, we had over 30 hours of conversations with Trust & Safety professionals, from Heads of Trust & Safety to Project Managers, Founders, and policy folks.

We spoke to 100+ T&S professionals; here’s where they sit:

Industry

Gig = Convenient, on-demand services | Web Services = Tech collaboration tools
“Other” includes: Non-profit, Gaming, and Consultancy Services

Role

“T&S Leadership” includes: Heads of Trust & Safety, Managers, and Team Leads
“Other” includes: Individual contributors and platform-specific managers

Key Themes Uncovered

1. Staying ahead of emerging threats is a universal struggle

Staying ahead of new types of abuse, threats, and sophisticated bad actors is a challenge for all Trust & Safety teams. Today, it is mostly done manually or with the help of user reports; there is no systematic approach to proactive threat detection.
2. Online Safety metrics are still elusive

Trust & Safety teams rely on customer feedback and industry benchmarks to evaluate their efforts, but there’s no clear framework or north star when it comes to metrics.
“Budget cuts and mass layoffs have created and will continue to create massive policy and operations loopholes for bad actors to exploit.”
3. There’s a lack of resources and desire to invest in T&S

Although most participants couldn’t disclose information on budget changes, many stated that this was a challenge across the board: investment is often made when things are “already on fire,” not as a preemptive measure (e.g., investing in tools ahead of time).
4. Content-Level vs. Actor-Level Signals

While content-level signals are essential for daily moderation, actor-level signals uncover deeper abuse patterns. These two tasks require different tools and efforts, with actor signals posing a bigger challenge to platforms.
63% of participants mentioned “staying ahead of emerging threats” as one of the biggest challenges for their team.
46% of participants said that scaling Trust & Safety Operations with User growth was a major challenge.
5. 3P tools are not covering all Trust & Safety bases

Existing third-party solutions are not meeting the nuanced requirements of Trust & Safety operations. Teams are either using up internal resources to build tools that fit their unique requirements or spending significant amounts on external services.
6. Platforms overpay for human moderation, but automation falls short

Online platforms looking for safety solutions struggle to balance human moderation with automation: offshore teams bring high costs and management challenges, while automated systems bring complexity and support issues.
“Off the shelf classifiers are giving us bananas and saying they’re explicit. It costs us more to fine tune, and hurts user experience.”

Who handles Online Safety?

The obvious answer might be "whoever the Trust & Safety leader is," but in reality, we saw a mix of roles, teams, and levels handling online safety. This shows that companies often don't prioritize online safety, which many pointed out can lead to serious consequences.

Which Team Handles Online Safety?

Online Safety ownership is still fragmented: although many companies put their T&S teams in charge, many others rely on teams that are not specialized in online safety to tackle issues.
The consequence, besides the obvious pile-up of user-safety issues, is a huge mess to clean up once a Trust & Safety professional (hopefully) comes along.
“Leadership doesn’t see bad content on the platform, so they don’t think that they need to invest in T&S.”
These answers are not mutually exclusive, as participants were allowed to pick more than one.

Who Makes Trust & Safety Decisions?

Online Safety decisions are mostly made by Trust & Safety leadership or executives.
Decisions are usually budget-related issues (such as investment in resources) or pressing issues, like a new threat that requires action.
Many respondents expressed frustration with decision-making: it’s often reactive rather than proactive, with leadership only on high alert once “shit has hit the fan.”
The majority of the time, decisions about user safety are not being made by trained T&S professionals, a frustration shared by many in the industry.
These answers are not mutually exclusive, as participants were allowed to pick more than one.

What does “Online Safety” mean for the company?

These answers are not mutually exclusive, as participants were allowed to pick more than one.
Different industries have different priorities. For example, Marketplaces often spend more time preventing fraud, whilst Social Media and Dating Apps are working towards a safe and respectful environment.

Biggest Challenges for Trust & Safety Professionals

One of the most interesting findings was that different companies had unique "trigger events" that finally pushed their leadership to invest in Trust & Safety. However, the greatest threats and challenges teams face were surprisingly similar across the board.

What Triggered Bigger Trust & Safety Concerns?

The response to this question was mixed, depending not only on industry but also on the culture and size of the company.
Companies whose executive leaders have Trust & Safety experience are more likely to see the value in Online Safety initiatives from the get-go.
However, in most cases, Trust & Safety only comes to the forefront during an emergency, such as user safety issues that result in negative press, litigation, or regulatory fines.

Biggest Challenges for Trust & Safety Today

These answers are not mutually exclusive, as participants were allowed to pick more than one.
“If there was a tool, and if you’re aware of any, to help feed us those emerging threats, I’d be very interested in that.”

Biggest Emerging Threats/Challenges

These answers are not mutually exclusive, as participants were allowed to pick more than one.

Tracking Emerging Threats

One of the most common challenges identified by professionals in charge of Online Safety was the difficulty of identifying emerging threats. Finding and squashing new threats is still a game of whac-a-mole for most, and current solutions are time-consuming and impossible to scale.

User Feedback

Many teams rely heavily on user reports to detect and respond to new threats. Although this highlights the value of direct user input, it shows a reactive approach to threat detection, which ultimately isn’t scalable.

Manual Monitoring

Organizations use anomaly detection to spot unusual activities like unexpected payment failures, alongside regular team discussions and manual checks. Although this is a more proactive approach, a lot can slip through the cracks.
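To make this concrete, below is a minimal sketch of the kind of anomaly detection teams described, using a hypothetical daily payment-failure-rate series; the rolling window and z-score threshold are our illustrative assumptions, not values any participant reported.

```python
from statistics import mean, stdev

def flag_anomalies(daily_rates, window=7, z_threshold=3.0):
    """Return (index, z-score) pairs for days that deviate sharply
    from the rolling baseline of the preceding `window` days."""
    anomalies = []
    for i in range(window, len(daily_rates)):
        baseline = daily_rates[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline, z-score undefined
        z = (daily_rates[i] - mu) / sigma
        if abs(z) >= z_threshold:
            anomalies.append((i, round(z, 1)))
    return anomalies

# Hypothetical daily payment-failure rates: a quiet week, then a spike.
rates = [0.011, 0.012, 0.010, 0.011, 0.013, 0.012, 0.011, 0.045]
print(flag_anomalies(rates))  # the spike on day 7 is flagged
```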

External Insights

Information from media, industry reports, and collaborations with experts is one of the main tools for understanding emerging threats. This provides a broader perspective but fails to address platform nuances.
💡 One of the biggest pain-points for Trust & Safety teams is the lack of data that can help with tracking emerging threats. Some teams use a combination of tools to gauge anomalies, but there isn’t a straightforward solution to emerging threat signals.

Content vs. Actor-Level Signals

One of the trends we identified is the need to look at both actor-level and content-level signals. Both are key, but one is more challenging and more strategically relevant than the other.

Content-Level Signals

Content-level signals focus on specific pieces of content, independent of the user who shared it.
This often involves looking at text, images, video, or other media to determine whether the content complies with platform Terms of Service and local/international regulations, or otherwise poses a risk to users.
Platforms often use automated systems to review content at this level; however, content-level analysis doesn’t capture recurring problematic behavior by users, which, if tracked, could help preemptively address issues.
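As a rough illustration (not any participant’s actual system), a content-level check scores a single item against per-policy thresholds with no knowledge of its author. The classify stub below is a toy stand-in for whatever model or vendor API a platform actually uses, and the thresholds are invented:

```python
from typing import Dict, List

# Hypothetical per-policy thresholds; real ones are tuned per platform.
THRESHOLDS: Dict[str, float] = {"hate": 0.85, "sexual": 0.90, "spam": 0.70}

def classify(text: str) -> Dict[str, float]:
    """Toy stand-in for a real classifier or vendor API: returns crude
    keyword-based policy scores in [0, 1]."""
    lowered = text.lower()
    return {
        "spam": 0.95 if "buy now" in lowered else 0.05,
        "hate": 0.10,    # a real model would score these properly
        "sexual": 0.10,
    }

def moderate_content(text: str) -> List[str]:
    """Content-level check: score one item, author unknown, and return
    the policies it appears to violate."""
    scores = classify(text)
    return [p for p, t in THRESHOLDS.items() if scores.get(p, 0.0) >= t]

print(moderate_content("BUY NOW!!! limited offer"))  # ['spam']
```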

Actor-Level Signals

Actor-level signals focus on the behavior of users, taking into account their history and patterns of interaction on the platform.
This approach is crucial for identifying users who might repeatedly engage in harmful behavior, as it allows for early detection and preemptive action against those who pose ongoing risks to the platform and its community.
A few survey participants expressed a strong desire for actor-level signals shared across platforms, so that known bad actors can be identified before they even enter a platform.
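Here is a hedged sketch of what an actor-level signal can look like: a user’s enforcement history aggregated into a time-decayed risk score, so repeat offenders surface before their next individual post does. The event types, weights, and half-life are all illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative enforcement-event weights; real ones depend on policy severity.
WEIGHTS = {"content_removed": 3.0, "user_reported": 1.0, "warning": 2.0}

@dataclass
class Event:
    kind: str        # e.g. "content_removed"
    when: datetime

def actor_risk(events, now, half_life_days=30.0):
    """Sum weighted enforcement events, exponentially decaying older ones
    so recent behavior dominates the score."""
    score = 0.0
    for e in events:
        age = (now - e.when).days
        score += WEIGHTS.get(e.kind, 1.0) * 0.5 ** (age / half_life_days)
    return score

now = datetime(2024, 7, 1)
history = [Event("user_reported", datetime(2024, 3, 1)),   # old, decays away
           Event("content_removed", datetime(2024, 6, 25)),
           Event("content_removed", datetime(2024, 6, 29))]
print(round(actor_risk(history, now), 2))  # recent removals dominate
```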
💡 While content signals are essential for daily moderation, actor signals offer strategic benefits by uncovering and addressing deeper abuse patterns. This is key for platforms striving to remain safe and resist attacks by sophisticated or persistent bad actors.

Metrics & Tools

While online safety metrics aren't always a priority, their value in demonstrating Trust & Safety effectiveness and facilitating team learning and improvement is widely recognized. At the same time, we found that there wasn’t a universal third party solution for Trust & Safety needs.

What metrics are used to measure Online Safety?

This includes speed to resolve issues!
These answers are not mutually exclusive, as participants were allowed to pick more than one.
Note: A reduction in the number of reported incidents doesn't necessarily mean your platform is safer. It could also indicate bugs in your reporting mechanisms or a decline in user trust in your ability to handle escalations.
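To illustrate that caveat with invented numbers: raw report counts can fall while the per-user report rate rises and the reporting funnel degrades, which points at a reporting problem rather than a safer platform:

```python
def reports_per_1k_users(reports: int, mau: int) -> float:
    """Normalize raw report counts by monthly active users."""
    return 1000 * reports / mau

# Invented numbers: raw reports fell 16%, which looks like progress...
may  = {"reports": 5_000, "mau": 2_000_000, "report_flow_completion": 0.92}
june = {"reports": 4_200, "mau": 1_500_000, "report_flow_completion": 0.61}

print(reports_per_1k_users(may["reports"], may["mau"]))    # 2.5
print(reports_per_1k_users(june["reports"], june["mau"]))  # 2.8 -- rate rose
# ...but the per-user report rate went up, and the share of users finishing
# the report flow collapsed: a hint at a reporting bug or eroding trust,
# not a safer platform.
```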

Most used 3P tools for Online Safety Efforts

Data privacy management, compliance, legal reporting, and other tools: 22.3%

Where third-party tools fell short

Contextual Understanding

Third-party tools often fail to grasp the context of user interactions, leading to frequent false positives where benign content is mistakenly flagged as harmful. For example, one tool flagged a picture of a cloud as drugs because its shape resembled smoke.

Integration Challenges

Many teams opt to build in-house solutions because most third-party tools are too burdensome to integrate with existing systems and workflows. Aligning a tool’s operations (such as labelling) with specific policy needs was also a recurring pain-point.

Handling of Nuanced Language

Automated systems struggle with nuances like sarcasm and cultural slang, once again producing false positives that create a ton of extra work for teams. One participant noted an incident where specific fetish slang on a dating app was flagged as harmful content.

Limited Customization and Flexibility

Many third-party tools offer limited customization options, creating misalignments with specific community guidelines and making them unable to handle sensitive topics effectively.

Human Moderation vs. Automation: No Perfect Solution

The problem with human moderation

Cost/volume graph.
Offshore teams are difficult to manage and deliver questionable quality, and their costs grow in line with platforms’ user bases.

The problem with automation

Effective gains/time graph.
Automation rarely reaches its full potential due to extreme complexity and lack of ongoing technical support.
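A toy cost model captures the shape of the trade-off in the two graphs above; every rate and figure here is invented for illustration, not real pricing, and it deliberately ignores automation’s quality gaps:

```python
def human_cost(items: int, per_review: float = 0.10) -> float:
    """Human moderation: cost scales roughly linearly with content volume."""
    return items * per_review

def automation_cost(items: int, fixed: float = 50_000.0,
                    per_item: float = 0.002) -> float:
    """Automation: large fixed build/maintenance cost, tiny marginal cost."""
    return fixed + items * per_item

for volume in (100_000, 1_000_000, 10_000_000):
    print(f"{volume:>10,} items/mo: human ${human_cost(volume):>12,.0f}"
          f" vs automation ${automation_cost(volume):>12,.0f}")
# With these invented rates the curves cross near ~510k items/month;
# automation's complexity and support issues are a separate axis that
# this toy model ignores.
```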

We asked Trust & Safety Professionals what they would do if they had a magic wand

More Automation and AI

There's a strong desire for advanced automation and AI to handle content moderation, threat detection, and compliance more effectively.

Team Support

Improving the mental wellness and workload of Trust & Safety teams is a priority, with suggestions for increased staffing and better training.

Strategic Leadership

Establishing strategic leadership roles, like a Head of Safety, would ensure a proactive rather than reactive approach to user safety.

Tools & Systems Flexibility

Scalable and adaptable systems are crucial, especially to handle growth and integrate new solutions without major disruptions.

Full visibility

There is a strong desire for systems and tools that can fully monitor and detect risky user behavior and content in real-time, and address threats swiftly.

What’s next for Trust & Safety?

This report not only highlights the dedication and expertise of those in the Trust & Safety field but also underscores the urgent need for continued innovation and support.
As we look to the future, it is clear that the role of Trust & Safety teams will become even more critical in ensuring the integrity and security of online spaces for users, businesses, and governments.
Our findings reveal that staying ahead of emerging threats is a universal struggle, and that the industry faces an ongoing battle to balance human moderation with automation: two pieces of the puzzle that we, at TrustLab, are working hard to solve.
In future reports, we want to explore how online harm translates into offline harm, delve into the ethics of policy writing, and look further into cross-functional collaboration within Trust & Safety teams.
Until then, we'll continue to monitor the industry's progress in developing more comprehensive online safety metrics and the evolution of third-party tools to meet the nuanced requirements of Trust & Safety operations.
Thank you for your commitment to this vital work. Together, we can build a safer, more trustworthy digital world.
Sincerely,
Tom Siegel
CEO, TrustLab

About TrustLab

Since 2019, TrustLab has been building AI Technology and running operations to identify and mitigate harmful content.
  • 30+ years of combined experience in Trust & Safety.
  • Developed classifiers and T&S programs for the biggest social media platforms and online marketplaces in the world.
  • Official partners for US and EU misinformation practices, content regulation & trend analysis.

Our Founders

Tom Siegel | CEO

  • Founded & ran global Trust & Safety team at Google for 14 years.
  • Deep expertise in T&S Leadership, Business, Technical & Operational Systems.

Shankar Ponnekanti | CTO

Ran the Video Brand Safety Division at YouTube for 15 years.
  • Created some of Google’s earliest foundational AI models.
  • Expert in automation engineering.

Benji Loney | CPO

  • T&S leader at TikTok, Reddit and YouTube.
  • Built global T&S organizations (policy, ops, product).
  • Expert in scaling T&S in hyper-growth platforms.


Looking for a smarter content moderation solution?

Join our Beta list.