Safer’s 2024 Impact Report

Safer continues to provide new ways for digital platforms to moderate their online spaces. With purpose-built solutions for detecting child sexual abuse material (CSAM) and exploitation (CSE), Safer empowers technology companies to confront these online harms. In 2024, more tech companies than ever used Safer.

Since launching Safer in 2019, Thorn has continued evolving the technology to address young people's changing behaviors and the online threats they face. 2024 was no different. To empower a full range of customers across multiple industries, Thorn added new capabilities that allow Safer to detect a wider variety of child sexual exploitation.

New in 2024

Safer Predict Text Classifier

New this year, the Safer Predict CSE text classification model lets tech companies add another crucial layer of protection. It uses predictive AI to analyze text line by line, looking for conversational context that could indicate sextortion, attempts to access children, and other exploitation risks.
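
To make the line-by-line idea concrete, here is a minimal sketch of how a platform might route conversation text through such a model. The function names, risk labels, and threshold are hypothetical placeholders, not Safer Predict's actual API.

```python
# Illustrative sketch only -- hypothetical names, not Safer Predict's actual API.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.8  # assumed cutoff; real deployments tune this per platform

@dataclass
class FlaggedLine:
    line_number: int
    text: str
    scores: dict[str, float]

def classify_line(text: str) -> dict[str, float]:
    """Hypothetical stand-in for a predictive CSE text model's per-risk scores."""
    # A production system would call a trained classifier here.
    return {"sextortion": 0.0, "access_to_children": 0.0, "csam_discussion": 0.0}

def flag_conversation(lines: list[str]) -> list[FlaggedLine]:
    """Score each line of a conversation and keep those that need human review."""
    flagged = []
    for number, text in enumerate(lines, start=1):
        scores = classify_line(text)
        if max(scores.values()) >= REVIEW_THRESHOLD:
            flagged.append(FlaggedLine(number, text, scores))
    return flagged
```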

CSAM Classifier Deployment Options

The Safer CSAM classifier was made available as a stand-alone option, giving a broader range of technology companies greater access to the tool. Previously, this AI-enabled solution was only available as part of Safer Enterprise, our comprehensive, self-hosted CSAM detection solution.

Safer’s 2024 Impact

2024 was another banner year for Safer. The launch of a text classifier for detecting text-based exploitation gave technology companies a new angle from which to monitor and address child harm. This broader detection horizon creates more opportunities for content moderation and a clearer picture of child safety risks. With Safer Predict, trust and safety teams have a powerful new tool to identify potential threats, such as discussions about child sexual abuse material, sextortion threats, and other forms of sexual harm to minors.

Offering the CSAM classifier as a stand-alone solution allowed new customers to put the technology to work on their platforms, substantially increasing the quantity of files processed and the number of files detected.

In 2024, Safer processed 112.3 billion files submitted by our customers. This impressive number was fueled by more than a dozen new Safer customers. Today, the Safer community comprises more than 60 platforms, with millions of users sharing vast amounts of content daily. This represents a substantial foundation for the important work of preventing repeated and viral sharing of CSAM online.

Safer detected just under 2,000,000 images and videos of known CSAM in 2024. Safer uses multiple hashing methods to detect known CSAM. These hashes are matched against hash lists compiled from various sources, including the National Center for Missing & Exploited Children (NCMEC), which triple-verifies submitted materials. Hash matching allows Safer to programmatically determine whether a file is previously verified CSAM while avoiding unnecessary exposure of content moderators to harmful content.
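
As an illustration of the hash-matching approach, here is a minimal sketch that checks an uploaded file against a set of verified hashes. It uses a SHA-256 cryptographic hash as a stand-in; Safer's actual hashing methods and list formats are not shown, and production systems also use perceptual hashes so that re-encoded or slightly altered copies still match.

```python
# Minimal sketch of hash matching against a verified hash list; not Safer's API.
import hashlib

# Assumed format: a set of hex digests compiled from vetted hash lists.
KNOWN_CSAM_HASHES: set[str] = set()

def file_sha256(path: str) -> str:
    """Stream the file so large uploads never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_csam(path: str) -> bool:
    """True if the file's hash matches previously verified material."""
    return file_sha256(path) in KNOWN_CSAM_HASHES
```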

In addition to detecting known CSAM, our predictive AI detected more than 2,200,000 files of potential novel CSAM. Safer’s image and video classifiers use machine learning to predict whether new content is likely to be CSAM and flag it for further review. Identifying and verifying novel CSAM allows it to be added to the hash library, accelerating future detection.
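
The sketch below outlines, with hypothetical function names and an assumed threshold, how files that do not match a hash list might be routed through a classifier and, once verified by human reviewers, fed back into that list. It is an assumption-laden outline of the general technique, not Safer's actual pipeline.

```python
# Hypothetical outline of classifier triage and hash-list feedback; not Safer's pipeline.
from typing import Callable, Iterable

CLASSIFIER_THRESHOLD = 0.9  # assumed; balances recall against reviewer workload

def classifier_score(path: str) -> float:
    """Stand-in for an image/video CSAM classifier returning a 0-1 score."""
    return 0.0

def triage(
    files: Iterable[str],
    known_hashes: set[str],
    hash_file: Callable[[str], str],
) -> list[str]:
    """Route hash matches to the existing match workflow; queue likely novel CSAM."""
    review_queue = []
    for path in files:
        if hash_file(path) in known_hashes:
            continue  # known CSAM: handled by the hash-match workflow
        if classifier_score(path) >= CLASSIFIER_THRESHOLD:
            review_queue.append(path)  # potential novel CSAM for moderator review
    return review_queue

def record_verified_csam(path: str, known_hashes: set[str],
                         hash_file: Callable[[str], str]) -> None:
    """After human verification, add the hash so future copies match automatically."""
    known_hashes.add(hash_file(path))
```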

Proactively detecting CSAM enabled one customer, GIPHY, to dramatically reduce the amount of harmful content discovered and reported by users. Since deploying Safer in 2021, GIPHY has detected and deleted 400% more CSAM than in previous years and has received only one confirmed user report of CSAM through its reporting tools.

Altogether, Safer detected more than 4,100,000 files of known or potential CSAM in 2024.

Safer launched a text classifier in 2024 and processed more than 3,000,000 lines of text in just the first year. This capability offers a whole new dimension of detection, helping platforms identify sextortion and other abusive behaviors happening via text or messaging features. In all, almost 3,200 lines of text indicating potential child exploitation were identified, helping content moderators respond to potentially threatening behavior.

Last year was a watershed moment for Safer, with the community almost doubling the all-time total of files processed. Since 2019, Safer has processed 228.8 billion files and 3 million lines of text, resulting in the detection of almost 6.5 million potential CSAM files and nearly 3,200 instances of potential child exploitation. Every file or line of text processed, and every potential match made, helps shape a safer internet for children and platform users.

Build a Safer internet

Curtailing platform misuse and addressing online sexual harms against children requires an “all-hands” approach. Too many platforms still suffer from siloed data, inconsistent practices, and policy gaps that jeopardize effective content moderation. Thorn is here to provide resources and solutions to help trust and safety teams craft effective and cohesive child safety strategies.

Platforms that have organized and aligned their content moderation efforts provide their users with clear policies, their teams with cutting-edge tools, and children the freedom to simply be kids.