Expert-backed solutions for CSE text and CSAM detection

Safer by the Numbers

  • 211B+ Files processed
  • 76.6M+ Hashes of verified CSAM in database
  • 2.4M+ CSAM files matched
  • 3.6M+ Files classified as potential CSAM

Maximize detection of known CSAM

Achieve broad coverage for known CSAM detection through Safer Match’s multiple hashing methods and an expansive database of verified-CSAM hashes.

  • Find exact matches with cryptographic hashes and slightly altered images with our proprietary perceptual hashing technology (a brief sketch of both approaches follows this list).

  • Match against a database aggregating 68.8M+ verified-CSAM hashes from trusted sources.

  • Configure detection to suit your workflows. You decide what to escalate and submit to reporting entities.

  • Optimize perceptual hashing with a self-sustaining feedback loop that improves match quality and accuracy over time.

  • Use fewer engineering resources to get started with CSAM detection via API. A self-hosted version of Safer Match is available with the Safer Enterprise package.
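
To make the two hashing methods concrete, here is a minimal sketch using Python's hashlib for exact cryptographic matching and the open-source ImageHash library's pHash as a stand-in for Safer Match's proprietary perceptual hashing. The hash lists and distance threshold are illustrative assumptions, not Safer's actual data or values.

```python
import hashlib

import imagehash       # pip install ImageHash
from PIL import Image  # pip install Pillow


def sha256_hex(path: str) -> str:
    """Cryptographic hash: byte-exact, so any alteration changes it entirely."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def perceptual_hash(path: str) -> imagehash.ImageHash:
    """Perceptual hash: visually similar images produce nearby hashes."""
    return imagehash.phash(Image.open(path))


# Hypothetical hash lists standing in for a verified-CSAM hash database.
known_exact_hashes: set[str] = set()
known_perceptual_hashes: list[imagehash.ImageHash] = []


def is_match(path: str, max_distance: int = 8) -> bool:
    if sha256_hex(path) in known_exact_hashes:  # exact match
        return True
    ph = perceptual_hash(path)
    # Hamming distance tolerates small edits (resizing, re-encoding, crops).
    return any(ph - known <= max_distance for known in known_perceptual_hashes)
```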

Hash Matching Info Sheet

Unleash predictive AI to identify novel CSAM

The Safer Predict CSAM classifier identifies potentially novel image and video CSAM by analyzing thousands of attributes to predict the likelihood that content contains CSAM.

Use the CSAM classifier to:

  • Customize your workflows to develop a strategic detection plan, home in on high-risk accounts, and expand child sexual abuse detection coverage.

  • Prioritize and escalate content by setting a precision level and acting on the returned label (see the sketch after this list).

  • Find actionable content in your user report queues, saving your moderation team time.
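
As a rough illustration of how precision levels and labels might drive escalation, the following sketch calls a hypothetical classification endpoint and routes results by label and score. The URL, request and response fields, and threshold are assumptions for illustration, not Safer Predict's actual API.

```python
import requests

# Placeholder endpoint and field names -- not Safer Predict's actual API.
CLASSIFY_URL = "https://api.example.com/v1/classify"


def classify_image(image_url: str, api_key: str) -> dict:
    resp = requests.post(
        CLASSIFY_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"url": image_url},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # assumed shape: {"label": "csam", "score": 0.97}


def route(result: dict, precision_threshold: float = 0.9) -> str:
    """Escalate only predictions at or above the configured precision level."""
    if result["label"] == "csam" and result["score"] >= precision_threshold:
        return "escalate"       # priority review queue
    if result["label"] == "pornography":
        return "policy_review"  # enforce adult-content policy
    return "no_action"          # benign
```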

CSAM Classifier Info Sheet

Features

  • Trusted training data

    Our CSAM classification models were trained using data from trusted sources, in part using data from the National Center for Missing & Exploited Children (NCMEC) CyberTipline.

  • Configurable to uphold your policies

    Use the classification labels (CSAM, pornography, benign) and detection precision level to support your team in policy enforcement and escalation.

  • Deployment options

    Safer Predict offers flexibility and control as you scale your CSAM detection, whether you choose a self-hosted (integrated into your infrastructure) or Thorn-hosted (API-based) solution.

Mitigate text-based child sexual exploitation

Empower your trust and safety team to proactively combat the misuse of your platform for child sexual exploitation (CSE). Access predictive AI technology to identify text-based interactions that contain or could lead to CSE. The Safer Predict text classifier analyzes messages line by line and at the conversation level, assigning a risk score for each of the following classification labels (a brief sketch follows the list):

  • CSAM: Messages related to, asking for, transacting in, and sharing CSAM.
  • Child access: Messages discussing offline access to and harm of children.
  • Sextortion: Messages related to sextortion activities.
  • Self-generated CSAM: Requests for and discussions about self-generated content (“nudes” from a minor).
  • CSA discussion: Messages discussing the topic of child sexual abuse but where a minor is not present (fantasy and role playing, discussing the societal issue or news headlines, etc.).
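
The sketch below shows how per-label risk scores might be consumed to surface conversations for review. The response shape, label keys, and thresholds are assumptions for illustration, not the classifier's actual output format.

```python
# Assumed response shape: one risk score per label for each message, plus
# a conversation-level aggregate. All values below are illustrative.
example_response = {
    "conversation_score": 0.82,
    "messages": [
        {"text": "...", "scores": {"sextortion": 0.91, "csam": 0.05}},
    ],
}

# Per-label thresholds; stricter (lower) for imminent-harm categories.
thresholds: dict[str, float] = {
    "csam": 0.8,
    "child_access": 0.7,
    "sextortion": 0.7,
    "self_generated_csam": 0.8,
    "csa_discussion": 0.95,
}


def flagged_labels(response: dict, thresholds: dict[str, float]) -> list[str]:
    """Return every label whose score crosses its threshold on any message,
    so reviewers can see why a conversation was surfaced."""
    flagged = set()
    for message in response["messages"]:
        for label, score in message["scores"].items():
            if score >= thresholds.get(label, 1.0):
                flagged.add(label)
    return sorted(flagged)


print(flagged_labels(example_response, thresholds))  # ['sextortion']
```
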
CSE Text Classifier Info Sheet

Features

  • Contextual nuance

    Safer Predict’s language models understand complex language patterns that indicate harmful behavior and discern contextual nuances — line-by-line and at the conversation level.

  • Trained on trusted data

    Our text classifier was trained on real conversations that included various forms of harm, validated by trust and safety professionals, so it performs well without additional training.

  • Flexible configuration

    Use the multiple classification labels to configure the text classifier to support your policy enforcement and escalation, empowering your team to quickly target abuse and risky interactions.

Detect, review, and report CSAM with Safer Enterprise

Safer Enterprise is an all-in-one CSAM detection solution with a secure, self-hosted deployment. This package includes (see the sketch after this list):

  • Safer Match for known CSAM detection.
  • Safer Predict CSAM classifier to identify potentially novel CSAM.
  • A content moderation UI with wellness features built in.
  • A reporting service that enables you to connect to your NCMEC or RCMP account for streamlined workflows.
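
Here is a minimal sketch, assuming hypothetical client objects for each component, of how the package's pieces might compose into a single detection-to-reporting pipeline; none of the object names or thresholds below come from Safer's actual interfaces.

```python
def process_upload(file_path, matcher, classifier, review_queue, reporter):
    """Compose the package components; all four objects are hypothetical
    stand-ins for Safer Match, Safer Predict, the moderation UI queue,
    and the NCMEC/RCMP reporting service."""
    # 1. Safer Match: compare against the verified-CSAM hash database.
    if matcher.is_known_csam(file_path):
        case = review_queue.add(file_path, reason="hash_match")
    # 2. Safer Predict: score unmatched files for potentially novel CSAM.
    elif classifier.score(file_path) >= 0.9:  # illustrative threshold
        case = review_queue.add(file_path, reason="classifier_hit")
    else:
        return  # no detection signal; nothing to review

    # 3. Human review happens in the content moderation UI; confirmed
    # cases are then submitted through the connected NCMEC or RCMP account.
    if review_queue.confirmed(case):
        reporter.submit(case)
```
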
Safer Enterprise Info Sheet

Features

  • Comprehensive CSAM detection

    Safer Enterprise combines two equally important technologies — hash matching and predictive AI — to detect both known and novel CSAM in image and video content.

  • Privacy-forward

    With a self-hosted deployment, you share only limited data that cannot be used to directly identify a specific individual.

  • Informed by issue expertise

    Every aspect of Safer Enterprise was informed by Thorn's issue expertise, from our comprehensive CSAM detection to our wellness-centered content moderation interface.

“Thorn makes it simple for businesses to set up and operate a robust child safety program. Their Safer tools are designed with flexibility in mind, and Thorn has provided excellent support to our product and engineering teams to ensure our implementation of these tools fits the unique context of our platform. Slack has long relied on Thorn to help keep our services safe in a responsible and privacy-protective way.”
Risa Stein, Director of Product Management, Integrity at Slack

On-demand demo

Learn how our trust and safety solutions can be tailored to your challenges

Our child sexual abuse and exploitation solutions are powered by original research, trusted data, and proprietary technology. Let’s build a safer internet together. Your next step starts here.