Jul 21 2020

Announcing Safer for Detection and Post-Detection of CSAM

Post By: Safer / 3 min read

Product Updates
Blog

Safer has officially been released as a comprehensive third-party solution that content-hosting platforms can use to identify, remove, and report CSAM. This is one tool in Thorn’s mission to eliminate CSAM from the internet and create a world where every child can be safe, curious, and happy.

As a product, Safer provides detection and post-detection capabilities so platforms can quickly and securely identify, remove and report CSAM.

The ability to process content in real time and flag it for action comes from Safer’s detection services, which include:

  • Image Hash Matching: Our flagship service, which generates cryptographic and perceptual hashes for images and compares those hashes to known CSAM hashes. At the time of publishing this update, our database includes 5.9M hashes. Hashing happens in the client’s infrastructure to maintain user privacy. (A minimal illustration of this flow follows this list.)
  • CSAM Image Classifier: A machine learning classification model developed by Thorn and leveraged within Safer that returns a prediction for whether a file is CSAM. The classifier has been trained on datasets totaling hundreds of thousands of images, including adult pornography, CSAM, and various benign imagery, and can aid in the identification of potentially new and unknown CSAM. (An illustrative sketch of acting on the classifier’s prediction also follows this list.)
  • Video Hash Matching: Service that generates cryptographic and perceptual hashes for video scenes and compares them to hashes representing scenes of suspected CSAM. At the time of publishing this update, our database includes over 650k hashes of suspected CSAM scenes.
  • SaferList for Detection: Service that lets Safer customers broaden their detection efforts by matching against hash sets contributed by other members of the Safer community. Customers can customize which hash sets they would like to include.
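
For readers who want a concrete picture of hash matching, here is a minimal sketch of the hash-and-compare flow described above. It is an illustration only, not Safer’s implementation: the choice of SHA-256, the `imagehash` library, and the Hamming-distance threshold are assumptions made for this example.

```python
import hashlib

import imagehash           # perceptual hashing (pip install ImageHash)
from PIL import Image      # image loading (pip install Pillow)


def fingerprint_image(path: str) -> tuple[str, imagehash.ImageHash]:
    """Return a cryptographic hash (exact match) and a perceptual hash (near match)."""
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    phash = imagehash.phash(Image.open(path))
    return sha256, phash


def matches_known_hashes(path, known_sha256, known_phashes, max_distance=8):
    """Flag a file if it matches a known hash exactly or within a Hamming-distance threshold."""
    sha256, phash = fingerprint_image(path)
    if sha256 in known_sha256:          # exact byte-for-byte duplicate
        return True
    # A perceptual match tolerates re-encoding, resizing, and other minor edits.
    return any(phash - known < max_distance for known in known_phashes)
```

In a real deployment the known-hash sets would be supplied by the matching service, and matching would run alongside the upload pipeline so flagged content can be routed to review immediately.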

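The classifier returns a prediction, and each platform decides how to act on it. The hypothetical sketch below assumes the prediction is a probability score and thresholds it to decide whether a file enters the review queue; the endpoint, response field, and threshold value are placeholders, not Safer’s actual API.

```python
import requests

CLASSIFIER_URL = "https://classifier.example.internal/predict"  # placeholder endpoint
REVIEW_THRESHOLD = 0.9  # illustrative value; real deployments tune this carefully


def should_queue_for_review(image_bytes: bytes) -> bool:
    """Send an image to a classification service and decide whether to queue it for human review."""
    response = requests.post(CLASSIFIER_URL, files={"file": image_bytes}, timeout=10)
    response.raise_for_status()
    score = response.json()["csam_probability"]  # assumed response field
    return score >= REVIEW_THRESHOLD
```
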
In addition to proactively detecting CSAM in real time, our priority was to build a tool that supports the entire process from detection to removal and reporting. This allows Trust & Safety teams to work more efficiently and with greater resilience. Our post-detection services include:

  • Safer Review: The review tool provides an interface for content moderation and reporting of CSAM. If a piece of content is suspected to be CSAM, either by hash matching or classification, it is added to the review queue. Given the nature of the content under review, Safer Review incorporates features such as blurring in service of wellness and resilience. The current version supports images only.
  • Reporting and Data Preservation: Service that enables reporting of detected CSAM to NCMEC’s CyberTipline and connects to secure storage to manage preservation requirements. The reporting service can be used with either Safer Review or a company’s own content moderation tool. (An illustrative report payload sketch follows this list.)
  • SaferList for Post-Detection: Safer customers are encouraged to contribute to the SaferList so that the Safer community at large can work together to stop the viral spread of content that is sexually exploitative of children. Customers can build, maintain, and share their own hash sets.
  • False Positive API: API built as a feedback mechanism for Safer customers to easily report false positives found through detection services. This feedback is used to improve Safer services as well as the knowledge of CSAM within the larger ecosystem. (See the feedback sketch after this list.)
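
To make the post-detection flow concrete, here is an illustrative sketch of assembling a minimal report payload before submission and preservation. The fields, payload structure, and retention handling are assumptions for illustration; they do not represent the actual reporting service or NCMEC’s CyberTipline API.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class SuspectedCsamReport:
    file_id: str        # platform's internal identifier for the file
    sha256: str         # cryptographic hash of the reported content
    detected_by: str    # e.g. "hash_match" or "classifier"
    detected_at: str    # UTC timestamp of detection
    reporter: str       # platform submitting the report


def build_report(file_id: str, sha256: str, detected_by: str, reporter: str) -> dict:
    """Assemble a minimal report payload ready to hand to a reporting service."""
    report = SuspectedCsamReport(
        file_id=file_id,
        sha256=sha256,
        detected_by=detected_by,
        detected_at=datetime.now(timezone.utc).isoformat(),
        reporter=reporter,
    )
    return asdict(report)


# The preservation step would then copy the original file and this payload into
# access-controlled storage for the legally required retention period.
print(json.dumps(build_report("file-123", "0" * 64, "hash_match", "example-platform"), indent=2))
```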

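Finally, a false-positive report sent after a moderator’s verdict might look something like the hypothetical call below; the endpoint, authentication scheme, and field names are assumptions, not the documented False Positive API.

```python
import requests

FALSE_POSITIVE_URL = "https://api.example.internal/safer/false-positives"  # placeholder endpoint


def report_false_positive(content_hash: str, match_type: str, api_key: str) -> None:
    """Tell the detection service that a flagged file was reviewed and found to be benign."""
    response = requests.post(
        FALSE_POSITIVE_URL,
        json={"hash": content_hash, "match_type": match_type, "verdict": "false_positive"},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()
```

Feedback like this is how detection improves over time, which is exactly the loop the False Positive API is built to support.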
We’re excited to see Safer in the wild and hope you are too. If you would like to learn how Safer can help protect your platform, contact us at info@getsafer.io. To learn more about our journey and why these features are important to eliminating child sexual abuse material from the internet, read more on our announcement blog.
