Optimize CSAM Detection with SaferList

Improve CSAM detection on your platform with self-managed hash lists

In addition to our Matching Service, which contains millions of hashes of known child sexual abuse material (CSAM), Safer also provides self-managed hash lists that each customer maintains internally (and can even opt to share with the Safer community). When you join Safer, we create a group of these self-managed hash lists for you, called SaferList.

Right from the start, your queries are set to match against these lists. All you have to do is add hashes to your lists. (Pro tip: If you’re using the Review Tool, it’s a simple button click to add hashes to SaferList.)

Each customer will have access to self-managed hash lists to address these content types:

  • CSAM - Hashes of content your team has verified as novel CSAM.
  • Sexually exploitative (SE) content - Content that may not meet the legal definition of CSAM but is nonetheless sexually exploitative of children.

There are unique benefits to using each of these lists. Additionally, every Safer customer can choose to share their list with the Safer community (either openly or anonymously).
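
If you are integrating programmatically, adding an entry to one of these lists comes down to hashing the verified content and recording that hash against the appropriate list. The sketch below is a minimal, illustrative stand-in: the SelfManagedHashList class, its methods, and the use of a plain MD5 digest are assumptions made for the example, not Safer’s actual API or hash types.

```python
# Illustrative sketch only: SelfManagedHashList and md5_hex are hypothetical
# stand-ins, not Safer's actual API. A real integration would use the hash
# types and client provided by Safer.
import hashlib


def md5_hex(content: bytes) -> str:
    """Exact-match hash of the file bytes (MD5 chosen purely for illustration)."""
    return hashlib.md5(content).hexdigest()


class SelfManagedHashList:
    """A minimal in-memory model of a self-managed hash list (CSAM or SE)."""

    def __init__(self, name: str) -> None:
        self.name = name
        self._hashes: set[str] = set()

    def add(self, content_hash: str) -> None:
        # Record a hash your team has verified; future uploads of identical
        # content will now match against this list.
        self._hashes.add(content_hash)

    def contains(self, content_hash: str) -> bool:
        return content_hash in self._hashes


# One list per content type, mirroring the two SaferLists described above.
csam_list = SelfManagedHashList("csam-saferlist")
se_list = SelfManagedHashList("se-saferlist")

# Example: your team verifies a file as novel CSAM and adds its hash.
verified_bytes = b"...bytes of the verified file..."
csam_list.add(md5_hex(verified_bytes))
```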

Mitigate re-uploads with your CSAM SaferList

With the established CSAM reporting process, there is a delay between when you submit novel CSAM to NCMEC and when you begin to match against that content. That gap makes it possible for verified CSAM to be re-uploaded to your platform undetected. SaferList can help combat this.

When you report CSAM to NCMEC, the content goes through an important but lengthy review process to confirm that it is indeed CSAM. Only after that process is complete does NCMEC add the hash to its list of known CSAM. Until then, programmatic detection against that list won’t flag the content if it is uploaded again. In other words, your platform is vulnerable to that content until you can match against it. SaferList helps patch that vulnerability: you can add the hash to your own list as soon as your team verifies the content.
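
One way to picture how your CSAM SaferList closes that window: at the moment your team verifies novel CSAM, the hash goes onto your own list in the same step as the NCMEC report, and every incoming upload is checked against both the shared known-CSAM hashes and your list. This continues the hypothetical sketch above; report_to_ncmec is a placeholder for whatever reporting integration you already have, not a real function.

```python
# Continues the hypothetical sketch above; none of these names are Safer's real API.

def report_to_ncmec(content: bytes) -> None:
    """Placeholder for your existing CyberTipline reporting flow."""
    ...


def handle_verified_csam(content: bytes, csam_list: SelfManagedHashList) -> None:
    """When moderators confirm novel CSAM, report it and add its hash locally."""
    content_hash = md5_hex(content)
    report_to_ncmec(content)      # NCMEC review may take a while to complete
    csam_list.add(content_hash)   # meanwhile, re-uploads match immediately


def is_known_csam(content: bytes,
                  industry_hashes: set[str],
                  csam_list: SelfManagedHashList) -> bool:
    """Check an upload against the shared known-CSAM hashes and your own list."""
    content_hash = md5_hex(content)
    return content_hash in industry_hashes or csam_list.contains(content_hash)
```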

Use your SE SaferList to support enforcement of your platform’s policies

Different platforms will have different policies about sexually exploitative content. Your Trust and Safety policies likely define what constitutes SE content and how your platform intends to enforce the policies related to it.

Generally speaking, SE content is content that may not meet the legal definition of CSAM but is nonetheless sexually exploitative of children. Although not CSAM, the content may be associated with a known series of CSAM or otherwise used in a nefarious manner. This content should not be reported to NCMEC, but you will likely want to remove it from your platform.

Once identified, you can add hashes of SE content to your SE SaferList, then match against them to detect other instances of that content and help enforce your community guidelines.
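
The list that produces a match can then drive the enforcement action: a hit on your CSAM list means remove and report, while a hit on your SE list means enforce your own policies without an NCMEC report. The routing below continues the hypothetical sketch; the action names are examples of policy decisions, not prescriptions.

```python
# Continues the hypothetical sketch; the enforcement actions are examples only.

def route_upload(content: bytes,
                 csam_list: SelfManagedHashList,
                 se_list: SelfManagedHashList) -> str:
    """Choose an action based on which self-managed list the upload matches."""
    content_hash = md5_hex(content)
    if csam_list.contains(content_hash):
        return "remove_and_report"   # verified CSAM: remove and report to NCMEC
    if se_list.contains(content_hash):
        return "remove_per_policy"   # SE content: enforce your community guidelines
    return "no_match"                # continue with normal moderation
```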

The power of community

To eliminate CSAM from the web, we believe a focused and coordinated approach will be most effective. That’s why we created SaferList: so that platforms can choose to share CSAM data with the Safer community, either openly or anonymously.

You can also choose to match against SaferLists shared by other Safer users. The more platforms that use and share their SaferLists, the faster we break down data silos and the quicker we diminish the viral spread of CSAM on the open web.