Improve CSAM detection on your platform with self-managed hash lists
In addition to our Matching Service, which contains millions of hashes of known child sexual abuse material (CSAM), Safer provides self-managed hash lists that each customer can use to build internal lists and, optionally, share with the Safer community. When you join Safer, we create a group of self-managed hash lists for you called SaferList.
Right from the start, your queries are set to match against these lists. All you have to do is add hashes to your lists. (Pro tip: If you’re using the Review Tool, it’s a simple button click to add hashes to SaferList.)
Each customer has access to self-managed hash lists that address three types of content:
- CSAM - Hashes of content your team has verified as CSAM.
- Sexually exploitative (SE) content - Hashes of content that may not meet the legal definition of CSAM but is nonetheless sexually exploitative of children.
- False positives - Hashes that were incorrectly identified as CSAM.
There are unique benefits to using each of these lists. Additionally, every Safer customer can choose to share their list with the Safer community (either openly or anonymously).
Mitigate re-uploads with your CSAM SaferList
With the established CSAM reporting process, there is a delay between when you submit content to NCMEC and when you begin to match against that content. That gap makes it possible for verified CSAM to be re-uploaded to your platform undetected. SaferList can help combat this.
When you report CSAM to NCMEC, the content goes through an important but lengthy review process to confirm that it is indeed CSAM. Only after that process is complete is the hash added to the list of known CSAM. Until then, programmatic detection against that list won't flag the content if it is uploaded again, leaving your platform vulnerable. Adding the hash to your CSAM SaferList closes that gap, because you can begin matching against your own verified hashes immediately.
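As a rough illustration of how that window can be closed, here is a minimal Python sketch. It assumes a simple exact-match workflow; the function names and the in-memory hash set are hypothetical and are not Safer's API.

```python
# Hypothetical sketch (not Safer's API): closing the re-upload window with an
# internal hash list that is updated as soon as content is verified and reported.
import hashlib

# Hashes your team has already verified as CSAM, added at report time rather
# than waiting for industry lists to update.
internal_csam_hashes: set[str] = set()

def hash_file(data: bytes) -> str:
    """Exact-match (cryptographic) hash for brevity; a production system would
    typically use a perceptual hash such as PhotoDNA or pHash."""
    return hashlib.sha256(data).hexdigest()

def on_verified_and_reported(data: bytes) -> None:
    """Call when moderators confirm content is CSAM and report it to NCMEC."""
    internal_csam_hashes.add(hash_file(data))

def is_known_reupload(data: bytes) -> bool:
    """Check a new upload against your own verified hashes immediately."""
    return hash_file(data) in internal_csam_hashes
```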

Use your SE SaferList to support enforcement of your platform’s policies
Different platforms have different policies about sexually exploitative content. Your Trust and Safety policies likely define what constitutes SE content and how your platform intends to enforce the rules related to it.
Generally speaking, SE content is content that may not meet the legal definition of CSAM but is nonetheless sexually exploitative of children. Although not CSAM, the content may be associated with a known series of CSAM or otherwise used in a nefarious manner. This content should not be reported to NCMEC, but you will likely want to remove it from your platform.
Once identified, you can add hashes of SE content to your SE SaferList and match against them to detect other instances of that content.
Improve your detection with your False Positive SaferList
False positives are hashes that were incorrectly identified as CSAM. When using perceptual hashing to identify known CSAM, we assess how visually similar two files are rather than requiring an exact match. This helps detect files that have been altered in some way, such as being cropped. False positives can happen when a benign image is visually similar to the CSAM source image.
Perceptual hashes generate hash values that can be compared for similarity. A small change to a file results in a small change to the hash value. Examples of perceptual hash types are PhotoDNA and pHash.
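To make the idea concrete, here is a minimal comparison using the open-source imagehash library's pHash implementation (PhotoDNA is proprietary and not shown). The file names and the distance threshold are placeholders, not values Safer prescribes.

```python
# Compare two images by perceptual hash (pHash) using Pillow and imagehash.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original.jpg"))
candidate = imagehash.phash(Image.open("candidate.jpg"))  # e.g. a cropped or re-encoded copy

# Subtracting two ImageHash values gives their Hamming distance:
# 0 means identical hashes; small values mean visually similar images.
distance = original - candidate

THRESHOLD = 8  # placeholder; tuned per hash type and tolerance for false positives
print("likely match" if distance <= THRESHOLD else "no match", f"(distance={distance})")
```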
Maintaining your False Positive SaferList reduces how often your content moderators have to review the same false positive. With some filtering in place, you can also keep known false-positive matches from entering your content moderation review queue.
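As a rough sketch of what that filtering might look like, assuming a simple in-memory set of cleared hashes (the names below are illustrative, not Safer's API):

```python
# Hypothetical sketch (not Safer's API): keep known false positives out of the
# moderation queue so each unique false positive is only reviewed once.
known_false_positives: set[str] = set()
review_queue: list[str] = []

def handle_match(content_hash: str) -> None:
    """Route a hash-list match, skipping hashes already cleared as false positives."""
    if content_hash in known_false_positives:
        return  # previously reviewed and found benign; keep it out of the queue
    review_queue.append(content_hash)

def mark_false_positive(content_hash: str) -> None:
    """Call when a moderator confirms that a matched image was benign."""
    known_false_positives.add(content_hash)
```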
The power of community
To eliminate CSAM from the web, we believe a focused and coordinated approach will be most effective. That's why we created SaferList: so that platforms can share CSAM data with the Safer community, either openly or anonymously.
You can also choose to match against SaferLists shared by other Safer users. The more platforms that use and share their SaferLists, the faster we break down data silos and the quicker we diminish the viral spread of CSAM on the open web.