There has been a 406% increase in CSAM files reported to the CyberTipline of the National Center for Missing and Exploited Children (NCMEC) over the last six years. In 2023 alone, NCMEC received more than 104 million CSAM files.
Several factors may be contributing to the increase in reports:
- More platforms are deploying tools, such as Safer Match, to detect known CSAM using hashing and matching, which is why we believe an increase in reports is not necessarily a bad thing.
- Online predators are more brazen and are deploying novel technologies, such as chatbots, to scale their enticement. From 2022 to 2023, NCMEC saw a 132% increase in reports of online enticement of children for sexual acts.
- Self-generated CSAM (SG-CSAM) is on the rise. According to Thorn's own research, in 2023, 1 in 4 minors agreed it's normal for people their age to share nudes with each other, and 1 in 7 admitted to having shared their own explicit imagery.
Addressing this issue requires scalable tools to detect both known and unknown CSAM.
Hashing and matching is the core of CSAM detection
Hashing and matching is the foundation of CSAM detection. But it's only a starting point, because it identifies only known CSAM: content that matches hashlists of previously reported and verified material. This is why the size of your CSAM hash database is critical.
Safer Match offers the largest database of known CSAM hashes: more than 57.3 million and growing. The solution also enables our customers to share hashlists with one another, further expanding the corpus of known CSAM and casting a wider net for detection.
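To make the matching step concrete, here is a minimal sketch in Python of how a platform might check an uploaded file against a hashlist. It assumes an exact-match cryptographic hash and a locally loaded hashlist; the function names and file paths are hypothetical and this is not Safer Match's actual API.

```python
import hashlib
from pathlib import Path

def exact_hash(path: Path) -> str:
    """Compute a cryptographic (exact-match) hash of a file's bytes."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_hashlist(path: Path, known_hashes: set[str]) -> bool:
    """Return True if the file's hash appears in a verified hashlist.

    An exact hash only catches byte-identical copies; production matching
    services typically also use perceptual hashes so that re-encoded or
    lightly edited copies of verified content still match.
    """
    return exact_hash(path) in known_hashes

# Usage sketch (hypothetical file names and downstream step):
# known = set(Path("verified_hashlist.txt").read_text().split())
# if matches_known_hashlist(Path("upload.jpg"), known):
#     send_to_review_and_reporting_queue("upload.jpg")
```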
So how do you find new or unknown CSAM?
That’s where machine learning classification models come into play. Our solution, Safer Predict, uses state-of-the-art AI prediction techniques to detect new and previously unreported CSAM (images and videos), as well as potential text-based child sexual exploitation (CSE): conversations about, or that may lead to, sexual harm against children. Together, CSAM and text-based CSE detection create a powerful combination for combating this nefarious activity on your platform.
What is a classifier exactly?
Classifiers are algorithms that use machine learning to sort data into categories automatically.
How do Safer Predict’s CSAM classifiers work?
Safer Predict's classifier scans a file and assigns it a score indicating the likelihood that the file contains a child sexual abuse image or video. To classify text, it uses language models to examine the context of messages and flag possible CSE, providing risk scores for CSAM, child access, sextortion, self-generated content, and more.
Our customers can set the classifier score threshold at which potential CSAM and text-based CSE is flagged for human review. Once a moderator reviews flagged content and confirms whether or not it is CSAM or CSE, the classifier learns from that decision. This feedback loop continually improves the model, making it even smarter at detecting novel CSAM and conversations around child sexual exploitation.
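As a rough illustration of the threshold-and-review loop described above, the sketch below shows how a platform might route content whose risk scores cross configurable thresholds into a human review queue, and record moderator decisions for later retraining. The score labels, threshold values, and function names are hypothetical; they are not Safer Predict's actual output schema or interface.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Collects items whose risk scores crossed a review threshold."""
    items: list[dict] = field(default_factory=list)

    def add(self, item: dict) -> None:
        self.items.append(item)

def route_by_score(content_id: str,
                   scores: dict[str, float],
                   thresholds: dict[str, float],
                   queue: ReviewQueue) -> bool:
    """Flag content for human review when any risk score meets its threshold."""
    triggered = {label: score for label, score in scores.items()
                 if score >= thresholds.get(label, 1.0)}
    if triggered:
        queue.add({"content_id": content_id, "triggered": triggered})
        return True
    return False

def record_moderator_decision(content_id: str, is_violation: bool,
                              feedback_log: list[dict]) -> None:
    """Store the reviewer's confirmation so it can inform future retraining."""
    feedback_log.append({"content_id": content_id, "confirmed": is_violation})

# Usage sketch with illustrative labels and thresholds:
# queue, feedback = ReviewQueue(), []
# route_by_score("msg-123", {"csam": 0.97, "sextortion": 0.12},
#                {"csam": 0.90, "sextortion": 0.85}, queue)
# record_moderator_decision("msg-123", is_violation=True, feedback_log=feedback)
```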
Our all-in-one solution for CSAM and text-based CSE detection combines advanced AI technology with a self-hosted deployment to detect, review, and report CSAM at scale. In 2023, Safer's CSAM Classifier made a significant impact for our customers, classifying 1,546,097 files as potential CSAM.
How does this technology help your trust and safety team?
Without classifiers, finding new and unknown CSAM often relies on user reports, which typically land in a growing backlog of content awaiting human review. In addition to finding novel CSAM in newly uploaded content and detecting text-based child sexual exploitation, classifiers can be leveraged by trust and safety teams in a variety of ways, such as scanning your backlog or historical files. Our beta partners for text detection discovered a variety of powerful use cases for their trust and safety teams, including proactive investigation and reactive triaging.
Utilizing this technology can help keep your content moderators focused on the high-priority content that presents the greatest risk to your platform and users. To put it in perspective, you would need a team of hundreds of people with limitless hours to achieve what a classifier can do through automation.
Flickr Case Study
Image and video hosting site Flickr uses Safer's CSAM Classifier to find novel CSAM on its platform. Millions of photos and videos are uploaded each day, and the company's Trust and Safety team has prioritized identifying new and previously unknown CSAM. The inherent challenge is that hashing and matching only detects known CSAM, so finding novel content required artificial intelligence. That's where the CSAM Classifier comes in.
As Flickr’s Trust and Safety Manager, Jace Pomales, summarized it, “We don’t have a million bodies to throw at this problem, so having the right tooling is really important to us.”
One recent classifier hit led to the discovery of 2,000 previously unknown images of CSAM. Once the content was reported to NCMEC, law enforcement conducted an investigation, and a child was rescued from active abuse. That's the power of this life-changing technology.
A Coordinated Approach is Key
To eliminate CSAM from the internet, we believe a focused and coordinated approach must be taken. Content-hosting platforms are key partners, and we’re committed to empowering the tech industry with tools and resources to combat child sexual abuse and exploitation at scale. This is about safeguarding our children. It’s also about protecting your platform and your users. With the right tools, we can build a safer internet together.