Jun 11 2021

4 signs your platform should be proactively detecting child sexual abuse material

Post By: Safer / 6 min read


In 2019, nearly 70 million files of child sexual abuse material, or CSAM, were reported to the National Center for Missing & Exploited Children (NCMEC), and unfortunately this exponentially growing volume of CSAM continues to spread across nearly every form of digital media.

It’s not just file-sharing sites. It’s not just social media. It’s not just image hosting services. It is everywhere online, on just about every platform.

But by taking a proactive approach to CSAM detection and elimination, tech companies can remove this material from circulation before it does additional harm, efficiently and effectively. The technology exists to streamline the process of identifying, reviewing and reporting this material, making it possible for platform owners to fight back and make the internet a safer place for everyone.

Readiness isn’t a question of reaching a certain place in your company’s lifecycle. It’s simply a matter of your company’s commitment to digital safety and wellness: taking steps to improve your own corner of the internet while also working to disrupt the cycle of trauma that the continued spread of CSAM creates for child victims, even after the abuse has ended.

Yesterday might be best, but today is certainly better than tomorrow.

This isn’t legal advice, nor an exhaustive list; it’s simply some observations we have gathered through our work with online platforms of all shapes and sizes. If you’ve ever wondered how to determine your company’s readiness for investing in proactive CSAM detection measures, it might be simpler than you think.

If any of these signals apply to your platform, it’s worth taking a look at your company’s policies and procedures for handling CSAM. It might also be time to consider proactive CSAM detection:


1. You host shareable user-generated content.

CSAM isn’t just a concern for social media, messaging or file-sharing platforms, but truly any site with an upload button. That’s because user-generated content lives in various forms across the web, is hosted and shared in many different ways, and platforms often cannot control what visual content is uploaded to their servers or when users access it. As a result, a wide range of services are at risk: newsgroups, bulletin boards, peer-to-peer networks, online gaming sites and more.

Any platform that allows users to directly connect with one another is a potential host for CSAM, particularly when those features can be exploited by bad actors.

This risk is accelerating in the post-pandemic world. After more than a year of spending our lives online, millions of people have become accustomed to interacting in digital spaces, connecting with new people, sharing content and communicating in ways that were less common before 2020. Online video and image content is everywhere now, and so too is the threat of CSAM on these emerging platforms – no platform is immune.


2. CSAM has been previously reported on your platform.

With CSAM, a single instance on your platform isn’t an isolated case but rather symptomatic of a larger issue.

That’s why platforms should be thinking holistically about the cycle of CSAM that lives on the internet and the role that they play in the ecosystem. No single platform that hosts content exists in a vacuum. Simply flagging and removing a single piece of CSAM isn’t the end of the issue but rather the beginning of a process of proactively addressing the larger problem. Even if you’ve reported 500 files to NCMEC, all that likely means is that you or your users were able to find 500 instances of it. It’s extremely likely that there is much more to be found.

Think about what happens in the time between CSAM being uploaded to your platform and your team removing it. Without proactive detection measures in place, an instance of CSAM would likely come to your attention through an end-user report. How long was that file hosted and viewable by other community members before it was discovered? Who else was exposed to it but didn’t report it?
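One way to close that window is to screen every upload before it becomes visible, rather than waiting for a user report. The sketch below illustrates the idea only; every name in it (`handle_upload`, `hash_fn`, `review_queue`) is a hypothetical stand-in for a platform's real storage, matching, and moderation services.

```python
# Illustrative proactive-screening flow at upload time. All names are
# hypothetical; a real system would use a vetted hash-matching service
# and a human review queue before any removal or report is made.

def handle_upload(file_bytes, known_hashes, hash_fn, review_queue, publish):
    """Screen an upload before it is ever viewable by other users."""
    digest = hash_fn(file_bytes)
    if digest in known_hashes:
        # Quarantine instead of publishing: a moderator confirms the
        # match before the platform reports and removes the file.
        review_queue.append(digest)
        return "quarantined"
    # No match against known material, so the content goes live.
    publish(file_bytes)
    return "published"
```

The key design point is ordering: the match check happens before `publish`, so a hit is never exposed to other community members at all.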


3. Your Trust & Safety team is inundated with work.

The work of Trust & Safety can be highly reactive, making it difficult to create mitigation plans for events that haven’t happened yet. Reviewing all flagged content for potential CSAM is taxing and time-consuming, bogging down team efficiency and delaying action on urgent policy violations, which can open the organization up to new risks. Duplicate reports, false positives, and other moderation burdens can consume time that would be better spent on escalated cases that require human review.

Not every Trust & Safety task calls for human oversight. If your team is overwhelmed with various projects, there is a good chance that automating repetitive tasks, or augmenting them with technology, would free up time for higher-level work that requires more insight. For instance, implementing CSAM detection techniques such as perceptual hashing algorithms and machine learning classification models can help flag potential CSAM and prioritize critical content, as well as end-user accounts, for review, removal, and reporting.
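To make the hashing half of that concrete, here is a minimal sketch of a perceptual hash and hash-list matching. It is illustrative only: production systems use robust, purpose-built hashes (such as PhotoDNA or PDQ) and vetted hash lists from organizations like NCMEC, and the function names here are our own.

```python
# Minimal average-hash sketch. Similar images produce hashes that
# differ in only a few bits, so matching is done by Hamming distance
# against a list of known hashes rather than by exact equality.

def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid.

    `pixels` is an 8x8 list of brightness values (0-255). Each bit is
    1 when the pixel is brighter than the image's mean brightness.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Count the bit positions where two hashes differ."""
    return bin(a ^ b).count("1")

def matches_known_hashes(candidate, known_hashes, max_distance=5):
    """Flag content for human review if it is near any known hash."""
    return any(hamming_distance(candidate, h) <= max_distance
               for h in known_hashes)
```

Because the hash tolerates small pixel-level changes, re-encoded or lightly edited copies of a known file still land within a few bits of the original hash, which is what lets proactive detection catch re-uploads that exact checksums would miss.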


4. You’ve prioritized digital safety and wellness and are looking for ways to enforce your policies.

Trust & Safety has emerged as a strategic differentiator for many companies. After defining your platform's policies, you need to put practices in place to enforce those policies.

The truth is, just about anyone can create a digital platform business these days. Every day we’re seeing new social media and messaging apps popping up, not to mention new file sharing services and communities where CSAM can potentially take hold. Down the road, the differentiator between these upstart companies and their established, market-leading competitors is going to come down to policy enforcement. Is safety really something that is important to you? How are you addressing it? How are you doing it better than your competition?

This includes maintaining a safe environment for your employees as well. Content moderators have a very difficult job reviewing this content, alongside flagged content across various other abuse types, and it can be very traumatic. How can you leverage technology to mitigate that exposure? How can you make their jobs easier and more sustainable on a personal level?


A safer internet for everyone

The scale of CSAM on the internet is no longer a secret and the challenges facing many digital platforms are clear. Tech companies are uniquely positioned to have a significant impact in the fight against CSAM. By implementing proactive detection and reporting, digital platforms can contribute to lasting change in the ecosystem, not only making the internet a safer place for everyone but helping the community remain an open and welcoming place for all users. In some cases, this means contributing directly to the removal of children from harm.

Yes, this problem is massive and almost overwhelming. No, it isn’t going away on its own. But understanding that it’s platform-agnostic and touches all corners of the web is a powerful first step in addressing the threat. When the many digital platforms facing CSAM combine their efforts, it becomes possible to eradicate it from the internet.

________________________

Safer, built by Thorn, provides online platforms with solutions to quickly identify, remove, and report child sexual abuse material at scale. In support of Thorn’s dedication to increasing access to this technology, Safer has recently been listed on the AWS Marketplace with a 30% discount on the total first-year contract for those who sign by Q3 2021.


START PROTECTING YOUR PLATFORM

________________________
