Jun 11 2021

Four signs your platform should be proactively detecting child sexual abuse material

Post by: Safer / 4 min read

In 2021, nearly 85 million files of child sexual abuse material (CSAM) were reported to the National Center for Missing and Exploited Children (NCMEC). As reports of CSAM have grown exponentially, every form of digital media (image, video, animation, etc.) has been used to exploit children.

It’s not just file-sharing sites. It’s not just social media. It’s not just image-hosting services. It is everywhere online, on just about every platform with an upload button.

A Proactive Approach

By taking a proactive approach to CSAM detection and elimination, tech companies can remove this material from circulation efficiently and effectively, before it does additional harm. Leveraging technology to detect this material at scale makes it possible for platforms to fight back and create the internet we all deserve.

Safer makes it possible to incorporate CSAM detection at any stage of your platform’s lifecycle. Our solution is modular so you can use only the services you need, and as your platform grows, our tool can grow with you. Safer can help keep your platform and your community safe from child sexual abuse content.

We work with platforms of all shapes and sizes, at all stages of development and growth. And while no two platforms are the same, CSAM mitigation is a common need. If any of the signs below applies to your platform, it's worth reviewing your company's policies and procedures for handling CSAM and taking a fresh look at your current detection solution.

1. You host user-generated content.

CSAM isn’t just a concern for social media, messaging or file-sharing platforms. It has been detected on sites you may not expect. The one thing they have in common: an upload button. That’s because user-generated content lives in various forms across the web, is hosted and shared in many different ways, and platforms often cannot control what content is uploaded to their servers or when users access it. As a result, a wide range of services are at risk, from newsgroups and bulletin boards to peer-to-peer networks, online gaming sites and more.

Any platform that allows users to directly connect with one another is a potential host for CSAM, particularly when those features can be exploited by bad actors.

This risk is accelerating in the post-pandemic world. We spend much of our lives online, and people have become accustomed to interacting in digital spaces, connecting with new people, sharing content and communicating in ways that were less common just a few years ago. Online video and image content is everywhere now, and so too is the threat of CSAM.

Does your platform have an upload button? If so, let's chat about putting Safer to work for your platform.

2. CSAM has been previously reported on your platform.

With CSAM, a single instance on your platform is most likely not an isolated case but rather an indication of a larger issue. Relying on user reports alone is an unreliable approach to CSAM mitigation. Your brand's reputation and your users' safety are too important to crowdsource their protection.

Think about what could happen between the moment CSAM is uploaded to your platform and the moment your team removes it. How long would that file be hosted and viewable by community members before it was discovered? Who was exposed to it but didn't report it? Without proactive CSAM detection measures in place, your platform is at risk.

3. Your Trust and Safety team is inundated with work.

The work of Trust and Safety can be highly reactive, which makes it difficult to plan mitigations for events that haven't happened yet. Reviewing all flagged content for potential CSAM is difficult and time-consuming: it bogs down the team and delays action on urgent policy violations, exposing the organization to new risks. Duplicate reports, false positives, and other moderation burdens consume valuable time that would be better spent on escalated cases that require human review.

Not every Trust and Safety task calls for human oversight. If your team is overwhelmed, automating repetitive tasks or augmenting them with technology can free up time for higher-level work that requires human insight. With cryptographic hashing, Safer can programmatically identify known CSAM. Safer's perceptual hashing algorithms and machine learning classification models can help flag potential CSAM and prioritize critical content for human review.
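
To make the distinction concrete, here is a minimal sketch in Python of how hash-based matching works in general. It is not Safer's implementation: the KNOWN_CSAM_HASHES set, the function names, and the choice of SHA-256 are illustrative assumptions, and in practice the hash list would come from a vetted source rather than being maintained by the platform itself.

    import hashlib

    # Hypothetical list of digests for known CSAM. In production this
    # would be populated from a vetted source, not hard-coded; it is
    # empty here because real values cannot be shown.
    KNOWN_CSAM_HASHES: set[str] = set()

    def sha256_digest(file_bytes: bytes) -> str:
        # Cryptographic hash: identical bytes always yield the same digest.
        return hashlib.sha256(file_bytes).hexdigest()

    def triage_upload(file_bytes: bytes) -> str:
        # Exact-match check against the known-hash list. A match can be
        # actioned programmatically (removal, reporting) without exposing
        # moderators to the content.
        if sha256_digest(file_bytes) in KNOWN_CSAM_HASHES:
            return "known_match"
        # No exact match: any re-encoding or resizing changes the digest,
        # so the file would next go to perceptual hashing or an ML
        # classifier, with flagged results prioritized for human review.
        return "needs_screening"

The design point the sketch illustrates is why both techniques matter: cryptographic hashing is fast and deterministic but only catches exact copies, while perceptual hashing and classification models are needed to surface altered or previously unseen material for review.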

Want to learn more? Contact us.

4. You’ve prioritized digital safety and wellness and are looking for ways to enforce your policies.

Trust and Safety has emerged as a strategic differentiator for many companies. After defining your platform's policies, you’ll need to put practices in place to enforce those policies.

Each year, new social media and messaging apps pop up, along with new file sharing services and communities where CSAM can potentially take hold. Down the road, the differentiator between these startups and their established, market-leading competitors may be policy enforcement. Users and advertisers expect platforms to be safe and free of abuse content.

A safer internet for everyone

Online platforms are uniquely positioned to have a significant impact in the fight against CSAM. By implementing proactive detection and reporting, content-hosting platforms can contribute to lasting change by helping to make the internet a safer place for everyone.