There’s now a child safety gap in Europe. Here's what that means.
On April 3, 2026, the legal basis that allowed platforms to detect child sexual abuse material (CSAM) in Europe expired.
Without it, online services across the EU can no longer lawfully detect this content proactively. Not because the technology doesn’t exist, but because policymakers failed to reach an agreement in time. Children will now pay the price for that failure.
Here’s what happened, why it matters beyond Europe, and what comes next.
How this happened—the first time
This isn't the first time a legal gap has disrupted CSAM detection in Europe.
In 2021, a privacy regulation inadvertently omitted CSAM from the list of content types platforms could legally scan for.
Many companies chose to continue detecting and accepted the legal risk. Others stopped.
The result was immediate: reports of CSAM from Europe to the National Center for Missing and Exploited Children (NCMEC) dropped by 58% in a single year.
That meant fewer leads for law enforcement, fewer investigations, and fewer opportunities to identify children in active harm.
Lawmakers eventually corrected that mistake with a temporary exemption. That exemption has now expired.
Why lawmakers couldn’t agree
This gap wasn’t inevitable. It’s the result of a deadlock between EU institutions over what detection should cover.
The European Commission and Council supported allowing platforms to detect a broad range of abuse — including new and previously unknown material, AI-generated content, and grooming behavior.
The European Parliament took a narrower approach, limiting detection to already known images of abuse.
Known images represent only part of the problem. The fastest-growing threats today involve new material, coercion, and evolving tactics like AI-generated abuse and online grooming.
Without agreement on scope, the legal basis for detection expired entirely, leaving platforms unable to lawfully detect either known or new material.
This time is different, and likely far worse
Two key factors make this detection gap more serious: how long it may last and how much the threat has evolved.
In 2021, the gap lasted about seven months. This time, there is no quick fix. Any new legislation will likely take 12 to 18 months to negotiate and implement. The result could be a detection gap that lasts two to three times as long.
More troubling, the threat landscape has changed dramatically since 2021. AI-generated child sexual abuse material is rapidly emerging. Offenders are increasingly using generative AI to create realistic abuse content. Thorn’s research shows that 1 in 8 teens report knowing someone targeted with a deepfake image.
At the same time, newly identified abuse material is showing increasing severity, including higher rates of coercion, self-generated imagery, and more extreme and sadistic harm.
Reports of grooming — including text-based exploitation — are also rising. These are precisely the categories of harm that risk going undetected under limited policy approaches.
Unless this material is found, it spreads — prolonging harm, intensifying abuse, and making it harder to identify and protect children.
This is not a story about tech companies’ failure to act
The dominant narrative around online safety often focuses on companies failing to act. This situation is different.
Trust and safety teams at tech companies have spent years building detection systems. Several companies wrote directly to EU lawmakers asking for an extension of the exemption. More than 240 organizations, including child helplines, law enforcement partners, and survivor advocacy groups spanning six continents, formally condemned the failure to act before the deadline.
The detection systems exist. The people responsible for them want to use them. The child safety community can’t operate effectively without them.
What’s missing is the clear legal basis to do so.
Why this matters beyond Europe
This is not just a European issue.
Child sexual abuse is a global crime, and digital platforms operate across borders.
Today, 84% of CyberTipline reports are connected to abuse occurring outside the United States—a clear reflection of how global this crisis has become. More than 1,900 companies now report to NCMEC, including a growing number of international platforms that have voluntarily stepped in to help identify and protect children.
That system only works if detection has a clear legal basis.
Under EU data protection rules, companies must apply European privacy standards to the data of people in the EU, no matter where that data is processed. That means when detection is no longer permitted in Europe, companies may also be forced to limit detection tied to EU users globally, including on systems outside the EU.
In practice, that could reduce detection far beyond Europe’s borders.
A child in the US can be abused and live-streamed to someone in Europe. Images taken in one country are shared across many. NCMEC serves as the global reporting hub, and when reports drop in one region, the impact ripples outward, robbing law enforcement around the world of the leads they need to find children being abused.
What happens now
Thorn is part of a broad coalition, alongside the Internet Watch Foundation, ECPAT International, Missing Children Europe, and many others, calling on EU policymakers to act. Together with more than 240 organizations, we have condemned this failure and are urging EU leaders to pass a permanent, effective legal framework.
The gap is here. The path forward is clear.
EU lawmakers must return to the table and pass legislation that reflects today’s threat landscape, including new material, AI-generated content, and grooming, not just previously identified images.
Every day without a legal framework is another day that legal uncertainty keeps platforms from doing what they are ready and willing to do.
We will continue working with partners across the ecosystem to push for a solution.
Watch Thorn's Director of Policy Emily Slifer break down what's happening and why it matters.