
Addressing the risks of nonconsensual image abuse
AI-generated deepfake nudes are accelerating the spread of nonconsensual image abuse, making it easier for bad actors to manipulate and weaponize imagery of children and adults alike. Our latest research at Thorn found that 31% of teens are already familiar with deepfake nudes, and 1 in 8 personally knows someone who has been targeted. These manipulated images can be used for harassment, blackmail, and reputational harm, causing significant emotional distress for victims. As the technology becomes more accessible, trust and safety teams must act swiftly to mitigate harm.
Understanding the Take It Down Act
The Take It Down Act, which recently passed the Senate and is now under consideration in the House of Representatives, represents a significant development in the legal framework addressing nonconsensual intimate imagery. If the bill passes the House and is signed into law by the President, it will have direct implications for platforms hosting user-generated content.
Key provisions for trust and safety teams:
Here are the key components of the Take It Down Act that covered platforms (those that primarily host user-generated content) need to know about and plan for:
- Criminal penalties for individuals who knowingly publish intimate visual depictions of minors (both authentic and AI-generated) with intent to humiliate, harass, or degrade the minor, or to sexually arouse any person. This fills an important gap in current law regarding images of minors that do not meet the legal definition of CSAM but are still harmful.
- Criminal penalties for individuals who knowingly publish intimate visual depictions of adults (both authentic and AI-generated) without the individual’s consent and with intent to cause harm.
- Protections against coercion and sextortion, specifically criminal penalties for offenders who threaten to distribute nonconsensual images.
- A mandatory notice-and-removal process requiring covered platforms to take down nonconsensual content within 48 hours of receiving a victim’s report.
Implementation considerations for platforms
If passed into law, the Take It Down Act will require trust and safety teams to:
- Within one year of the Act’s enactment, establish a process for individuals (or authorized representatives) to notify the platform of nonconsensual visual depictions of themselves and request their removal.
- Strengthen response mechanisms for the rapid identification and removal of nonconsensual intimate visual depictions, including deepfake content, within 48 hours of receiving removal requests.
Trust and safety teams can proactively prepare for potential compliance requirements by:
- Evaluating current processes: Assess existing reporting mechanisms and removal workflows against the 48-hour requirement (see the sketch after this list)
- Developing technical capabilities: Strengthen technical capabilities to identify both authentic and AI-generated nonconsensual intimate content
- Creating response protocols: Establish clear procedures for handling reports and documenting compliance with removal requirements
- Training staff: Ensure team members understand the nuances of the new legislation and can properly identify content that falls under its scope
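
To make the 48-hour requirement concrete, here is a minimal illustrative sketch of how a team might track removal requests against that deadline. The `RemovalRequest` record, `REMOVAL_SLA` constant, and `needs_escalation` helper are hypothetical names used for illustration only; they are not part of the Act, nor of any specific platform's tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical SLA window reflecting the Act's 48-hour removal requirement.
REMOVAL_SLA = timedelta(hours=48)

@dataclass
class RemovalRequest:
    """A single victim-submitted takedown report (illustrative fields only)."""
    report_id: str
    content_url: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    resolved_at: datetime | None = None

    @property
    def deadline(self) -> datetime:
        # The clock starts when the report is received.
        return self.received_at + REMOVAL_SLA

    def is_overdue(self, now: datetime | None = None) -> bool:
        """True if the content is still up past the 48-hour window."""
        now = now or datetime.now(timezone.utc)
        return self.resolved_at is None and now > self.deadline

def needs_escalation(req: RemovalRequest,
                     warn_margin: timedelta = timedelta(hours=6)) -> bool:
    """Flag unresolved reports approaching or past the deadline for escalation."""
    now = datetime.now(timezone.utc)
    return req.resolved_at is None and now > req.deadline - warn_margin
```

In practice, a record like this would feed whatever case-management or queueing system a platform already uses; the point is simply that the 48-hour clock starts at receipt of the report and needs to be tracked and documented explicitly.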
Looking ahead: Preparing for the future of AI-generated image abuse
Deepfake technology will continue to evolve, and legal frameworks like the Take It Down Act could represent an important step in addressing this growing challenge. However, proactive platform policies, robust detection tools, and cross-industry collaboration remain critical to mitigating child safety risks and staying ahead of emerging threats.
- Read the full research report on deepfake nudes and youth safety.
- Explore Thorn’s Safety by Design recommendations for generative AI.
Together, we can ensure that platforms remain safe spaces—especially for the most vulnerable users.