2025 Online Child Safety Predictions for Trust and Safety
As we look ahead to 2025, our team at Thorn has identified several key trends that we believe will shape the future of child safety online. Drawing from our research and expertise in defending children from sexual abuse and exploitation, we're sharing our predictions for the coming year.
Companies will feel the regulatory pressure
Companies will face unprecedented scrutiny from regulators worldwide. With new legislation emerging from the EU, the UK, and various U.S. states, platforms will need to navigate complex requirements around child safety or face significant penalties. To address this patchwork of regulations, platforms will face critical decisions about how to develop comprehensive safety measures that protect their global user base.
Platforms will face increased scrutiny on reporting features
The effectiveness of platforms' user reporting mechanisms will draw heightened public attention. We expect more stories to emerge highlighting the limitations of current reporting systems, particularly in high-risk scenarios involving child safety. This will likely spark important discussions about the essential role of user reporting in the larger safety net of resources for children navigating risky situations online, and will underscore the critical need for these systems to be more robust, accessible, and responsive.
Momentum gains around trust and safety professionalization
The trust and safety field will continue its evolution from a back-office function to a distinct professional discipline rich with standards, tooling, and specialized knowledge. These specialists will gain influence as independent, often company-agnostic experts, advocating for ethical, user-centered design that prioritizes safety as a foundational component of the product lifecycle and long-term business success. This change will create greater consistency and durability across the field, a move away from the recent volatility in the trust and safety market.
We’ll see the next frontier of AI-generated threats
The landscape of AI-related threats will continue to expand, driven both by technological advances and increased adoption of existing tools.
Threats we expect to see:
- More sophisticated deepfake capabilities
- Increased access to AI-generated video and audio tools
- Growing concerns about AI-facilitated sextortion
- Rising instances of peer-to-peer misuse of AI tools
Caregivers' knowledge of online safety is put to the test
In 2025, we expect to see an increase in policies and features designed to block access by minors. Managing when (and if) kids can access certain technologies or parts of the internet is an important part of online safety, but the impacts of overly broad prohibitions may surprise some.
We predict caregivers will be caught off guard by increased requests to override age restrictions on new profiles and greenlight features to which their teens previously had access.
For some families, this will prompt important conversations about what risk looks like and how to respond before features are enabled, but for many, these conversations won’t happen at all - either because caregivers aren’t prepared, aren’t available, or simply aren’t interested. These efforts to protect children will often fall short of their intent - leaving many young people to navigate the same risks that existed before the restrictions but now with less support.
Looking ahead
These predictions highlight the complex challenges ahead in protecting children online. At Thorn, we remain committed to staying ahead of these evolving threats and working with partners across technology, law enforcement, and child protection to build a safer digital world for all children—one in which every child is free to simply be a kid.
This article reflects Thorn's research-informed predictions and should not be considered definitive statements about future events.