A stark rise in the number of children exposed to violent and degrading pornography online has been revealed in a new report by Children’s Commissioner Dame Rachel de Souza.
The findings paint a deeply troubling picture of just how easily young people are being drawn into harmful content—often without even searching for it.
“This report must act as a line in the sand. The findings set out the extent to which the technology industry will need to change for their platforms to ever keep children safe,” said Dame Rachel.
Startling Exposure at Shockingly Young Ages
- 70% of those surveyed had seen online pornography before turning 18, up from 64% in 2023.
- The average age of first exposure was 13.
- A staggering 27% said they were just 11 years old, while some recalled being as young as six.
The report compiled responses from 1,020 people aged 16 to 21, offering a chilling glimpse into the early and often accidental encounters many children have with explicit material.
Violent and Degrading Content All Too Common
More than half of respondents (58%) said they had seen pornography involving strangulation. Even more disturbing, 44% recalled content depicting rape involving someone who was asleep.
Dame Rachel described the materials as “violent, extreme and degrading” and, in many cases, “illegal”, adding that the research is a “snapshot of what rock bottom looks like.”
Platforms Blamed for Dangerous Algorithms
One of the most alarming takeaways? Most children did not seek out this content. Instead, they were shown it through algorithms and recommendation systems.
- 59% said they stumbled upon porn by accident.
- In 2023, that figure stood at just 38%.
X (formerly Twitter) was named the most common source, with 45% of respondents seeing porn there, compared with 35% on dedicated adult sites, a gap that continues to widen.
“Take, for example, the vast number of children seeing pornography by accident. This tells us how much of the problem is about the design of platforms,” said Dame Rachel.
Misleading Messages About Consent
The exposure isn’t just about graphic content—it’s affecting how young people think. The report highlights troubling beliefs:
- 44% agreed with: “Girls may say no at first, but then can be persuaded to have sex.”
- 33% agreed with: “Some girls are teasers and pretend they don’t want sex when they really do.”
In both cases, young people who had viewed pornography were more likely to support these harmful statements.
With kids as young as six exposed to violent pornography and harmful ideas about consent being normalised, campaigners are demanding urgent action from tech giants and lawmakers.
The research comes shortly after new online safety laws took effect last month. These include age-verification systems aimed at keeping explicit content out of children’s reach.
Dame Rachel welcomed the changes, calling them a “real opportunity” to make children’s safety a non-negotiable priority, but she warned that much more needs to be done.
Separately, a study by the Molly Rose Foundation revealed that harmful content promoting suicide, self-harm, and depression is still being pushed by algorithms on TikTok and Instagram.
The foundation’s chair, Ian Russell, whose daughter Molly died in 2017 after viewing similar content, was blunt: “It is staggering that, eight years after Molly’s death, incredibly harmful suicide, self-harm and depression content like she saw is still pervasive across social media.”
Their research found that teenage accounts engaging with such content were then “bombarded” with similar posts on Instagram Reels and TikTok’s For You page.
Russell criticised the government and Ofcom, saying the regulator’s safety codes were “too weak” and the Prime Minister must act now to enforce stronger, life-saving legislation.
“Where Ofcom have been timid, it is time for him to be strong,” he added.
Social Media Giants Respond
Both Meta (Instagram’s parent company) and TikTok pushed back against the findings.
A Meta spokesperson said: “Tens of millions of teens are now in Instagram Teen Accounts… 99% of harmful content is removed proactively.”
TikTok also defended its safety tools: “Teen accounts have 50+ safety features… 99% of violative content is proactively removed.”
Still, campaigners argue these steps haven’t gone far enough to stem the tide of dangerous material reaching vulnerable young users.
Children are being exposed to deeply damaging content at younger and younger ages, often through no fault of their own. The spotlight is now on tech companies, regulators, and lawmakers to step up and fix the algorithms putting kids in harm’s way.