UK Technology Companies and Child Protection Agencies to Examine AI's Ability to Create Abuse Content

Technology companies and child protection organizations will receive authority to evaluate whether artificial intelligence tools can generate child abuse images under recently introduced British legislation.

Substantial Rise in AI-Generated Illegal Content

The announcement came as a safety monitoring body published findings showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the amendments, the government will permit approved AI companies and child safety groups to examine AI models – the foundational technology behind conversational AI and visual AI tools – to ensure they have adequate protective measures to prevent them from creating images of child exploitation.

"This is ultimately about preventing exploitation before it occurs," said Kanishka Narayan, adding: "Specialists, under strict protocols, can now detect the risk in AI systems early."

Tackling Legal Obstacles

The changes address a legal obstacle: because it is against the law to create and possess CSAM, AI developers and other parties could not generate such images as part of a testing regime. Until now, authorities could act only after AI-generated CSAM had been published online.

This legislation is designed to prevent that problem by enabling the creation of such images to be stopped at source.

Legal Structure

The government is introducing the changes as amendments to criminal justice legislation, which will also prohibit possessing, producing or sharing AI models designed to generate exploitative content.

Practical Consequences

This week, the official visited the London base of a children's helpline and listened to a mock-up conversation with advisers involving an account of AI-based abuse. The interaction depicted a teenager seeking help after being extorted with an explicit AI-generated image of themselves.

"When I hear about children facing extortion online, it causes intense frustration in me and justified anger amongst families," he said.

Alarming Data

A prominent internet monitoring organization stated that instances of AI-generated exploitation material – such as webpages that may contain multiple images – had significantly increased so far this year.

Instances of category A material – the gravest form of abuse – rose from 2,621 items to 3,086.

  • Female children were predominantly targeted, making up 94% of illegal AI depictions in 2025
  • Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "constitute a vital step to guarantee AI products are secure before they are launched," stated the chief executive of the internet monitoring organization.

"Artificial intelligence systems have made it so victims can be targeted all over again with just a few simple actions, giving offenders the ability to create a potentially endless supply of sophisticated, lifelike child sexual abuse material," she added. "Material which further exploits survivors' suffering, and renders young people, especially girls, less safe both online and offline."

Support Interaction Information

The children's helpline also published details of counselling sessions where AI has been mentioned. AI-related risks discussed in the conversations include:

  • Employing AI to evaluate weight, physique and appearance
  • AI assistants discouraging children from consulting safe adults about abuse
  • Being bullied online with AI-generated content
  • Digital blackmail using AI-manipulated pictures

Between April and September this year, the helpline conducted 367 support sessions in which AI, conversational AI and related topics were discussed, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy applications.

Lindsey Scott MD
