British Tech Companies and Child Protection Officials to Test AI's Capability to Generate Abuse Images

Tech firms and child safety agencies will receive authority to evaluate whether AI systems can generate child abuse material under new British laws.

Substantial Rise in AI-Generated Illegal Material

The announcement coincided with findings from a safety monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the amendments, designated AI developers and child protection groups will be permitted to inspect AI models – the underlying systems behind chatbots and image generators – and verify they have sufficient safeguards to stop them from creating images of child exploitation.

"This is ultimately about preventing abuse before it happens," stated Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now detect the risk in AI systems early."

Tackling Legal Challenges

The amendments address a legal obstacle: because creating and possessing CSAM is against the law, AI developers and other parties have been unable to generate such images even as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.

This law is aimed at averting that issue by helping to halt the production of those images at their origin.

Legislative Structure

The changes are being introduced by the government as revisions to the crime and policing bill, which is also establishing a prohibition on possessing, producing or sharing AI systems developed to generate exploitative content.

Real-World Consequences

This week, the official toured the London headquarters of a children's helpline and listened to a simulated call to counsellors featuring a report of AI-based abuse. The call portrayed a teenager seeking help after being extorted with an explicit deepfake of themselves, created using AI.

"When I hear about children facing extortion online, it causes intense frustration in me and justified anger amongst families," he stated.

Concerning Data

A prominent online safety organization stated that instances of AI-generated abuse content – such as webpages that may contain numerous files – had significantly increased so far this year.

Cases involving the most severe content – the gravest form of exploitation – increased from 2,621 visual files to 3,086.

  • Female children were overwhelmingly targeted, accounting for 94% of prohibited AI depictions in 2025
  • Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "constitute a vital step to ensure AI tools are safe before they are released," stated the head of the internet monitoring organization.

"Artificial intelligence systems have made it possible for survivors to be victimised all over again with just a few clicks, giving offenders the ability to create potentially endless amounts of sophisticated, photorealistic exploitative content," she continued. "Material which further commodifies victims' trauma, and renders children, especially girls, less safe both online and offline."

Support Interaction Information

Childline also published details of counselling interactions in which AI was referenced. AI-related harms mentioned in the conversations include:

  • Using AI to evaluate weight, physique and looks
  • AI assistants discouraging children from consulting trusted adults about abuse
  • Being bullied online with AI-generated content
  • Online extortion using AI-manipulated images

Between April and September this year, Childline conducted 367 support sessions in which AI, chatbots and associated topics were mentioned – significantly more than in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Alex Ward

A tech enthusiast and writer with a passion for exploring cutting-edge innovations and sharing practical advice for everyday users.