UK Tech Firms and Child Safety Officials to Test AI's Ability to Create Exploitation Content

Technology companies and child protection agencies will receive authority to evaluate whether artificial intelligence systems can produce child abuse images under new UK laws.

Significant Rise in AI-Generated Harmful Material

The announcement coincided with revelations from a protection monitoring body showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the amendments, designated AI developers and child protection organizations will be permitted to inspect AI models – the technology underlying chatbots and image generators – and verify that they have adequate safeguards in place to prevent them from creating depictions of child exploitation.

"Ultimately about stopping exploitation before it happens," declared Kanishka Narayan, noting: "Experts, under rigorous protocols, can now identify the risk in AI models early."

Tackling Regulatory Obstacles

The amendments have been introduced because it is against the law to create and possess child sexual abuse material (CSAM), meaning that AI developers and others cannot generate such content as part of a testing process. Until now, authorities have had to wait until AI-generated CSAM was published online before addressing it.

This legislation is aimed at averting that problem by helping to halt the production of those materials at source.

Legal Structure

The changes are being introduced as amendments to the Crime and Policing Bill, which also establishes a ban on possessing, creating or distributing AI systems designed to generate child sexual abuse material.

Real-World Consequences

This week, the minister toured Childline's London headquarters and listened to a mock-up of a call to counsellors involving a report of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of themselves, created using AI.

"When I learn about young people experiencing blackmail online, it is a source of intense anger in me and justified anger amongst parents," he stated.

Alarming Data

A prominent internet monitoring foundation reported that cases of AI-generated abuse material – each of which can refer to a web page containing numerous files – had more than doubled so far this year.

The number of category A files – depicting the most severe form of abuse – rose from 2,621 to 3,086.

  • Girls were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
  • Depictions of children ranging from newborns to toddlers increased from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "represent a vital step to guarantee AI tools are safe before they are launched," stated the chief executive of the internet monitoring foundation.

"Artificial intelligence systems have enabled so victims can be victimised repeatedly with just a simple actions, giving offenders the ability to create potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Material which further exploits survivors' suffering, and renders children, particularly female children, less safe both online and offline."

Counselling Session Data

The children's helpline also released details of counselling sessions in which AI was mentioned. AI-related harms raised in those conversations include:

  • Using AI to rate weight, body and appearance
  • Chatbots dissuading young people from speaking to safe adults about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-faked images

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and related terms were discussed, significantly more than in the equivalent period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Gregory Brown