British Tech Firms and Child Safety Officials to Test AI's Capability to Create Exploitation Images

Tech firms and child safety organizations will be granted permission to evaluate whether artificial intelligence tools can generate child abuse material under new UK legislation.

Substantial Increase in AI-Generated Harmful Content

The declaration coincided with findings from a protection watchdog showing that reports of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Structure

Under the changes, the government will permit designated AI developers and child protection organizations to inspect AI models – the underlying technology for chatbots and image generators – and verify they have sufficient protective measures to prevent them from producing depictions of child sexual abuse.

"Ultimately about stopping abuse before it happens," stated the minister for AI and online safety, noting: "Experts, under rigorous protocols, can now identify the danger in AI systems early."

Addressing Regulatory Challenges

The changes have been made because creating and possessing CSAM is against the law, which meant that AI developers and other parties could not generate such images even as part of a testing regime. Previously, authorities could not act until AI-generated CSAM had already been uploaded online.

The new law aims to avert that problem by helping to stop the creation of such material at source.

Legal Framework

The amendments are being introduced by the authorities as modifications to the criminal justice legislation, which is also establishing a prohibition on possessing, producing or sharing AI models designed to create exploitative content.

Real-World Impact

This week, the official visited the London base of a children's helpline and listened to a mock-up call to advisors featuring an account of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.

"When I learn about young people facing blackmail online, it is a cause of extreme anger in me and rightful anger amongst families," he said.

Alarming Statistics

A prominent online safety foundation stated that reports of AI-generated exploitation content – where each reported webpage may contain multiple files – had more than doubled so far this year.

Cases of the most severe content – the gravest form of abuse – rose from 2,621 visual files to 3,086.

  • Girls were overwhelmingly victimized, accounting for 94% of illegal AI images in 2025
  • Portrayals of infants and toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "constitute a crucial step to ensure AI products are secure before they are launched," stated the chief executive of the online safety foundation.

"AI tools have enabled so survivors can be victimised all over again with just a few clicks, giving offenders the ability to create possibly limitless amounts of advanced, lifelike exploitative content," she added. "Content which additionally exploits victims' trauma, and makes children, particularly female children, less safe both online and offline."

Counseling Session Information

Childline also published details of support sessions in which AI was mentioned. AI-related harms discussed in the conversations include:

  • Employing AI to rate weight, physique and looks
  • Chatbots discouraging children from telling trusted adults about abuse
  • Facing harassment online with AI-generated content
  • Online blackmail using AI-faked images

Between April and September this year, the helpline delivered 367 support interactions where AI, conversational AI and related terms were mentioned, four times as many as in the equivalent timeframe last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Virginia Lopez