UK Tech Companies and Child Safety Officials to Examine AI's Capability to Create Abuse Content

Tech firms and child protection organizations will be permitted to evaluate whether artificial intelligence tools can produce child abuse material under recently introduced British legislation.

Substantial Rise in AI-Generated Illegal Material

The announcement coincided with figures from a protection watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the government will allow designated AI companies and child safety organizations to inspect AI models – the underlying technology behind conversational AI and image generators – and verify that they have adequate safeguards to stop them from creating depictions of child exploitation.

"Fundamentally, this is about stopping exploitation before it happens," stated the minister for AI and online safety, adding: "Experts, under strict conditions, can now identify the danger in AI models early."

Addressing Legal Challenges

The amendments have been introduced because creating and possessing CSAM is illegal, meaning that AI developers and others cannot generate such content as part of a testing process. Until now, authorities could not act until AI-generated CSAM had been uploaded online. The legislation is designed to prevent that problem by making it possible to stop the production of such material at its source.

Legislative Structure

The authorities are adding the amendments as modifications to the crime and policing bill, which also establishes a prohibition on possessing, producing or sharing AI systems developed to create exploitative content.

Practical Impact

This week, the official visited the London headquarters of Childline and listened to a simulated call to advisers featuring an account of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.

"When I hear about children experiencing blackmail online, it is a source of extreme frustration for me and rightful concern amongst families," he said.

Alarming Statistics

A prominent internet monitoring foundation reported that cases of AI-generated exploitation content – such as webpages that may contain multiple files – have risen significantly so far this year. Cases of the most severe material – the gravest form of abuse – rose from 2,621 images or videos to 3,086.

- Female children were overwhelmingly targeted, accounting for 94% of prohibited AI images in 2025.
- Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025.

Sector Response

The legislative amendment could "represent a vital step to ensure AI products are secure before they are launched," commented the head of the internet monitoring organization.

"AI tools have made it so that victims can be targeted all over again with just a few simple actions, giving offenders the ability to produce potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Material which further commodifies victims' suffering, and makes young people, particularly girls, less safe both online and offline."

Counselling Interaction Data

Childline also released details of counselling sessions in which AI was mentioned.
AI-related harms mentioned in the conversations include:

- Using AI to evaluate weight, physique and looks
- AI chatbots dissuading young people from talking to trusted adults about harm
- Being bullied online with AI-generated material
- Online blackmail using AI-faked images

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and associated terms were mentioned, four times as many as in the same period last year. Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy applications.