Initial reports show schoolchildren in the UK are now using AI to generate indecent images of other children, with experts warning urgent action is needed to help children understand the risks of making this sort of imagery.

The UK Safer Internet Centre (UKSIC) says it has now begun receiving small numbers of reports from schools that children are making, and attempting to make, indecent images of other children using AI image generators.

Teachers are warning that pupils are using the technology to create imagery which legally constitutes child sexual abuse material.

Children may be making this imagery out of curiosity, sexual exploration, or for a range of other reasons, but images can quickly get out of hand and children risk “losing control” of the material, which can then circulate on the open web.

Parents and teachers are urged to help children understand the risks associated with making AI-generated imagery of this sort.

The UK Safer Internet Centre, a child protection organisation made up of the Internet Watch Foundation (IWF), SWGfL, and Childnet, says this imagery can have many harmful effects on children – and warns it could also be used to abuse or blackmail children.

The UKSIC says schools must ensure their filtering and monitoring systems can effectively block illegal material across their school devices to combat this emerging threat.

Imagery of child sexual abuse is illegal in the UK, whether photographic or AI-generated; even cartoon or less realistic depictions are illegal to make, possess, and distribute.

David Wright, Director at UKSIC and CEO at SWGfL, said children may be exploring the potential of AI image generators without fully appreciating the harm they may be causing, or the risks of the imagery being shared elsewhere online.

He said: “We are now getting reports from schools of children using this technology to make, and attempt to make, indecent images of other children.

“This technology has enormous potential for good, but the reports we are seeing should not come as a surprise. Young people are not always aware of the seriousness of what they are doing, yet these types of harmful behaviours should be anticipated when new technologies, like AI generators, become more accessible to the public.

“We clearly saw how prevalent sexual harassment and online sexual abuse was from the Ofsted review in 2021, and this was a time before Generative AI technologies.

“Although the case numbers are currently small, we are in the foothills and need to see steps being taken now, before schools become overwhelmed and the problem grows. An increase in criminal content being made in schools is something we never want to see, and interventions must be made urgently to prevent this from spreading further.

“We encourage schools to review their filtering and monitoring systems and reach out for support when dealing with incidents and safeguarding matters.”
