A first-of-its-kind analysis shows three- to six-year-old children being manipulated into sexual activities, including penetrating themselves, bestiality, and sadism or degradation, via webcams and camera devices.

New data from the Internet Watch Foundation (IWF) reveals that thousands of images and videos of three- to six-year-old children who have been groomed, coerced and tricked into sexually abusive acts are now being found on the open internet.

The analysis, published today, shows for the first time how three- to six-year-old children are now being targeted by “opportunistic” internet predators who manipulate them into sexual activities.

The abuse, which analysts have seen ranging from sexual posing and masturbation to sadism, degradation, and even sexual acts with animals, is directed by perpetrators and often recorded without the child’s knowledge.

This so-called “self-generated” child sexual abuse imagery, where a perpetrator is remote from the victim, is then shared far and wide on dedicated child sexual abuse websites.

The IWF, which is the UK’s front line against online child sexual abuse, welcomes Ofcom’s upcoming consultation on the use of automated content classifiers driven by artificial intelligence and machine learning techniques to detect illegal and harmful content, including previously undetected child sexual abuse material. However, it urges companies in and out of scope of the Online Safety Act to introduce these measures immediately, rather than waiting for the regulations to take effect later this year.
