Artificial intelligence is opening the door to a disturbing trend of people creating realistic images of children in sexual settings, which could increase the number of cases of sex crimes against kids in real life, experts warn.
AI platforms that can mimic human conversation or create realistic images exploded in popularity from late last year into 2023 following the release of the chatbot ChatGPT, which served as a watershed moment for the use of artificial intelligence. While people across the world were drawn to the technology for work or school tasks, others have embraced the platforms for more nefarious purposes.
The National Crime Agency, the UK’s lead agency for combating organized crime, warned this week that the proliferation of machine-generated explicit images of children is having a “radicalizing” effect, “normalizing” pedophilia and disturbing behavior against kids.
“We assess that the viewing of these images – whether real or AI-generated – materially increases the risk of offenders moving on to sexually abusing children themselves,” the NCA’s director general, Graeme Biggar, said in a recent report.
The agency estimates that up to 830,000 adults, or 1.6% of the UK’s adult population, pose some type of sexual threat to children. That figure is ten times the size of the UK’s prison population, according to Biggar.
The majority of child sexual abuse cases involve viewing explicit images, according to Biggar, and with the help of AI, creating and viewing sexual images could “normalize” abusing children in the real world.
“[The estimated figures] partly reflect a better understanding of a threat that has historically been underestimated, and partly a real increase caused by the radicalising effect of the internet, where the widespread availability of videos and images of children being abused and raped, and groups sharing and discussing the images, has normalised such behaviour,” Biggar said.
Stateside, a similar explosion of using AI to create sexual images of children is unfolding.
“Children’s images, including the content of known victims, are being repurposed for this really evil output,” Rebecca Portnoff, the director of data science at a nonprofit that works to protect kids, Thorn, told the Washington Post last month.
“Victim identification is already a needle-in-a-haystack problem, where law enforcement is trying to find a child in harm’s way,” she said. “The ease of using these tools is a significant shift, as well as the realism. It just makes everything more of a challenge.”
Popular AI sites that can create images based on simple prompts often have community guidelines preventing the creation of disturbing photos.
Such platforms are trained on millions of images from across the internet that serve as building blocks for AI to create convincing depictions of people or locations that don’t actually exist.
Midjourney, for example, calls for PG-13 content that avoids “nudity, sexual organs, fixation on naked breasts, people in showers or on toilets, sexual imagery, fetishes.” DALL-E, OpenAI’s image-creation platform, is stricter still, allowing only G-rated content and prohibiting images that show “nudity, sexual acts, sexual services, or content otherwise meant to arouse sexual excitement.” According to various reports on AI and sex crimes, however, people with ill intentions discuss workarounds on dark web forums to create disturbing images anyway.
Biggar noted that AI-generated images of children also throw police and law enforcement into a maze of deciphering fake images from those of real victims who need assistance.
“The use of AI for child sexual abuse will make it harder for us to identify real children who need protecting, and further normalise abuse,” the NCA director general said.
AI-generated images can also be used in sextortion scams, with the FBI issuing a warning on the crimes last month.
Deepfakes typically involve using deep-learning AI to edit videos or photos of people to make them look like someone else, and they have been used to harass or extort victims, including children.
“Malicious actors use content manipulation technologies and services to exploit photos and videos—typically captured from an individual’s social media account, open internet, or requested from the victim—into sexually-themed images that appear true-to-life in likeness to a victim, then circulate them on social media, public forums, or pornographic websites,” the FBI said in June.
“Many victims, which have included minors, are unaware their images were copied, manipulated, and circulated until it was brought to their attention by someone else.”