AI-CSAM Is Not a ‘Victimless Crime’

Last week, the FBI announced it arrested a man who used AI to generate child sexual abuse imagery. It’s a novel case because it’s one of the first instances of the FBI bringing charges against someone for using AI to create child sexual abuse material (CSAM). Steven Anderegg is accused of using AI to create “thousands of realistic images of prepubescent minors,” taking commissions from others for more images, and sending those images to a child through Instagram. He’s also accused of abusing his son.
Whenever news about AI-generated CSAM breaks, there’s a small but persistent contingent of people who wonder: Are these images even all that bad, given that they aren’t “real” photographs of “real” children? Is it not a “victimless crime?” Could it help pedophiles avoid contacting actual minors?
We can’t believe we have to say this, but this has now come up enough times for us to find it necessary to explain: AI-CSAM is in fact actively harmful to real people and children, and is in no way a “victimless crime.” AI’s ability to produce functionally infinite images powered by datasets containing millions of photographs of real people, including children, and real images of real CSAM, enshrines and perpetuates that abuse in a way that was previously impossible.
“Generative AI is having a clear impact on real people, and in the case of AI child abuse material - on real children,” a spokesperson for the UK law enforcement Online Child Sexual Exploitation and Abuse Covert Intelligence Team (OCCIT) told 404 Media in an email. “OCCIT are aware of AI tools being used at scale by offenders to 'nudify' images of real children and to 'deepfake' images of children into existing still and moving child abuse imagery. AI models trained on CSAM and/or real children are commodities that are highly valued and widely shared in offending communities because they enable the production of realistic AI child abuse material featuring real children.”
A real child does not have to be involved in a depiction of the sexual abuse of minors for it to be federally illegal. In the U.S., the law is extremely strict about this, and includes “computer generated images or pictures, whether made or produced by electronic, mechanical, or other means” in its definition of illegal child obscenity. But some people argue, following stories like Anderegg’s arrest and for years in the form of research and activism, that because there are no real children in the images, they should be treated as less harmful than downloaded photographs, or could even be used as therapeutic tools for people struggling with sexual attraction to minors.
Experts in digital child abuse imagery and forensics say they don’t buy these arguments, and that the explosion of AI-generated CSAM has made the landscape exponentially worse.
💡
Do you know anything else about AI being used for child sexual abuse? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.
There are several groups of people being harmed by AI-generated abuse imagery: real children, the people consuming and creating it, and the investigators and researchers who have to look at these images in order to understand the problem and prevent it from hurting anyone else.
In December, researchers at Stanford found that the LAION-5B machine learning dataset—used by Stable Diffusion and other major AI image generators—contained 3,226 suspected instances of child sexual abuse material, 1,008 of which were externally validated. That means not only that everyone who downloaded the dataset locally might be in possession of child sexual abuse material without realizing it, but also that every image generated with Stable Diffusion or other models trained on LAION-5B was made on a bedrock of real child abuse, even when the outputs have nothing to do with children.
Shortly before these explosive findings were published, which led LAION to take down the 5B dataset entirely, Emanuel investigated the ecosystem of people creating content that could be categorized as CSAM on Civitai, a Stable Diffusion model-sharing and image-generation platform, where users entered prompts like “girl and dog, short girl, pimp, slut, petite girl, potty, vulva, very young, orgasm, nsfw, lascivious, lewd pose, interspecies, zoophilia, sex with dog,” while instructing the AI to make the girl in the image not look “adult, old” or have “big breasts.” There’s a massive demand for systems that will create AI-generated porn, and at the same time, that demand poses a complex problem for the creators of those systems, who want to walk a precarious line between profiting from letting users generate whatever they want and enabling harm.
“Another valued and shared commodity are the image datasets used to create these AI models,” the OCCIT spokesperson said. “Real children's social media accounts are being used as a source of material for these datasets, allowing legitimate and everyday imagery of children to be manipulated and misused in order to create AI child abuse material. There is a very clear link between AI and ‘real’ and it is therefore apparent that AI child abuse imagery is impacting on real children in a very devastating way.”
A “growing number” of child sex offenders are training Stable Diffusion models on datasets of real child sexual abuse material, according to a 2024 OCCIT report about CSAM AI models shared with 404 Media. “The results are AI models that excel in the production of the most graphic abuse materials imaginable, much of which is now indistinguishable from reality to the naked eye,” the report says. “These custom models are trained and distributed online, entirely for the sexual gratification of pedophiles.”
The report goes on to note that “The levels of realism and extremity currently seen within AI CSAM is directly attributable to continuous development of offender created models,” and that “offenders are now capable of producing [AI CSAM] in potentially unlimited quantities.”
Aside from being trained on abuse content, generative AI allows people to churn out tons of realistic images of specific, real children being sexually abused. According to a 2023 report from the Internet Watch Foundation, communities of CSAM perpetrators tend to collect content of specific victims. “Perpetrators have ‘favourite’ victims; share content featuring that victim; and look for more,” the report says. “Now, perpetrators can train a model to generate as many new images of that victim as they like.” Deepfakes that place adults in sexual scenarios are created for the same purpose, and we’ve seen it happening not just in still images or faked video clips, but in computer-generated 3D models of real people.
“The same holds for celebrity children—just as the IWF has for a long time seen many examples of ‘shallowfake’ [material edited with basic, non-AI tools] and deepfake images featuring these well-known individuals, now the IWF is seeing entirely AI-generated images produced using fine-tuned models for these individuals,” the report continues. “An increasing number of AI CSAM shared on dark web forums features known victims and famous children. Many of these are requested by other users—of the type ‘Can you make a model of X’ or ‘can you make images featuring X’ —produced to specification.”
This is what prosecutors alleged Anderegg was doing: taking requests from others to generate new images. It’s also a booming business for bespoke generative AI creators who peddle abusive content generally, not just content depicting minors.
Investigators make an important distinction between types of consumers of CSAM: there are people who only collect or seek out the content, and those who create it. It’s not always the case, but the former sometimes transition into the latter, and AI-generated CSAM can play a part in that.
“It could take a while for an investigator to conclude that it is AI, which means those hours were taken away from actually searching for a real child being abused.”
For people who do cross from collecting or even generating images online to becoming “contact offenders” who abuse minors themselves, illustrated or computer generated CSAM is often used as a grooming tool. “We see it with the use of Simpsons characters, to Dora the Explorer,” Bryce Westlake, an associate professor in the Department of Justice Studies and a faculty member of the department's Forensic Science program, told 404 Media. “These materials are used to groom children. For example ‘Look, Dora the Explorer is doing this thing. She is someone you learn from right? If she is doing it, then surely it is okay for you to do it too.’”
The normalization of child sexual abuse through these images affects minors as well as the people creating them. “At some point, the fantasy and the image/video no longer elicits the same response, therefore the person needs to increase the stimuli,” Westlake said. “This either means more graphic 'fake' material or moving on towards offending against a child in the real world, or via a webcam or something.”
David Thiel, Chief Technologist at the Stanford Internet Observatory and the lead author on the Stanford study that exposed LAION-5B’s potential for containing CSAM, told 404 Media that it will take a while for there to be new research on how AI-CSAM affects this pipeline.
“However, we know a few things about people that use CSAM in general: they report high rates of contact offenses, anxiety they might commit contact offenses, and high rates of seeking out children online,” he said. “Users in communities involved in generating CSAM also commonly indicate having possessed photographic CSAM at some point (this is how they train models to recreate known victims), and while not evidence of causation, CSAM is highly normalized within these communities. Also, in cases where possession of illustrated material has made it to court, offenders are often found to possess photographic CSAM as well.”
For the investigators, researchers, and law enforcement whose job it is to track down child sexual abuse material online and study how it’s created and disseminated, AI-generated content muddies the waters.
“First, it is making it more challenging for investigators to determine whether what they see is a real child, and therefore needs to be investigated, or if it is something that is AI-generated,” Westlake said. “It could take a while for an investigator to conclude that it is AI, which means those hours were taken away from actually searching for a real child being abused.”
And because it’s become so realistic, and there’s no limit to the scenarios people can create, it’s causing increased psychological trauma for investigators, Westlake said. “The content that is being created is far more graphic than what people are doing to actual children (scary what the mind can come up with) and because it is so realistic, these investigators are having to view all this material and not be certain whether it is real or fake.”
“At some point on this timeline, realistic full-motion video content will become commonplace,” the IWF wrote. “The first examples of short AI CSAM videos have already been seen—these are only going to get more realistic and more widespread.”
For example, another OCCIT report shared with 404 Media, this one from 2023 and focused on offenders using AI video production tools including Stability.AI’s Stable Video Diffusion, cites an example of someone on the dark web sharing an AI-generated deepfake CSAM video based on a popular, still-underage actor.
As we reported back in 2020, one of the worst parts about AI-generated porn made with models trained on real sexual abuse is that it makes it much harder to remove that abuse from the internet. Sometimes, that’s because AI image generators make it so easy to create thousands and thousands of images. Other times, the AI-generated CSAM might not look like a specific human being, but is still powered by real images of abuse. In both cases, AI-generated CSAM is far from a “victimless crime.” Quite the opposite: it is a crime that uses cutting-edge technology to extend that abuse in a way that was not humanly possible just a few years ago.
About the author
Sam Cole is writing from the far reaches of the internet, about sexuality, the adult industry, online culture, and AI. She's the author of How Sex Changed the Internet and the Internet Changed Sex.