‘I Want That Sweet Baby’: AI-Generated Kids Draw Predators On TikTok And Instagram


Images of AI children on TikTok and Instagram are becoming magnets for many with a sexual interest in minors. But when this content is legal and depicts fake people, it falls into a messy, troubling gray area.

By Alexandra S. Levine, Forbes Staff

The following article contains discussions of disturbing social media content.
The girls in the photos on TikTok and Instagram look like they could be five or six years old. On the older end, not quite thirteen.
They’re pictured in lace and leather, bikinis and crop tops. They’re dressed suggestively as nurses, superheroes, ballerinas and French maids. Some wear bunny ears or devil horns; others, pigtails and oversized glasses. They’re Black, white and Asian, blondes, redheads and brunettes. They were all made with AI, and they’ve become magnets for the attention of a troubling audience on some of the biggest social media apps in the world—older men.
“AI makes great works of art: I would like to have a pretty little virgin like that in my hands to make it mine,” one TikTok user commented on a recent post of young blonde girls in maid outfits, with bows around their necks and flowers in their hair.
“If this is AI-generated, does it make me bad to say she’s hot as everything put together?” another TikToker wrote on a slideshow of fully clothed little girls in Spider-Man costumes. “She’s not real, but I’ve got a great imagination.”
“It shouldn’t be wrong either way,” replied another, whose username contained “p3do,” which can be algospeak, or coded language, for “pedo.”
“You can’t violate her because it’s an AI image,” said one more.
Similar remarks flooded photos of AI kids on Instagram. “I would love to take her innocence even if she’s a fake image,” one person wrote on a post of a small, pale child dressed as a bride. On another, of a young girl in short-shorts, the same user commented on “her cute pair of small size [breasts],” depicted as two apple emojis, “and her perfect innocent slice of cherry pie down below.” And on a picture of a pre-teen in spaghetti straps, someone named “Gary” said: “Nice little buds. When she gets older they’ll be good sized melons.”
“Looks tasty.” “Do you do home delivery?” “The perfect age to be taken advantage of.” “I want that sweet baby.” “Can you do a test where she jumps out of my phone into my bed?” said others on TikTok and Instagram. Forbes found hundreds of posts and comments like these on images of AI-generated kids on the platforms from 2024 alone. Many were tagged to musical hits—like Beyoncé’s “Texas Hold ’Em,” Taylor Swift’s “Shake It Off” and Tracy Chapman’s “Fast Car”—to help them reach more eyeballs.
Child predators have prowled nearly every major social media app—where they can hide behind screens and anonymous usernames—but TikTok and Instagram’s popularity with teens and minors has made them both top destinations. And though platforms’ struggle to crack down on child sexual abuse material (or CSAM) predates today’s AI boom, AI text-to-image generators are making it even easier for predators to find or create exactly what they’re looking for.
The tools have driven a surge of AI-generated CSAM, which is illegal even if it’s fake. But the images uncovered in the reporting of this story fall into a gray area. They’re not explicit, but they are sexualized. They feature minors, but not real ones. They appear to be legal, yet the comments made on them suggest dangerous intent. Child safety and forensics experts have described them as portals to far darker, and potentially criminal, activity. That raises questions across tech and law enforcement about how a scourge of suggestive, fake images of kids who don’t exist should be dealt with—or whether, given that the images are legal and not explicit, they should be addressed at all.
Tech companies are required, under federal law, to report suspected CSAM and child sexual exploitation on their platforms to the National Center for Missing and Exploited Children, a nonprofit that funnels information about that illegal activity to law enforcement. But they are not obligated to flag or remove images like those described in this story. Still, NCMEC told Forbes it believes social media companies should take them down—even if they’re legal.
“These are all being trained off of images of real children, whether depicted in full likeness or not, so NCMEC doesn't really see that side of the argument that there's a world where this is okay,” said Fallon McNulty, director of the organization’s CyberTipline, its reporting hub for tech companies. A recent study by the Stanford Internet Observatory found that one of the most popular text-to-image generative AI tools on the market today, Stable Diffusion 1.5, had been trained on CSAM, of real kids, scraped from across the web.
“Especially given some of the commentaries that the images are attracting, it doesn't sound like the audience that is reviewing, ingesting, consuming those images is innocent,” McNulty added. “I would hope to see that they're removing that content and not creating a space on their platforms for individuals to be sexualizing children.”
TikTok and Instagram permanently removed the accounts, videos and comments referenced in this story after Forbes asked about them; both companies said they violated platform rules.
“TikTok has strict policies against AI-generated content of minors to protect young people and keep TikTok inhospitable to behavior that seeks to harm them,” said spokesperson Jamie Favazza. The company’s synthetic media policy, instituted over a year ago and updated on Friday, prohibits such content if it contains the likeness of anyone under 18, and TikTok takes down posts that break its rules, regardless of whether they were altered with AI.
Sophie Vogel, a spokesperson for Instagram’s parent company Meta, said the company does not allow, and removes, real and AI-generated material that sexualizes or exploits children. Even in cases where the content appears benign, Meta still removes accounts, profiles or Pages dedicated to sharing images of children or commenting on their appearance, Vogel said. Both TikTok and Meta report AI-generated CSAM they find on their platforms to NCMEC.

A ‘Gateway’ To Illegal Content

One popular creator of AI-generated kids had 80,000 followers across TikTok and Instagram, a number that climbed by the thousands as Forbes was reporting this story. The account’s bio, written in Mandarin, said “Woman With Chopsticks.” Forbes was unable to determine who was running it, but its most viral posts were viewed nearly half a million times, liked more than 10,000 times, saved or shared thousands of times, and drew hundreds of comments.
Many of its followers—based on their profile photos, names or handles, bios and comments—appeared to be older men.
Digital forensics expert Heather Mahalik Barnhart said that in her past work on these “child erotica” cases, the followers often offered clear clues that could point investigators toward potential predators. “Child erotica is the gateway,” she told Forbes, and “when you look at who’s following it, it’s not normal.”
These types of accounts also do more to expose potential offenders because they’re emboldened out in the open. “People feel more secure because it’s not fully exposed children, therefore not CSAM,” said Mahalik, who leads software company Cellebrite’s work helping NCMEC and law enforcement use its tech in child exploitation investigations. She noted it’s imperative that law enforcement more closely examine who follows these accounts and look for a pattern of behavior.
“Even though I am many years older than you, as soon as I saw your eyes I fell in love with you,” said one commenter on the TikTok account, who was following other handles featuring even more suggestive images of children. (Another frequent commenter, who made remarks like “open wide,” was running an account with posts of real girls doing splits and stretching in leotards.)
Some commenters on the TikTok and Instagram posts from “Woman With Chopsticks” asked what AI model was used to make the AI children, suggesting they could be interested in producing more. (“This stuff is amazing, I wish could find an app that could create this,” one man wrote a week ago on an Instagram post of a scantily clad young blonde in a bralette.) In January, the TikTok account shared a three-minute instructional video showing followers how to generate and perfect their own photos of young girls, down to the girls’ teeth.
Other commenters didn’t seem to know the images were fake; though the creator labeled some of the posts “AI-generated,” as TikTok requires, it can be difficult for the naked eye to tell they’re not real, which also makes it harder for law enforcement to pinpoint actual victims. (Meta is building tools that will be able to identify images made with generators from OpenAI, Midjourney, Microsoft, Google, Adobe and Shutterstock, and will then begin labeling AI-generated content posted to Facebook, Instagram and Threads, according to Vogel, the Meta spokesperson.)
Lloyd Richardson, director of IT at the Canadian Centre for Child Protection, said that regardless of whether these borderline images are AI or real, they’re a “gateway” to more severe or illegal content, often on other platforms, which then poses “a clear safety risk to children.”
“The underlying issue is that this subset of images leads to networking opportunities for offenders,” Richardson told Forbes, noting that they’ll often move their conversations to private direct messages. “These images can act as signposts for promoting links toward CSAM in other channels.”
A January 13 slideshow on TikTok of little girls in silk pajamas showed this type of back-and-forth in action. “This works for me,” one person wrote in the comments, to which another replied: “I just messaged you.”
“This is why companies cannot simply moderate content in isolation by looking at images alone,” Richardson added. “They have to be taken into the broader context of how they are being shared, viewed and followed. This is of course more challenging as it can require contextual assessment which can require human moderation staff.” (Favazza, the TikTok spokesperson, said the company uses automation to help flag possible evidence of predatory behavior.)

Got a tip about TikTok, Instagram, or children’s safety issues on social media? Reach out securely to Alexandra S. Levine on Signal/WhatsApp at (310) 526–1242 or email at alevine@forbes.com.

TikTok’s powerful algorithm also makes it easier for those with a sexual interest in children to find even more of these kinds of images. As Forbes was reporting this story, TikTok began recommending additional prompts—like “ai generated boys” and “ai fantasy girl.” And TikTok’s “For You” feed, the landing page that serves up viral videos the app thinks each user will like, helps these images reach even more of TikTok’s 1 billion users.
McNulty, from NCMEC, said one of the greatest risks of AI kids having these viral moments on social media, even if the images are noncriminal, is that people can become desensitized to how dangerous they can be.
“As a society,” she said, “are we just gonna get used to this type of content and say that's okay?”
