Sex-Fantasy Chatbots Are Leaking a Constant Stream of Explicit Messages
By Matt Burgess | April 11, 2025
Several AI chatbots designed for fantasy and sexual role-playing conversations are leaking user prompts to the web in almost real time, new research seen by WIRED shows. Some of the leaked data shows people creating conversations detailing child sexual abuse, according to the research.
Conversations with generative AI chatbots are near instantaneous—you type a prompt and the AI responds. If the systems are configured improperly, however, this can lead to chats being exposed. In March, researchers at the security firm UpGuard discovered around 400 exposed AI systems while scanning the web for misconfigurations. Of these, 117 IP addresses were leaking prompts. The vast majority appeared to be test setups, while others contained generic prompts relating to educational quizzes or nonsensitive information, says Greg Pollock, director of research and insights at UpGuard. “There were a handful that stood out as very different from the others,” Pollock says.
Three of these were running role-playing scenarios in which people can talk to a variety of predefined AI “characters”—for instance, one personality called Neva is described as a 21-year-old woman who lives in a college dorm room with three other women and is “shy and often looks sad.” Two of the role-playing setups were overtly sexual. “It’s basically all being used for some sort of sexually explicit role play,” Pollock says of the exposed prompts. “Some of the scenarios involve sex with children.”
Over a period of 24 hours, UpGuard collected prompts exposed by the AI systems to analyze the data and try to pin down the source of the leak. Pollock says the company collected new data every minute, amassing around 1,000 leaked prompts, including prompts in English, Russian, French, German, and Spanish.
It was not possible to identify which websites or services are leaking the data, Pollock says, adding that it likely comes from small deployments of AI models, possibly run by individuals rather than companies. No usernames or personal information of the people sending prompts were included in the data, Pollock says.
Across the 952 messages gathered by UpGuard—likely just a glimpse of how the models are being used—there were 108 narratives or role-play scenarios, UpGuard’s research says. Five of these scenarios involved children, Pollock adds, including children as young as 7.
“LLMs are being used to mass-produce and then lower the barrier to entry to interacting with fantasies of child sexual abuse,” Pollock says. “There's clearly absolutely no regulation happening for this, and it seems to be a huge mismatch between the realities of how this technology is being used very actively and what the regulation would be targeted at.”
WIRED reported last week that a South Korea–based image generator was being used to create AI-generated child sexual abuse material and had exposed thousands of images in an open database. The company behind the website shut the generator down after being approached by WIRED. Child-protection groups around the world say AI-generated child sexual abuse material, which is illegal in many countries, is growing quickly and making it harder for them to do their jobs. A UK anti-child-abuse charity has also called for new laws against generative AI chatbots that “simulate the offence of sexual communication with a child.”
All of the 400 exposed AI systems found by UpGuard have one thing in common: They use the open source AI framework called llama.cpp. This software allows people to relatively easily deploy open source AI models on their own systems or servers. However, if it is not set up properly, it can inadvertently expose prompts that are being sent. As companies and organizations of all sizes deploy AI, properly configuring the systems and infrastructure being used is crucial to prevent leaks.
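UpGuard has not published its scanning methodology in detail, but as a rough illustration of what “not set up properly” can mean here, the sketch below shows one way a scanner might flag a llama.cpp server that is reachable from the public internet rather than bound to localhost or kept behind authentication. It is an assumption-laden example: it presumes the bundled llama-server is listening on its common default port, 8080, and answers a /health probe with JSON, which recent versions do, though exact ports and endpoints vary by version and configuration. Probing should only ever be done against systems you own or are authorized to test.

```python
# Minimal sketch (assumptions noted above): flag an HTTP service that answers
# like a llama.cpp llama-server instance and is therefore reachable from
# outside the machine it runs on. Not UpGuard's actual methodology.

import requests


def looks_like_exposed_llama_server(host: str, port: int = 8080, timeout: float = 3.0) -> bool:
    """Return True if host:port responds the way a llama.cpp server typically does."""
    try:
        resp = requests.get(f"http://{host}:{port}/health", timeout=timeout)
    except requests.RequestException:
        # Closed port, firewall, timeout, or not an HTTP service at all.
        return False
    if resp.status_code != 200:
        return False
    try:
        # Recent llama-server builds answer /health with a small JSON status object.
        return "status" in resp.json()
    except ValueError:
        return False


if __name__ == "__main__":
    # Only probe hosts you own or are authorized to test.
    print(looks_like_exposed_llama_server("127.0.0.1"))
```

The mitigation is largely the inverse of what the scan looks for: keep the server bound to localhost (llama-server’s --host option), put it behind a firewall or an authenticated reverse proxy, and avoid forwarding its port to the public internet.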
Rapid improvements to generative AI over the past three years have led to an explosion in AI companions and systems that appear more “human.” For instance, Meta has experimented with AI characters that people can chat with on WhatsApp, Instagram, and Messenger. Generally, companion websites and apps allow people to have free-flowing conversations with AI characters, which may have customizable personalities or portray public figures such as celebrities.
People have found friendship and support in their conversations with AI—and not all companion services encourage romantic or sexual scenarios. Perhaps unsurprisingly, though, people have fallen in love with their AI characters, and dozens of AI girlfriend and boyfriend services have popped up in recent years.
Claire Boine, a postdoctoral research fellow at the Washington University School of Law and affiliate of the Cordell Institute, says millions of people, including adults and adolescents, are using general AI companion apps. “We do know that many people develop some emotional bond with the chatbots,” says Boine, who has published research on the subject. “People being emotionally bonded with their AI companions, for instance, makes them more likely to disclose personal or intimate information.”
However, Boine says, there is often a power imbalance in becoming emotionally attached to an AI created by a corporate entity. “Sometimes people engage with those chats in the first place to develop that type of relationship,” Boine says. “But then I feel like once they've developed it, they can't really opt out that easily.”
As the AI companion industry has grown, some services have operated with little content moderation or other controls. Character AI, which is backed by Google, is being sued after a teenager from Florida died by suicide after allegedly becoming obsessed with one of its chatbots. (Character AI has increased its safety tools over time.) Separately, users of the generative AI companion Replika were left reeling when the company made changes to its chatbots’ personalities.
Aside from individual companions, there are also role-playing and fantasy companion services—each with thousands of personas people can speak with—that place the user as a character in a scenario. Some of these can be highly sexualized and provide NSFW chats. They can use anime characters, some of which appear young, with some sites claiming they allow “uncensored” conversations.
“We stress test these things and continue to be very surprised by what these platforms are allowed to say and do with seemingly no regulation or limitation,” says Adam Dodge, the founder of Endtab (Ending Technology-Enabled Abuse). “This is not even remotely on people’s radar yet.” Dodge says these technologies are opening up a new era of online pornography, which can in turn introduce new societal problems as the technology continues to mature and improve. “Passive users are now active participants with unprecedented control over the digital bodies and likenesses of women and girls,” he says of some sites.
While UpGuard’s Pollock could not directly connect the leaked data from the role-playing chats to a single website, he did see signs that indicated character names or scenarios could have been uploaded to multiple companion websites that allow user input. Data seen by WIRED shows that the scenarios and characters in the leaked prompts are hundreds of words long, detailed, and complex.
“This is a never-ending, text-based role-play conversation between Josh and the described characters,” one of the system prompts says. It adds that all the characters are over 18 and that, in addition to “Josh,” there are two sisters who live next door to the character. The characters’ personalities, bodies, and sexual preferences are described in the prompt. The characters should “react naturally based on their personality, relationships, and the scene,” provide “engaging responses,” and “maintain a slow-burn approach during intimate moments,” the prompt says.
“When you go to those sites, there are hundreds of thousands of these characters, most of which involve pretty intense sexual situations,” Pollock says, adding that the text-based communication mimics online messaging and group chats. “You can write whatever sexual scenarios you want, but this is truly a new thing where you have the appearance of interacting with them in almost exactly the same way you interact with a lot of people.” In other words, they’re designed to be engaging and to encourage more conversation.
That can lead to situations where people may overshare and create risks. “If people are disclosing things they’ve never told anyone to these platforms and it leaks, that is the Everest of privacy violations,” Dodge says. “That’s an order of magnitude we've never seen before and would make really good leverage to sextort someone.”