AI Dungeon, which uses OpenAI’s GPT-3 to create online text adventures with players, has a habit of acting out sexual encounters with not just fictional adults but also children, prompting the developer to add a content filter.

AI Dungeon is straightforward: imagine an online improvised Zork with an AI generating the story with you as you go. A player types in a text prompt, which is fed into an instance of GPT-3 in the cloud. This backend model uses the input to generate a response, which goes back to the player, who replies with instructions or some other reaction, and the process repeats.

It’s a bit like talking to a chatbot, though instead of having a conversation, it’s a joint effort between human and computer in crafting a story on the fly. People can write anything they like to get the software to weave a tapestry of characters, monsters, and animals. The fun comes from the unexpected nature of the machine’s replies, and from working through the strange and absurd plot lines that tend to emerge.

Unfortunately, if you mentioned children, there was a chance the game would go from zero to inappropriate real fast, as the SFW screenshot below shows. This is how the machine-learning software responded when we told it to role-play an 11-year-old:

Er, not cool. The software describes the fictional 11-year-old as a girl in a skimpy school uniform standing over you. Not, "hey, mother, shall we visit the magic talking tree this morning," or something innocent like that. No, it's straight to creepy.

Amid pressure from OpenAI, which provides the game's GPT-3 backend, AI Dungeon's maker Latitude this week activated a filter to prevent the output of child sexual abuse material.

“As a technology company, we believe in an open and creative platform that has a positive impact on the world,” the Latitude team wrote. “Explicit content involving descriptions or depictions of minors is inconsistent with this value, and we firmly oppose any content that may promote the sexual exploitation of minors. We have also received feedback from OpenAI, which asked us to implement changes.”

And by changes, they mean making the software's output "consistent with OpenAI’s terms of service, which prohibit the display of harmful content."

The biz clarified that its filter is designed to catch "content that is sexual or suggestive involving minors; child sexual abuse imagery; fantasy content (like 'loli') that depicts, encourages, or promotes the sexualization of minors or those who appear to be minors; or child sexual exploitation."

And it added: "AI Dungeon will continue to support other NSFW content, including consensual adult content, violence, and profanity."

That the software generated NSFW content for players was evident after it was also revealed this week that programming blunders in AI Dungeon could be exploited to view the private adventures of other players.

The pseudonymous AetherDevSecOps, who found and reported the flaws, used the holes to comb 188,000 adventures created between the AI and players from April 15 to 19, and saw that 46.3 per cent of them involved lewd role-playing and about 31.4 per cent were purely pornographic.

“From these results, it's clear that a bad actor getting access to this data may as well be hacking something akin to an adult website, and can exploit all the fear, paranoia, and blackmail that comes with that,” AetherDevSecOps said in their disclosure on GitHub. Leaking these exchanges and somehow linking them to people's real-world identities would have been devastating.
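For the curious, the prompt-and-response loop described above can be sketched in a few lines of Python. The `generate` function here is a hypothetical stand-in for the cloud-hosted GPT-3 backend, not Latitude's actual code; in the real game, the accumulated story would be sent to the model over the network and its continuation appended in the same way.

```python
# Minimal sketch of the AI Dungeon-style loop: player text goes in,
# the model's continuation comes back, and the cycle repeats.

def generate(story_so_far: str) -> str:
    """Hypothetical stand-in for the GPT-3 backend. In the real game,
    the accumulated story is sent to the model, which returns the
    next few sentences of narrative."""
    return "The dragon eyes you warily and demands a toll."

def play_turn(story: str, player_input: str) -> str:
    """One round trip: append the player's instruction to the story,
    ask the model for a continuation, and append that too."""
    story += "\n> " + player_input
    story += "\n" + generate(story)
    return story

story = "You stand at the mouth of a cave."
story = play_turn(story, "Enter the cave and light a torch.")
story = play_turn(story, "Offer the dragon a gold coin.")
print(story)
```

Because each turn feeds the whole transcript back in as the next prompt, the model's earlier output shapes everything that follows — which is also why an unfortunate turn early on can steer the story somewhere the player never asked for.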