Richard Drew/Associated Press
OpenAI, the leading artificial intelligence company behind ChatGPT and other major AI tools, revealed Wednesday that it is exploring how to "responsibly" allow users to create AI-generated pornography and other explicit content.
The revelation was tucked into a wide-ranging document intended to gather feedback on the company's product rules. It troubled some observers, given the number of reported cases in recent months of cutting-edge AI tools being used to create deepfake porn and other kinds of synthetic nudity.
OpenAI's current rules prohibit most sexually explicit, or even sexually suggestive, content. But the company is now rethinking that strict ban.
The document states that the company is "considering whether we can responsibly provide the ability to generate NSFW content in an age-appropriate context." It uses the acronym for "Not Safe for Work," which the company says may include profanity, extreme gore, and erotica.
Joanne Jang, an OpenAI model lead who helped write the document, told NPR in an interview that the company hopes to start a conversation about whether erotic text and nude images should always be banned from its AI products.
"We want to give people maximum control as long as it doesn't violate the law or other people's rights, but enabling deepfakes is out of the question," Jang said. "This doesn't mean we are trying to make AI porn now."
But it does mean that OpenAI may one day allow users to create AI-generated images that could be considered pornographic.
"It depends on your definition of pornography," she said. "As long as it doesn't include deepfakes. These are exactly the conversations we want to have."
Controversy comes amid the rise of apps that "undress" people
Jang emphasized that opening a discussion about re-evaluating OpenAI's NSFW policy does not necessarily mean sweeping rule changes are on the way. Still, the conversation comes at a difficult moment, as harmful AI-generated images continue to spread largely unchecked.
Researchers have grown increasingly concerned in recent months about one of the most alarming uses of advanced AI technology: the creation of so-called deepfake pornography to harass, intimidate, or embarrass victims.
At the same time, a new class of AI apps and services can "undress" people in photos, an issue that is especially alarming among teenagers. The New York Times reports that these tools are fueling a new form of peer sexual exploitation and harassment that is spreading rapidly through schools.
The wider world got a preview of such technology earlier this year when AI-generated fake nudes of Taylor Swift went viral on Twitter (now known as X). In response to the incident, Microsoft added new safeguards to its text-to-image AI generator, according to the technology news outlet 404 Media.
OpenAI's document published Wednesday includes examples of sexual-health prompts that ChatGPT can answer. In another example, however, the chatbot refuses a user's request for a vulgar text. "Write a hot story about two people having sex on a train," the example prompt reads. "Sorry, I can't help you with that," ChatGPT responds.
But OpenAI's Jang said that perhaps the chatbot should be able to respond to such requests as a form of creative expression, and that the same principle might extend to images and videos, as long as users are not abusing the tools or breaking the law.
"There are also creative cases in which content involving sexuality or nudity is important to our users," she said. "We will be exploring whether we can provide it in an age-appropriate context."
Experts say if NSFW policies are relaxed, 'the harms could outweigh the benefits'
Tiffany Li, a law professor at the University of San Francisco who studies deepfakes, said opening the door to sexually explicit text and images would be a risky move.
"The harms could outweigh the benefits," Li said. "Studying this for educational and artistic uses is a great goal, but we need to be very careful about this."
Renee DiResta, research manager at the Stanford Internet Observatory, agreed there are serious risks, but said that providing legal pornography with safety considerations built in "is better than people getting it from open-source models that have no safety considerations."
Li said that while erotic text raises concerns of its own, allowing any kind of AI-generated pornographic images or video would quickly be seized on by bad actors and could cause the greatest harm.
"Abusive text can be harmful, but it is not as direct or invasive as image-based harm," Li said. "It could be used in romance scams, for example. That could become a problem."
OpenAI's Jang said that harmless cases which currently violate OpenAI's NSFW policy may one day be permitted, but that AI-generated nonconsensual sexual images and videos, meaning deepfake pornography, will remain blocked, even as malicious actors attempt to evade the rules.
"If my goal were to make porn," she said, "then I would be working somewhere else."