Following a series of highly publicized scandals related to deepfakes and child sexual abuse material (CSAM) that have plagued the artificial intelligence industry, top AI companies have pledged to fight the spread of AI-generated CSAM.
Thorn, a nonprofit that develops technology to combat child sexual abuse, announced Tuesday that Meta, Google, Microsoft, CivitAI, Stability AI, Amazon, OpenAI and several other companies have signed on to new standards the group created to address the issue. At least five of those companies have previously faced reports that their products and services were being used to facilitate the creation and distribution of sexually explicit deepfakes featuring children.
AI-generated CSAM and deepfakes have become a hot topic in Congress and beyond, as reports detail how teenage girls have been victimized at school by sexually explicit AI-generated images of them.
NBC News has previously reported that sexually explicit deepfakes using real children's faces ranked high in search results on Microsoft's Bing for terms such as "fake nudes," as well as in searches for celebrities' names paired with the word "deepfake." NBC News also identified an advertising campaign on Meta's platform in March 2024 for a deepfake app offering to "undress" a photo of a 16-year-old actress.
The new "Safety by Design" principles the companies have signed on to and pledged to incorporate into their technology and products include measures that many companies are already struggling with.
One of the principles is developing technology that allows companies to detect whether an image was generated by AI. Many early versions of this technology have come in the form of watermarks, which in most cases can be easily removed.
Another principle is ensuring that CSAM is excluded from the datasets used to train AI models.
In December 2023, researchers at Stanford University found over 1,000 images of child sexual abuse in a popular open-source image dataset used to train Stability AI's Stable Diffusion 1.5, a version of one of the most popular AI image generators. The dataset was not created or maintained by Stability AI and has since been taken down.
In a statement to NBC News, Stability AI said its models were trained on a "filtered subset" of the dataset in which the child sexual abuse images were found.
“Furthermore, we have subsequently fine-tuned these models to reduce residual behavior,” the statement said.
Thorn's new principles also state that companies should release models only after they have been evaluated for child safety, should host models responsibly, and should ensure that their models are not used for abuse.
It is unclear how individual companies will apply such standards, and some have already drawn significant criticism for their products and the communities that have formed around them.
For example, CivitAI operates a marketplace where anyone can post a "bounty" requesting AI-generated content, including deepfakes of real or fictional people.
At the time of publication, the bounty marketplace listed numerous requests for deepfakes of famous women, some asking for sexually explicit results. CivitAI says it prohibits "content that depicts or is intended to depict actual individuals or minors (under 18 years of age) in an adult context." Still, CivitAI pages displaying AI models, AI-generated images and AI-generated videos included sexually suggestive depictions of what appeared to be young women.
In a release about the new "Safety by Design" principles, Thorn also nods to the systemic stress AI will place on already strained law enforcement departments. Only between 5% and 8% of reports to the National Center for Missing & Exploited Children about child sexual abuse images result in an arrest, according to a report released Monday by the Stanford Internet Observatory, and AI opens the door to a flood of new AI-generated child sexual abuse content.
Thorn develops technology used by tech companies and law enforcement agencies to detect child exploitation and sex trafficking. Technology companies have praised Thorn's tools and initiatives, and many have partnered with the group to implement its technology on their platforms.
But Thorn has come under scrutiny for its work with law enforcement. One of the organization's main products collects online sex solicitations and provides them to law enforcement, a practice, reported on by Forbes, that anti-trafficking experts and sex worker advocates have criticized as surveillance.