Artists and computer scientists are experimenting with new ways to stop artificial intelligence from stealing copyrighted images. One involves “poisoning” AI models so they see cats.
A tool called Nightshade, released by researchers at the University of Chicago in January, alters images in small ways that are nearly invisible to the human eye but look dramatically different to the AI models that ingest them. Artists like Karla Ortiz are now “nightshading” their work to protect it from being scraped and copied by text-to-image generators such as DeviantArt's DreamUp and Stability AI's Stable Diffusion.
“I realized that a lot of it was basically my entire body of work, my colleagues' entire body of work, the entire body of work of just about every artist I know,” said Ortiz, a concept artist and illustrator whose portfolio includes visual designs for film, television and video game projects such as Star Wars, Black Panther and Final Fantasy XVI.
“And all of this was done without anyone's consent. There was no credit, no compensation, nothing,” she said.
Nightshade takes advantage of the fact that AI models don't “see” images the way humans do, according to lead researcher Shawn Shan.
“Machines only see huge arrays of numbers. These are pixel values from 0 to 255, and that's all the model sees,” he said. So Nightshade changes thousands of pixels, a drop in the bucket for a standard image containing millions of pixels, but enough to trick the model into seeing “something completely different,” said Shan, a fourth-year doctoral student at the University of Chicago. In a paper scheduled to be published in May, the researchers describe how Nightshade automatically selects a concept designed to confuse an AI program responding to a given prompt. For example, a photo of a dog can be embedded with a distorted array of pixels that the model reads as “cat.”
Feed 1,000 of these subtly “poisoned” dog photos into a text-to-image AI tool, then request an image of a dog, and the model will produce something that clearly isn't a dog.
The distortions Nightshade targets aren't necessarily feline, however. The program decides on a case-by-case basis which alternative concept the AI should “see.” In some cases, it takes as few as 30 nightshaded photos to poison a model this way, Shan said.
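Shan's description can be made concrete with a short, hypothetical Python sketch. This is not Nightshade's actual algorithm, which carefully optimizes its perturbations against a target model rather than using random noise; it only illustrates the raw material he describes: an image is an array of pixel values from 0 to 255, and a poisoned copy differs by amounts too small for a person to notice.

    # Illustrative sketch only; Nightshade's real perturbations are
    # optimized against a target model, not drawn at random.
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for a photo: height x width x RGB channels, values 0-255.
    image = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)

    # Bound each pixel's change so the edit stays nearly invisible.
    epsilon = 4
    perturbation = rng.integers(-epsilon, epsilon + 1, size=image.shape)

    # Apply the perturbation, keeping values in the valid 0-255 range.
    poisoned = np.clip(image.astype(int) + perturbation, 0, 255).astype(np.uint8)

    # A human viewer sees essentially the same picture, but a model
    # trained on many carefully perturbed images can be steered to pair
    # "dog" captions with a different concept, such as "cat".
    print("max per-pixel change:",
          np.abs(poisoned.astype(int) - image.astype(int)).max())

The gap being exploited is that the model learns only from those numbers, so small, deliberately chosen changes can shift what it associates with a caption even though a person sees the same photo.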
Ben Zhao, the computer science professor who leads the University of Chicago lab that developed Nightshade, doesn't expect the tool to be adopted widely enough to threaten to destroy AI image generators. Rather, he describes it as a “spear” that could break narrow pieces of a model, raising the cost of scraping an artist's work enough to push companies to pay for it instead.
“If you're a creator of any type, for example if you take photos and don't necessarily want them fed into a training model, whether it's a portrait of your child or of yourself, then why not consider Nightshade?” Zhao said.
The tool is free to use, and Zhao said he intends to keep it that way.
Models like Stable Diffusion already offer an “opt-out” that lets artists ask for their content to be excluded from training datasets. But many copyright holders complain that the proliferation of AI tools is outpacing efforts to protect copyrighted works.
The debate over intellectual property protection adds to broader ethical concerns about AI, including the spread of deepfakes and questions about the limits of watermarking in curbing such abuse. There is growing recognition in the AI industry that additional safeguards are needed, but the technology's rapid development, including new text-to-video tools like OpenAI's Sora, has some experts concerned.
“There will be technical solutions to counter that attack, so I don't know if it will have much effect,” Sonja Schmer-Galunder, a professor of AI and ethics at the University of Florida, said of Nightshade.
The Nightshade project and others like it represent a welcome “rebellion” against AI models in the absence of meaningful regulation, but AI developers are likely to patch their programs to defend against such countermeasures, Schmer-Galunder said.
The University of Chicago researchers acknowledge that there are potential defenses against Nightshade's image poisoning: AI platforms could be updated to filter out data or images suspected of having undergone “abnormal changes.”
Zhao believes it's unfair to put the burden of keeping images out of AI models on individuals in the first place.
“How many companies should an individual have to go to to tell them not to violate their rights?” he said. “You don't say, ‘Well, every time you cross the street, you should sign a piece of paper for all future drivers that says: Don't hit me!’”
Meanwhile, Ortiz said she views Nightshade as a useful “attack” that gives her work some protection while she seeks stronger recourse in court.
“Nightshade is just saying, ‘Hey, if you take this without my consent, there may be consequences,’” said Ortiz, who is part of a class-action copyright infringement lawsuit filed in January 2023 against Stability AI, Midjourney and DeviantArt.
The court dismissed some of the plaintiffs' claims late last year but left open the possibility of an amended complaint, which the plaintiffs filed in November, adding Runway AI as a defendant. In a motion to dismiss earlier this year, Stability AI argued that “mere imitation of an aesthetic style does not constitute copyright infringement of any work.”
Stability AI did not comment. Midjourney, DeviantArt, and Runway AI did not respond to requests for comment.