Experts say the problem is only getting worse. The quality of some fake images is now so good that they are nearly impossible to distinguish from the real thing. In one high-profile case, a finance worker at a multinational firm in Hong Kong transferred approximately $25.6 million to fraudsters who had used AI to pose as the worker's boss on a video call. And the tools to create these fakes are free and widely available.
A growing group of researchers, academics and startup founders is working on ways to track and label AI content. Using a variety of methods, and partnering with news organizations, big tech companies and even camera manufacturers, they hope to keep AI images from further eroding the public's ability to tell what is true and what is false.
“A year ago, we were still looking at AI images, and they were goofy,” said Rijul Gupta, founder and CEO of deepfake detection startup DeepMedia AI. “Now they're perfect.”
Here are the main methods being developed to stave off an AI-image apocalypse.
Digital watermarks are not new. They have been used for years by record labels and movie studios who want to protect their content from piracy. But they have become one of the most popular ideas to help deal with the wave of images generated by AI.
When President Biden signed a landmark executive order on AI in October, he directed the government to create standards for companies to follow when watermarking images.
Some companies are already adding visible labels to images created by their AI generators. OpenAI stamps five small colored boxes in the bottom-right corner of images created with its DALL-E image generator. But such labels can easily be cropped out or edited away in Photoshop, and other popular AI image generators, such as Stable Diffusion, don't add labels at all.
As a result, the industry is increasingly focused on invisible watermarks baked into the image itself. These are imperceptible to the human eye but can be detected by software, allowing a social media platform, for example, to label AI-generated images before viewers ever see them.
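As a rough illustration of the concept only (a toy sketch in Python, not how SynthID or any production watermark actually works), a bit pattern can be hidden in the least significant bits of an image's pixels, where it is invisible to viewers but trivial for software to read back:

```python
# A toy least-significant-bit (LSB) watermark, for illustration only.
# Production systems such as SynthID use far more robust, learned encodings.
import numpy as np

# 48 watermark bits derived from a short tag (a hypothetical marker).
WATERMARK = np.unpackbits(np.frombuffer(b"AI-GEN", dtype=np.uint8))

def embed(pixels: np.ndarray) -> np.ndarray:
    """Hide the watermark bits in the lowest bit of the first pixels."""
    out = pixels.copy().ravel()
    out[: WATERMARK.size] = (out[: WATERMARK.size] & 0xFE) | WATERMARK
    return out.reshape(pixels.shape)

def detect(pixels: np.ndarray) -> bool:
    """Check whether the expected bit pattern is present."""
    bits = pixels.ravel()[: WATERMARK.size] & 1
    return bool(np.array_equal(bits, WATERMARK))

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed(image)
print(detect(marked))             # True: mark survives if pixels are untouched
print(detect(np.fliplr(marked)))  # False: a simple flip destroys it
```

The last line shows why naive schemes fail: a simple flip moves the marked pixels and wipes out the mark, exactly the kind of weakness newer systems try to engineer away.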
However, they are far from perfect. Previous versions of watermarks could be easily removed or altered by simply changing the image color or flipping the image. Google, which provides image generation tools to consumer and business customers, announced last year that it had developed a tamper-resistant watermarking technology called SynthID.
But researchers at the University of Maryland showed in a paper published in February that the watermarking approaches developed by Google and other tech giants can potentially be defeated.
“That doesn't solve the problem,” said Soheil Feizi, one of the researchers.
Developing a robust watermarking system that Big Tech firms and social media platforms agree to abide by should go a long way toward alleviating the problem of deepfakes misleading people online, said Nico Dekens, director of intelligence at ShadowDragon, a cybersecurity startup that develops tools to aid investigations based on internet images and social media posts.
“Watermarks definitely help,” Dekens said. But “it's certainly not a watertight solution, as anything that's put together digitally can be hacked, spoofed or tampered with,” he added.
In addition to watermarking AI images, the tech industry has also begun talking about labeling real images. This approach attaches data to a photo at the moment the camera captures it, creating a record of what the industry calls “provenance.”
Even before the AI boom kicked off with OpenAI's release of ChatGPT in late 2022, camera makers Nikon and Leica had begun developing ways to imprint special “metadata” on photos as they are taken. Canon and Sony have launched similar programs, and Qualcomm, which makes computer chips for smartphones, says it is working on a similar project to add metadata to images taken with phone cameras.
News organizations including the BBC, Associated Press and Thomson Reuters are working with camera companies to create systems to check authentication data before publishing photos.
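In principle, such a check can be a digital signature made at the moment of capture and verified before publication. The sketch below is a simplified assumption of how that might look (real industry efforts center on the C2PA provenance standard; the scheme, field names and key handling here are hypothetical):

```python
# Simplified sketch of provenance signing and verification.
# Real systems embed signed manifests in the image file itself;
# the fields and key handling below are illustrative assumptions.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key would live in tamper-resistant camera hardware.
camera_key = Ed25519PrivateKey.generate()
manufacturer_public_key = camera_key.public_key()

def sign_at_capture(image_bytes: bytes, metadata: dict) -> bytes:
    """The camera signs the pixels plus capture metadata at shutter time."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return camera_key.sign(payload)

def verify_before_publishing(image_bytes: bytes, metadata: dict,
                             signature: bytes) -> bool:
    """A newsroom or platform checks the record against the public key."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    try:
        manufacturer_public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

photo = b"...raw sensor data..."
meta = {"device": "ExampleCam", "time": "2024-03-01T12:00:00Z"}
sig = sign_at_capture(photo, meta)
print(verify_before_publishing(photo, meta, sig))        # True
print(verify_before_publishing(b"tampered", meta, sig))  # False
```

The weak point, as described below, is the signing key: anyone who extracts it from a camera can make fake images pass this exact check.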
Social media sites could also use the system to label real and fake images as such, much as some platforms flag content that may contain anti-vaccine disinformation or government propaganda, helping users understand what they're looking at. The sites could also prioritize verified real content in their recommendation algorithms and let users filter out AI content.
But building a system in which real images are verified and labeled on social media and news websites could have unintended consequences. Hackers could figure out how camera companies attach the metadata, add it to fake images, and use the forged metadata to get those fakes treated as authentic on social media.
“It's dangerous to believe there are real solutions against malicious attackers,” said Vivien Chappelier, R&D director at Imatag, a startup that helps businesses and news organizations watermark and label real images to keep them from being misused. But making it harder to accidentally spread fake images, and giving people more context about what they're seeing online, still helps.
“What we're trying to do is raise the bar a little bit,” Chappelier said.
Adobe, which has sold photo and video editing software for years and now offers customers AI image-generation tools, has promoted standards for AI companies, news organizations and social media platforms to follow in identifying and labeling real images and deepfakes.
Adobe General Counsel Dana Rao said AI images are here to stay, and controlling them will require a combination of different methods.
Some companies, such as Reality Defender and Deep Media, have built tools to detect deepfakes based on the underlying technology used in AI image generators.
By showing an AI model tens of millions of images labeled as fake or real, it learns to distinguish between the two, building an internal “understanding” of the elements that mark an image as fake. Images are then run through this model, and when those elements are detected, the image is judged to have been generated by AI.
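In schematic form, that training loop looks something like the following toy sketch (the tiny network, input size and dummy data are illustrative assumptions, not Reality Defender's or Deep Media's actual systems):

```python
# Toy binary classifier for "real vs. AI-generated", for illustration only.
# Production detectors train on tens of millions of labeled images.
import torch
import torch.nn as nn

model = nn.Sequential(              # a deliberately small CNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),     # one logit: how "fake" the image looks
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (N, 3, 64, 64); labels: 1.0 = fake, 0.0 = real."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def is_probably_fake(image: torch.Tensor) -> bool:
    """Run one (3, 64, 64) image through the trained model."""
    with torch.no_grad():
        return torch.sigmoid(model(image.unsqueeze(0))).item() > 0.5

# One step on a dummy batch; real training loops over a huge labeled dataset.
batch = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,)).float()
print(training_step(batch, labels))
```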
These tools can also highlight which parts of an image the model judged to be fake. Humans might classify an image as AI-generated because of an unusual number of fingers, but the models often zero in on patches of light or shadow that don't look right to them.
Ben Colman, founder of Reality Defender, said there are other tells as well, such as whether a person's veins are visible in the anatomically correct places. “You're either a deepfake or a vampire,” he said.
Colman envisions a world in which scanning for deepfakes is just a normal part of a computer's cybersecurity software, the same way email services like Gmail automatically filter out obvious spam. “That's where we're going,” Colman said.
But it's not easy. Some warn that it will likely be impossible to reliably detect deepfakes as the technology behind AI image generators changes and improves.
“If the problems are difficult today, they will be even more difficult next year,” said Feizi, the University of Maryland researcher. “It will be almost impossible within five years.”
Even if all these techniques were successful and Big Tech companies were fully on board, people would still need to be critical of what they see online.
“Assume nothing, believe nothing and question everything,” said Dekens, the open-source intelligence researcher. “When in doubt, assume it's fake.”
With elections coming up this year in the U.S. and other major democracies, the technology may not be ready to handle the amount of misinformation and AI-generated fake images posted online.
“The most important thing they can do in the upcoming election is tell people not to believe everything they see and hear,” said Rao, Adobe's general counsel.