Some Americans are so worried that AI deepfakes could spark a wildfire of misinformation ahead of this fall's U.S. presidential election that they are refraining from using social media altogether.
Several studies show that a majority of Americans are concerned about how generative artificial intelligence can be used to spread misinformation, and many (if not most) U.S. residents are especially worried about AI-driven falsehoods in connection with the 2024 presidential election. An Axios survey, for example, found that 53% of Americans believe misinformation spread by AI will influence which candidate wins.
Adding to that mountain of research, Adobe commissioned an opt-in survey of more than 2,000 U.S. residents ages 18 and older, and found an even larger share of Americans, 80%, are concerned that deepfakes will influence election results. The survey, shared with Quartz, also found that 40% of respondents have reduced or stopped using certain social media platforms because of the amount of misinformation circulating on them.
Their concerns are well-founded. AI-generated content is already being used by politicians and hackers alike. Florida Governor Ron DeSantis' now-defunct presidential campaign posted a deepfake of Donald Trump kissing Anthony Fauci, former director of the National Institute of Allergy and Infectious Diseases, last summer. Also in the summer of 2023, Polish Prime Minister Donald Tusk posted a partially AI-generated video of his opponent during the election campaign. Meanwhile, the Chinese-backed online group Spamouflage spread videos of an AI-generated newscaster reporting fake news ahead of Taiwan's presidential election. Microsoft researchers have also said that China is likely to use AI to interfere in the U.S. presidential election.
“At Adobe, we recognize the transformative potential of generative AI to enhance creativity and productivity. But in the age of deepfakes, AI-powered misinformation poses a significant risk to election integrity,” said Andy Parsons, senior director of the Content Authenticity Initiative at Adobe. “Our team's view is that AI will have some impact on this election. It's not the end of civilization and democracy, but without the right checks in place, there will likely be some impact from AI, and the situation will only get worse over time.”
“Once people are fooled by deepfakes, they may no longer believe what they see online,” he said. “And when people begin to doubt everything and can no longer distinguish fact from fiction, democracy itself is threatened.”
Adobe's answer: Content Credentials
For the past several years, Adobe has been leading the industry's efforts to combat AI-generated misinformation.
Adobe launched the Content Authenticity Initiative (CAI) in partnership with the New York Times and Twitter in 2019 to develop open-source code called Content Credentials that companies can integrate into their products for free. The technology attaches metadata to digital content and is designed to be interoperable across systems. As a result, photos and videos carry an icon that users can click to see when the content was created, how much it was edited, and whether AI was used. The icon stays with photos and videos when, for example, they are exported from Adobe Premiere and posted to platforms like YouTube and Instagram.
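The core idea behind this kind of provenance metadata is that a record of who made the content, what edits were applied, and whether AI was involved gets cryptographically bound to the file itself, so later tampering is detectable. The sketch below is illustrative only; it is not the real Content Credentials or C2PA format, and the field names are invented for the example:

```python
import hashlib
import json

def make_manifest(content: bytes, creator: str, ai_used: bool, edits: list) -> dict:
    """Build a toy provenance manifest (illustrative; not the real C2PA schema)."""
    return {
        "creator": creator,
        "ai_used": ai_used,
        "edits": edits,
        # The hash binds this manifest to these exact bytes.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content still matches the hash recorded at creation."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

photo = b"\x89PNG...raw image bytes..."
manifest = make_manifest(
    photo, creator="Example News Desk", ai_used=False, edits=["crop", "color-correct"]
)
print(json.dumps({k: manifest[k] for k in ("creator", "ai_used", "edits")}))

print(verify_manifest(photo, manifest))               # untouched content: True
print(verify_manifest(photo + b"altered", manifest))  # modified content: False
```

In the real standard the manifest is also digitally signed, so a forger cannot simply regenerate the hash after editing; this sketch omits signing to keep the binding idea visible.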
“CAI's mission remains firm and unchanged: to give consumers, fact-checkers, [and] creators the ability to know what they're looking at,” Parsons said.
Adobe does not receive revenue from its participation in the CAI or from the group's collaborative project, the Coalition for Content Provenance and Authenticity (C2PA), which launched in 2021 with leading companies such as Microsoft, Arm, and Intel to create a digital content authentication standard. Meta announced in February that it would use Content Credentials; the same month, Google joined the C2PA alongside Adobe and its now 2,500 members to help further develop the standard. Microsoft and Adobe are also working on a Content Credentials tool for political campaigns ahead of the next election.
“I predict that little CR [Content Credentials] pin will become as recognizable as the copyright mark in the coming years. It's not about whether [digital content] is true or false; it's that there is more information [about where the content came from]. That's why we love the nutrition-label metaphor: no one will stop you from buying unhealthy food at the supermarket, but you have a fundamental right to know what's in it,” he said.
Parsons said Adobe started this whole effort in 2016, after a presenter at its annual conference in Las Vegas showed how easy it was to create a deepfake of Jordan Peele's voice.
Parsons said Adobe's efforts to further develop standards around AI-generated content will benefit Creative Cloud customers who want such tools. He also said Content Credentials, the code that identifies AI-generated content, is a stronger safeguard than the digital watermarks that have been floated as a potential solution to deepfakes.
“It is very easy to train an AI to defeat watermarks. So while we support the idea of watermarking, we believe combining watermarks with Content Credentials is a more powerful defense,” he said.
Still, Parsons acknowledged that more needs to be done to prevent AI-related misinformation and disinformation. He said widespread adoption of digital content authentication standards such as Content Credentials is needed, along with media literacy education.
“None of these measures are silver bullets. Truly tackling misinformation will require governments, civil society, technology companies and a variety of technological approaches,” he said.