“When I started this, I thought it was going to be so bad that I wouldn't be able to fool anyone, but I was surprised,” Stevenson, who co-founded the site in 2021, said in an interview. “And we're not sophisticated. If we could do this, anyone with an actual budget would be able to do a good enough job that you and I would be fooled. That would be scary.”
As the 2024 presidential election draws ever closer, experts and officials are increasingly sounding the alarm about the potentially destructive power of AI deepfakes, warning that they could further erode the nation's shared sense of truth, sway voters and sow instability.
There are signs that AI, and the fears surrounding it, are already having an impact on the race. Late last year, former President Donald Trump falsely accused the creators of an ad featuring his own well-documented public gaffes of trafficking in AI-generated content. Meanwhile, actual fake images of Trump and other politicians, designed both to flatter and to damage them, have circulated time and again, causing confusion at key points in the election cycle.
Some officials are now rushing to respond. In recent months, the New Hampshire Department of Justice announced it was investigating spoofed robocalls featuring an AI-generated voice of President Biden. Washington state has warned voters to be wary of deepfakes. And lawmakers from Oregon to Florida have passed bills restricting the use of such technology in campaign communications.
And in Arizona, a key battleground state in the 2024 election, the top election official used deepfakes of himself in trainings to prepare staff for the onslaught of falsehoods to come. The exercise inspired Stevenson and his colleagues at the Arizona Agenda, a daily newsletter that seeks to explain complex political stories to its roughly 10,000 subscribers.
They brainstormed ideas for about a week and enlisted the help of tech-savvy friends. On Friday, Stevenson released a piece that included three deepfake videos of Lake.
It begins with a ruse, telling readers that Lake, a far-right candidate whom the Arizona Agenda has trolled in the past, has decided to record a testimonial about how much she enjoys the outlet. But the video quickly cuts to the punchline, giving the game away.
“Subscribe to the Arizona Agenda for real, shocking news,” the fake Lake tells the camera, adding: “And for a preview of the terrifying artificial intelligence coming in the next election, like this video, which is an Arizona Agenda deepfake designed to show just how good this technology is getting.”
By Saturday, the video had racked up tens of thousands of views, and drawn enough ire from the real Lake's camp that her campaign lawyer sent a cease-and-desist letter to the Arizona Agenda. The letter demanded that “the aforementioned deepfake video be immediately removed from all platforms where it was shared or disseminated,” and said Lake's campaign would “pursue all legal remedies” if the outlet did not comply.
A spokesperson for the Lake campaign declined to comment when contacted Saturday.
Stevenson said he was consulting with his attorney about how to respond but, as of Saturday afternoon, had no plans to take the video down. He sees the deepfakes as a teaching device, he said, a way to give readers the tools to spot such forgeries before they flood in as election season heats up.
“It's up to all of us to fight this new wave of technological disinformation during this election cycle,” Stevenson wrote in an article attached to the clip. “The best defense is to know what's out there and use critical thinking.”
Hany Farid, a professor at the University of California, Berkeley who studies digital propaganda and misinformation, said the Arizona Agenda video was a useful public service announcement, carefully crafted to limit unintended consequences. Still, he said, news organizations should be careful about how they frame deepfake coverage.
“I'm supportive of the PSA, but it's a balance,” Farid said. “We don't want readers and viewers to assume that anything that doesn't align with their worldview is fake.”
Farid said there are two different “threat vectors” for deepfakes. First, malicious actors can generate fake videos of people saying things they don't actually say. Second, people are more likely to dismiss authentic embarrassing or incriminating footage as fake.
Farid said this dynamic has been especially evident during Russia's invasion of Ukraine, a conflict in which misinformation is rampant. Early in the war, Ukraine promoted a deepfake showing Paris under attack, urging world leaders to react to Kremlin strikes with the same urgency they would if the Eiffel Tower were targeted.
Farid said this was a powerful message, but one that also opened the door to baseless Russian claims that subsequent videos from Ukraine showing evidence of Kremlin war crimes were similarly fabricated.
“I'm worried that everything is in doubt,” he said.
Stevenson has similar fears about his own backyard, which has in recent years become a political battleground and a hotbed of conspiracy theories and false claims.
“We've been fighting for years over what's true,” he said. “Objective facts can already be dismissed as fake news; now authentic videos will be dismissed as deepfakes, and deepfakes will be treated as reality.”
Researchers like Farid are hard at work developing software to make it easier for journalists and others to detect deepfakes. Farid said the suite of tools currently in use easily flagged the Arizona Agenda's videos as fake, a hopeful sign against future floods of forgeries. But deepfake technology is advancing rapidly, and spotting fakes could become much harder.
And even Stevenson's admittedly subpar deepfakes managed to fool some people. After Friday's newsletter went out under the headline “Kari Lake gave us a solid win,” some paying readers unsubscribed. Stevenson suspects they thought Lake's endorsement was genuine.
Megan Vasquez contributed to this report.