Days before New Hampshire's Jan. 23 primary, there was a robocall that sounded like President Joe Biden asking voters to stay home and not vote.
“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again,” the voice said. An operative for Biden's leading opponent, Rep. Dean Phillips (D-Minn.), later told NBC News that he had commissioned the call himself without the campaign's knowledge.
On the Republican side, Donald Trump's campaign posted an audio clip that appeared to capture Florida Gov. Ron DeSantis talking to Adolf Hitler, and DeSantis' campaign posted photos that appeared to show Trump hugging Anthony Fauci.
This use of AI goes beyond ordinary campaign trickery into outright hoaxes. You might assume it's illegal, but the federal agency that oversees elections has yet to take regulatory action against so-called deepfakes.
The head of the Federal Election Commission (FEC) said he expects the issue to be resolved “later this year,” leaving open the possibility that this misinformation will remain unregulated through the 2024 election cycle.
Good-government advocates worry that deepfakes could show voters candidates doing things they never actually did.
“You can imagine candidates being shown drunk, passed out, hugging criminals, kissing their opponents,” said Robert Wiseman, president of Public Citizen, the nonprofit group pushing the Federal Election Commission to adopt new rules. “It is very difficult for affected candidates to refute content that appears to be authentic.”
Group asks Federal Election Commission to ban deepfakes
FEC regulations already prohibit candidates and their campaign agents from fraudulently misrepresenting themselves as speaking or acting for another candidate or party in a way that damages them. Public Citizen asked the FEC to clarify that using artificial intelligence to put words in a candidate's mouth constitutes that type of misrepresentation.
This is the advocacy group's second attempt. The FEC declined to act on Public Citizen's initial petition in June, when commissioners deadlocked on whether to open the proposal for public comment: the three Republican commissioners voted no and the three Democratic commissioners voted yes.
Commissioners then voted 6-0 in August to move forward with Public Citizen's second attempt, and dozens of groups, members of the public, and Democrats in Congress have since submitted comments in support of the proposal. But Wiseman said the FEC is moving too slowly, and the rulemaking process is unlikely to be completed by Nov. 5.
“A properly functioning FEC would have moved proactively to address this problem a long time ago,” Wiseman said. “They didn't.”
He said the agency “should not be dragged along kicking and screaming.”
FEC Chairman Sean Cooksey disputed Wiseman's comments.
“Public Citizen statements are typically long on angry expressions and short on content,” Cooksey wrote in a statement to USA TODAY. “Any suggestion that the FEC is not doing its job with respect to pending AI rulemaking applications is completely false. We expect the Commission to continue its regulatory review process and resolve the petitions later this year.”
FEC spokeswoman Judith Ingram said the agency enforces its regulations case by case, typically in response to complaints.
The FEC's enforcement powers are civil, meaning it typically imposes fines. But partisan deadlocks like the one in June are common, which means gray areas of the law can go unenforced.
“The FEC's enforcement record is dismal, and we hope that if the FEC adopts this rule, its enforcement will be less dire,” Wiseman said.
AI deepfake bills appear in at least 30 state legislatures
Republican and Democratic state lawmakers are taking up the issue themselves. Wendy Underhill, director of elections and redistricting for the National Conference of State Legislatures, said her team has tracked 55 bills introduced in 30 states, several of which passed this year. California and Texas have had such laws on the books since 2019.
In Indiana, a bill with bipartisan support headed to the governor's desk would require ads that use deepfake technology to disclose that “elements of this media have been digitally altered or artificially generated.” The law applies to photos, videos, and audio, and affected state and federal candidates could seek damages in civil court.
“This bill was very easy to pass, because nobody here wants people to be fooled about what they're seeing,” said state Rep. Julie Olthoff, the Republican who sponsored the bill. “And secondly, who's going to say ‘no'? Who's going to vote, ‘No, we want people to do whatever they want with AI generation'?”
Olthoff said she based her bill on laws already in place in Washington state. She said she wasn't entirely satisfied that monetary damages were the only remedy, noting that lawmakers had removed language that would have created criminal penalties or banned the technology entirely.
“If a deepfake ad comes out three days before the race and you lose, it's already too late,” Olthoff said. “You lost the race because of it. All you get is what the judge says.” She added: “Maybe we can make a little more progress in the future, but there's an urgency to this.”
In Kansas, Republican state Rep. Pat Proctor worked with Democratic leaders on a bill that would require disclosure of deepfake ads and make it a crime to impersonate election officials, such as the secretary of state, in order to persuade people not to vote. That language was inspired by the Biden robocall.
“This technology can create material so realistic that it is indistinguishable from the real thing, and anyone can make it very cheaply, so I think we need to put guardrails in place to protect voters,” Proctor said.
Proctor said that although the bill has stalled, it can and will be revived. “This is definitely something that has bipartisan support, because we're both going to be victims of this, and we all know that,” he said.
Big tech companies require disclosure of AI deepfakes
Since November, Google has required “clear and conspicuous” disclosures from election advertisers who use AI in photos, videos, and audio. YouTube, which Google owns, also announced in November that it was rolling out an update that would “notify viewers if the content they're watching is synthetic.”
Meta, which owns Facebook, Instagram and Threads, says its staff removes ads that are false, doctored, partly false or lacking context. It also prohibits ads that discourage people from voting, question the legitimacy of an election, or prematurely claim victory. The policy covers AI-generated ads.
Andrew Critch, an AI researcher at the University of California, Berkeley, said creating deepfakes should be illegal. He said federal agencies could establish norms and Congress could find bipartisan agreement on criminalizing deepfake production.
“We need to make it illegal to create deepfakes, and we need to educate people that there needs to be a way to tell what is a deepfake and what is not,” Critch said.
On Wednesday, Sens. Amy Klobuchar, D-Minn., and Lisa Murkowski, R-Alaska, introduced a bill that would require disclaimers on AI-generated political ads and mandate that the FEC address violations of the law.
Olthoff said the federal government needs to act so campaigns aren't forced to navigate a patchwork of state laws. “Now you need to understand what the disclaimer is, and it varies from state to state,” she said. “Or, if you have been harmed, you have to sue in more than 30 states.”
Wiseman said that even if the FEC adopts a rule, it may not solve all the challenges of deepfakes, in part because the commission has often deadlocked on enforcement votes and taken no action at all. But the rule at least needs to be on the books, he said.
“We need enforcement to stop robberies, but if robbery were completely legal, we would expect more people to steal,” he said.