WASHINGTON (AP) — President Joe Biden's campaign and Democratic candidates are in a fierce competition with Republicans over who can best harness the potential of artificial intelligence, a technology that could transform U.S. elections and perhaps threaten democracy itself.
Still smarting from being outmaneuvered on social media by Donald Trump in 2016, Democratic strategists said they are nevertheless being cautious about deploying tools that have bedeviled disinformation experts. So far, Democrats say they are primarily leveraging AI to help find and motivate voters and to identify and counter deceptive content.
“Candidates and strategists are still figuring out how to leverage AI in their work,” said Betsy Hoover, director of digital organizing for President Barack Obama's 2012 campaign and co-founder of the progressive venture capital firm Higher Ground Labs. “But they are aware of the risks of misinformation and have been intentional about where and how they use it in their work.”
Election campaigns of both parties have long used AI (powerful computer systems, software, or processes that emulate aspects of human tasks and cognition) to collect and analyze data.
However, recent developments in powerful generative AI now allow candidates and consultants to generate text and images, replicate human voices, and create videos at unprecedented volumes and speeds.
That is why disinformation experts have issued increasingly urgent warnings about the risks posed by AI's ability to spread falsehoods that could suppress or mislead voters, or even incite violence, whether in the form of robocalls, social media posts, or fake images and videos.
These concerns became more urgent after high-profile incidents such as the spread of AI-generated images of former President Donald Trump being arrested in New York and an AI-generated robocall imitating Biden's voice that told New Hampshire voters not to cast ballots.
The Biden administration has sought to shape AI regulation through executive action, but Democrats overwhelmingly agree that Congress needs to pass legislation to put safeguards in place for the technology.
Top tech companies have taken steps to quell concerns in Washington, including vowing to regulate themselves. For example, leading AI companies have signed agreements to combat the use of AI-generated deepfakes around the world. But some experts said the voluntary efforts were largely symbolic and Congressional action was needed to prevent AI from being misused.
Meanwhile, campaigns and their consultants have generally been reluctant to discuss how they intend to use AI, wary of inviting scrutiny or revealing trade secrets.
Democrats have gotten “much better at just shutting down and doing the work and talking about it later,” said Jim Messina, a veteran Democratic strategist who ran President Obama's re-election campaign.
In a statement, the Trump campaign said that, “like many other campaigns across the country,” it uses “a suite of proprietary algorithmic tools” to streamline email delivery and protect registration lists from being filled with false information. Spokesman Steven Cheung also said the campaign did not “engage or utilize” tools supplied by AI companies and declined to comment further.
The Republican National Committee declined to comment, but it has experimented with generative AI. Hours after Biden announced he would seek re-election last year, the RNC released an ad that used AI-generated imagery to portray Republicans' dystopian fears about a second Biden term: China invades Taiwan, storefronts are boarded up, troops line American streets, and migrants cross the U.S. border.
A key Republican proponent of AI is digital consultant Brad Parscale, who in 2016 partnered with the scandal-plagued British data mining company Cambridge Analytica to aggressively target social media users. Most strategists agree that the Trump campaign and other Republicans made better use of social media than Democrats in that cycle.
Democrats tread carefully
The Biden campaign, Democratic candidates and progressive groups are grappling with the power of artificial intelligence, scarred by memories of 2016 and worried about falling behind Republicans in adopting the technology, according to interviews with consultants and strategists.
They want to use it in ways that maximize its capabilities without crossing ethical lines. But some said they worried that using it could open them to charges of hypocrisy: the White House has prioritized curbing AI-related abuses, and Democrats have long accused Trump and his allies of engaging in disinformation.
The Biden campaign said it is using AI to model and build audiences, draft and analyze email copy, and generate content for volunteers to share on the ground. The campaign is also testing AI's ability to help volunteers categorize and analyze large amounts of data, including notes volunteers take while knocking on doors or after speaking with voters by phone or text message.
A campaign official, who spoke on condition of anonymity because they were not authorized to discuss the AI work publicly, said experiments with AI-generated fundraising emails sometimes proved more effective than emails written by humans.
Biden campaign officials said they plan to explore uses of generative AI this cycle but will adhere to strict rules when deploying it. Prohibited tactics include using AI to mislead voters, spread disinformation or so-called deepfakes, or intentionally manipulate images. The campaign also prohibits the use of AI-generated content in advertising, social media, and other copy without review by staff.
The campaign's legal team has set up a task force of lawyers and outside experts to deal with misinformation and disinformation, particularly AI-generated images and videos. The group is similar to an internal team formed during the 2020 campaign known as the “malarkey factory,” a play on a phrase Biden often uses: “What a bunch of malarkey.”
The group was tasked with monitoring what misinformation was being spread online. Rob Flaherty, deputy campaign manager for the Biden campaign, said these efforts will continue and suggested that some AI tools could be used to counter deepfakes and other content before it spreads.
“The tools we use to mitigate mis- and disinformation are the same, we just need to do it at a faster pace,” Flaherty said. “It means we have to be more vigilant, be more careful, monitor things in different places and try some new tools, but the fundamentals remain the same.”
The Democratic National Committee said it was an early adopter of Google's AI offerings and is using some of their capabilities, including the ability to analyze voter registration records to identify patterns of voter removals or additions. The committee said it is also experimenting with AI to generate text for fundraising emails and to help interpret voter data collected over decades.
Arthur Thompson, the DNC's chief technology officer, said the organization believes generative AI is a “very important and impactful technology” that will help elect Democrats.
“At the same time, it is important to deploy AI responsibly and enhance the work of trained staff, rather than replacing them. We can and must do both. That's why we continue to take safety measures to stay on the cutting edge,” he said.
Progressive experiments
Progressive groups and some Democratic candidates are experimenting with AI more aggressively.
Higher Ground Labs, the venture capital firm Hoover co-founded, established an innovation hub known as the Progressive AI Lab along with Zinc Collective and Cooperative Impact Lab, two political technology coalitions focused on boosting Democratic candidates.
The goal was to create an ecosystem where progressive groups could streamline innovation, organize AI research and exchange information about large language models, Hoover said.
Higher Ground Labs, which also works closely with the Biden campaign and the DNC, has since funded 14 innovation grants, hosted forums where organizations and vendors can showcase their tools, and held dozens of AI training sessions.
Hoover said more than 300 people attended the group's AI-focused conference in January.
Jessica Alter, co-founder and president of Tech for Campaigns, a political nonprofit that uses data and digital marketing to fight extremism and help down-ballot Democrats, said the group ran AI experiments with 14 campaigns in Virginia last year.
According to Alter, emails written by AI raised three to four times more money per hour of work than emails written by staff.
Alter said she worries the party is being too cautious and falling behind on AI.
“We understand the downsides of AI, and we need to address them,” Alter said. “But my biggest concern right now is that the conversation in politics is dominated by fear, and that's not leading to balanced conversations or beneficial outcomes.”
It's difficult to talk about the “AK-47”
Rep. Adam Schiff, the leading Democratic candidate for U.S. Senate in California, is one of the few candidates to be open about his use of AI. Campaign manager Brad Elkins said the campaign is leveraging AI to increase efficiency. It partnered with Quiller, a company funded in part by Higher Ground Labs that develops tools to draft, analyze, and automate fundraising emails.
Schiff's campaign has also experimented with other generative AI tools. During a fundraising push last May, Schiff shared an AI-generated image of himself as a Jedi online. The caption read, “The Force is all around us. It's you. It's us. It's the grassroots team. #MayThe4thBeWithYou.”
The campaign faced some online backlash, but Elkins said being transparent about lighthearted deepfakes is an important guardrail as the technology becomes cheaper and more widely available.
“We are still figuring out how to ethically use AI-generated audio and video of a candidate,” Elkins said, adding that it is difficult to envision progress until there is an appetite to regulate and legislate against deceptive artificial intelligence.
The episode highlighted a challenge that every campaign seems to face: even talking about AI can be perilous.
“It's really difficult to explain how generative AI is a net positive when so many bad actors are using it against us: robocalls, fake images, fake video clips and so on,” said a Democratic strategist close to the Biden campaign, who was granted anonymity because they were not authorized to speak publicly. “How do you talk about the benefits of the AK-47?”
___
Associated Press writers Alan Suderman and Garance Burke contributed to this report.
___
This article is part of “AI Campaign,” an Associated Press series examining the impact of artificial intelligence on the 2024 election cycle.
___
The Associated Press receives funding from the Omidyar Network to support its reporting on artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.