Bradley Childers, information systems manager for the Contra Costa County Department of Clerks, Records and Elections, stands with a technician in a room where signatures on ballots are verified on Thursday, February 29, 2024, in Martinez, California. (Aric Crabb/Bay Area News Group)
As Contra Costa County election staff met with local law enforcement and FBI agents to plan protection against and response to voting-related threats for the 2024 election year, an unusual new risk was added to the list: generative artificial intelligence, Silicon Valley's blockbuster product.
In one mock scenario, news reports highlighted problems at local polling places and appeared designed to discourage people from voting. But the reports were fabricated, and a malicious actor had used AI to spread the misinformation. Election officials and law enforcement scrambled to thwart the threat by investigating who was behind the reports and disseminating accurate information to the public.
Now, on the eve of next week's Super Tuesday primaries, election departments around the Bay Area and around the country are grappling with the risks of AI, especially after a forged version of President Joe Biden's voice was used in a January robocall to discourage voting in the New Hampshire primary. California Attorney General Rob Bonta joined officials from other states in condemning the AI intervention, which Bonta said could undermine "the integrity of the voting process."
Less than three weeks after news of the fake Biden robocall broke in January, the Federal Communications Commission made it illegal to use AI-generated voices in unsolicited robocalls. Commission chairwoman Jessica Rosenworcel cited the use of the technology by "bad actors" to "misinform voters," as well as to extort people and impersonate celebrities.
In Santa Clara County, election officials are connected to an information-sharing network with agencies across the country to track how new AI technology could affect local elections, said Matt Moreles, assistant registrar of voters. He and his colleagues worry little about AI hacking voting systems or tampering with results, because the defenses are robust. They are more concerned that AI-generated material could be used to mislead voters.
“It only spreads misinformation and confusion,” Moreles said.
After quietly permeating daily life through apps like Apple's Siri and driver-assistance technology, artificial intelligence suddenly seized public attention with the 2022 launch of ChatGPT, a generative AI bot from San Francisco startup OpenAI. Other companies soon introduced products that could generate realistic text, audio, and images in response to user prompts.
The explosion has raised a range of concerns: copyright infringement by companies that scrape online data to "train" their software, the replacement of human workers with AI, students cheating on exams, and people spreading fabricated material as propaganda and political disinformation.
"Misinformation is definitely something to be concerned about this election cycle," said Susan Hyde, a political science professor at the University of California, Berkeley. Election fraud is nothing new, Hyde said, and efforts to discourage people from voting have been going on for decades. But AI can be used to spread misinformation faster and more widely than was possible in past election cycles.
“We have to be careful about foreign interference. It's been around for a while,” Hyde said. “We should be concerned about partisan forces, from local to national.”
AI is providing new tools to instill convincing election-related falsehoods in voters, Hyde said, and those falsehoods can ripple through social and family networks, where people may believe false information because it comes from a source close to them. Misinformation attacking the legitimacy of an election can lead people to conclude that American democracy is a sham, Hyde said, making them more receptive to candidates with "cults of personality" and "win-at-all-costs" partisan thinking.
Marci Andino, senior director at the Center for Internet Security, said she expects AI-based interference in this year's elections to peak as the November general election approaches.
The federal Cybersecurity and Infrastructure Security Agency warns that the technology could be used to spread false voting information through text, email, social media channels or publications. "AI tools could be used to create audio and video files impersonating election officials that spread false information to the public about the security and integrity of the election process," the agency said in a security bulletin. "AI-generated content, such as compromising deepfake videos, can be used to harass, impersonate, or delegitimize election officials."
The agency warned that convincing but false election results could be generated and used to manipulate public opinion. Systems could also be compromised if voice clones were used to impersonate election office staff and gain access to "sensitive election administration or security information." And AI could create "fake videos of election vendors making false statements that call into question the security of election technology," the agency said.
AI consultant Reuben Cohen's biggest concern is the use of generative AI to create “weaponized apathy” by persuading people not to vote.
"It's actually easier to get someone to do nothing than to get them to do something," said Cohen, who is based in Toronto and advises Fortune 500 companies.
Newly released software makes it cheap and easy to generate realistic videos, and election meddlers can buy data from the dark web to target people according to their demographics, purchasing habits, and psychological profiles, Cohen said.
“Our situation today is a thousand times different than where we were in the last election in terms of our actual ability to do this,” Cohen said. “What I'm concerned about is ease of access.”
Officials say reliable information is key to preventing AI from harming elections. They urged people to seek out government election websites and official social media channels, to call local election offices to confirm or debunk information arriving from other sources, and to rely on trusted news outlets.
The news isn't all bad: Georgetown University researcher Josh Goldstein said there is currently no evidence that AI-powered propaganda can sway election outcomes.