With primaries underway and a high-stakes presidential election approaching in the fall, many voters are wondering where, when, and how to vote, and many will turn, knowingly or not, to artificial intelligence platforms for answers. In recent research, we found that these AI platforms are rife with misleading information about elections. Technology companies bear responsibility for curbing these falsehoods, but we also need government regulation to hold them accountable.
Voters are likely to use chatbots such as ChatGPT, AI-embedded search engines, or newer AI-based apps and services such as Microsoft Copilot, which launched last year and is integrated into office software such as Word and Excel. And those tools may tell them lies about the election.
In January, we convened approximately 50 experts, including local and state election officials, researchers, journalists, civil society advocates, and technology industry veterans, to test how five leading AI models, both closed and open, answered common election questions. The election officials, including two from Los Angeles County, helped evaluate responses specific to their jurisdictions.
We tested the AI models by connecting to the back-end interfaces available to developers. These interfaces do not always return the same answers as the chatbots' web interfaces, but they are the underlying infrastructure on which chatbots and other AI services are built.
The results were disastrous: Half of the models' answers to questions voters might ask were rated inaccurate by our experts.
They made all kinds of mistakes, and they made things up. Meta's Llama 2 falsely declared that California voters can vote by text message, hallucinating a fictional "Vote by Text" service and adding a wealth of detail that made it seem believable.
A Meta spokesperson said Llama 2 "is a model for developers" and not the way the public would ask election-related questions. However, Llama 2 powers easily accessible web-based chatbots such as Perplexity Labs and Poe.
Mixtral, a French AI model, accurately stated that voting by text is not allowed. But when testers persisted in asking how to vote by text in California, it responded with an enthusiastic and bizarre "¡Hablo español!" Mistral, its maker, did not respond to a request for comment.
Google, meanwhile, announced in December that it would block its AI model, Gemini, from responding to certain election-related queries. Yet we found Gemini to be quite talkative, giving long, assertive-sounding, and often inaccurate answers, including links to nonexistent websites and references to fictitious polling places.
Asked where to vote in ZIP code 19121, a majority-Black neighborhood in North Philadelphia, Gemini claimed that no such voting precinct exists, though of course it does. Such answers raise concerns about voter suppression. A Google representative said the company regularly makes technical improvements.
OpenAI, too, pledged in January not to misrepresent voting processes and to direct ChatGPT users to CanIVote.org, a legitimate source of voting information run by the National Assn. of Secretaries of State. In our tests, however, ChatGPT never pointed users to CanIVote.org, and it was inaccurate 19% of the time, telling a Texas voter, for example, that he could wear a MAGA hat to his polling place (he cannot). In response, an OpenAI spokesperson said the company is working to improve the accuracy of voting information.
According to our expert testers, only one query drew correct answers from every AI model: All of them accurately stated that the 2020 election was not stolen. This is likely because the companies have set content filters to ensure their software does not repeat conspiracy theories.
Many states have attempted to address the problem by passing laws that criminalize the dissemination of false information or the use of deepfakes in the context of elections. The Federal Communications Commission also recently banned robocalls that use AI. But these laws are difficult to enforce because AI-generated content is hard to identify, and it is even harder to track who created it. And these bans target intentional deception, not the everyday inaccuracies we found.
The European Union recently passed an AI law that requires companies to label AI-generated content and develop tools to detect synthetic media. But it does not appear to require accuracy in election information.
Federal and state regulators should require companies to ensure that their products provide accurate information. Our research suggests that regulators and lawmakers also need to scrutinize whether AI platforms are serving their intended purpose in critical areas such as voter information.
Tech companies must do more than promise to keep chatbot hallucinations out of elections. They should be more transparent, publishing information about vulnerabilities in their products and sharing the results of regular testing that shows how they are addressing them.
Until then, our limited review suggests that voters should avoid relying on AI models for voting information. Instead, they should contact local and state election offices for reliable information about where and how they can vote. Election officials should follow the model of Michigan Secretary of State Jocelyn Benson, who, ahead of the state's Democratic primary, warned that "misinformation and the potential for voters to be confused, lied to or deceived" is the biggest threat to elections this year.
As hundreds of AI companies emerge, let them compete on the accuracy of their products, not just on hype. Our democracy depends on it.
Alondra Nelson is a professor at the Institute for Advanced Study and a distinguished senior fellow at the Center for American Progress. She served as deputy assistant to the president and acting director of the White House Office of Science and Technology Policy.
Julia Angwin is an award-winning investigative journalist, best-selling author and founder of Proof News, a new nonprofit journalism studio.