Interview A fake image of Donald Trump being endorsed by Black voters, a middle-school student who created pornographic deepfakes of a female classmate, and Google's Gemini chatbot failing to accurately generate images of white people.
These are some of the latest disasters listed in the AI Incident Database, a website that tracks all the ways the technology goes wrong.
The AI Incident Database was originally launched as a project under the auspices of the Partnership on AI, an organization that aims to ensure AI benefits society. It is now a nonprofit funded by Underwriters Laboratories, the largest and oldest (founded in 1894) independent testing lab in the US. UL has tested all kinds of products, from furniture to computer mice, and its website has cataloged more than 600 unique automation- and AI-related incidents.
"There's a huge information asymmetry between the makers of AI systems and consumers, and that's not fair," argued Patrick Hall, an assistant professor at the George Washington University School of Business and a current director of the AI Incident Database. He told The Register: "We need more transparency, and we feel it's our job just to share that information."
The AI Incident Database is modeled after the CVE program established by the nonprofit MITRE, and the National Highway Traffic Safety Administration's website, which report publicly disclosed cybersecurity vulnerabilities and vehicle accidents respectively. "Any time there's a plane crash, a train crash, or a major cybersecurity incident, it's become common practice over decades to record what happened so we can understand what went wrong and prevent it from happening again."
The website is currently maintained by about ten people, plus a handful of volunteers and contractors who review and post AI-related incidents online. Heather Frase, a senior fellow at Georgetown's Center for Security and Emerging Technology specializing in AI assessment and a director of the AI Incident Database, claimed the website is unique in that it focuses on the real-world impacts of AI's risks and harms, not just software vulnerabilities and bugs.
The organization currently collects incidents from media coverage and reviews issues reported by people on Twitter. The AI Incident Database had recorded 250 unique incidents before the release of ChatGPT in November 2022, and now lists over 600.
Tracking AI problems over time may reveal interesting trends, and could help people understand the technology's real, current harms.
George Washington University's Hall found that roughly half of the reports in the database are related to generative AI. Some of them are "funny and ridiculous," such as dodgy products sold on Amazon titled "I cannot fulfill that request" (an obvious sign the seller used a large language model to write the description), and other examples of AI-generated spam. But some are "really depressing and serious," like the incident in San Francisco in which a Cruise robotaxi struck a woman and dragged her under its wheels.
"AI is mostly a Wild West right now, and the attitude is to move fast and break things," he lamented. It's not clear how the technology is shaping society, and the team hopes the AI Incident Database can provide insight into the ways it is being misused and expose unintended consequences, so that developers and policymakers are better informed and can improve their models or regulate against the most pressing risks.
"There's a lot of hype around. People talk about existential risk. I'm sure AI could pose very severe risks to human civilization, but it's clear to me that some of these more real-world risks, like the numerous injuries associated with self-driving cars, or bias perpetuated through the algorithms used in consumer finance and employment, are what we're seeing right now."
"We know we miss a lot, right? Not everything gets reported or covered by the media. A lot of the time people may not even realize that the harm they're experiencing comes from an AI," Frase said. "I expect physical harm to go up a lot. Right now the harms arising from large language models are [mostly] psychological and other intangible harms, but once generative robotics becomes a reality, I think physical harm will go up a lot."
Frase's biggest concern is that AI could erode human rights and civil liberties. She believes that collecting AI incidents will show whether policies have made the technology safer over time.
"You have to measure things to fix things," Hall added.
The organization is always looking for volunteers, and is currently focused on capturing more incidents and raising awareness. Frase stressed that the group's members are not AI luddites: "We probably come across as quite anti-AI, but we're not. We actually use it. We just want good things to come of it."
Hall agreed. "If you want the technology to keep moving forward, somebody has to do the work of making it safer," he said. ®