A new bill has been introduced in the Senate that seeks to track security issues by requiring the creation of a database that records all breaches of AI systems.
The Secure Artificial Intelligence Act, introduced by Sen. Mark Warner (D-Va.) and Sen. Thom Tillis (R-North Carolina), would establish an Artificial Intelligence Security Center at the National Security Agency. The center would lead research into what the bill calls “counter-AI,” or technologies that learn how to manipulate AI systems. The center will also develop guidance to prevent AI countermeasures.
The bill also requires the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency to create a database of AI breaches, including “near misses.”
The bill proposed by Warner and Tillis focuses on techniques to counter AI, and categorizes AI into data poisoning, evasion attacks, privacy-based attacks, and abuse attacks. Data poisoning refers to a technique in which code is injected into data scraped by an AI model, corrupting the model's output. This has emerged as a common way to prevent AI image generators from copying art on the internet. Evasion attacks change the data that an AI model examines until the model becomes confused.
AI safety is one of the key items in the Biden administration's AI executive order, which directs NIST to develop “red team” guidelines and requires AI developers to submit safety reports. Red teaming is when a developer intentionally tries to make her AI model respond to prompts it wasn't supposed to.
Ideally, developers of powerful AI models should test the safety of their platforms and undergo extensive red teaming before releasing them to the public. Some companies, such as Microsoft, have created tools that make it easy to add safety guardrails to AI projects.
The Secure Artificial Intelligence Act must pass out of committee before being considered by the larger Senate.