As with aircraft accidents, reporting mechanisms need to be in place for incidents in which artificial intelligence (AI) malfunctions, a former OpenAI board member said Tuesday (April 16).
Helen Toner, a director at Georgetown University's Center for Security and Emerging Technology, made the remarks in a talk at the TED conference, Bloomberg reported Tuesday.
Toner resigned from OpenAI's board last year after supporting the firing of CEO Sam Altman, a decision that was later reversed. Before that, Altman had tried to have her removed from the board because she co-authored a paper criticizing OpenAI's safety practices, according to the report.
In her TED talk Tuesday, Toner said it is necessary for AI companies to "share information about what they're building, what their systems can do, and how they manage risk," according to the report.
Toner also said this information shared with the public should be audited, so that AI companies are not the only ones verifying the claims they make, according to the report.
One example of where the technology could prove problematic is its use in AI-powered cyberattacks, the report said.
Toner said she has been working on AI policy and governance issues for eight years, giving her an inside look at how both government and industry have approached managing the technology, according to the report.
In a June 2023 interview with CNBC, Toner said there are questions among industry players about whether to introduce a new regulatory authority focused specifically on AI.
"Should we address this by working with existing regulators in charge of specific areas, or should one central body manage all types of AI?" Toner told CNBC.
In another recent development in this area, the U.S. and U.K. teamed up in early April to advance the development of safety testing for advanced AI.
The agreement aims to align the two countries' scientific approaches and accelerate the development of robust evaluation methods for AI models, systems, and agents.