A report commissioned by the US State Department warns that rapidly evolving AI could pose a “catastrophic” risk to national security, and indeed to humanity as a whole.
The document, titled “Action Plan to Improve the Safety and Security of Advanced AI,” was first reported by TIME. It warns that the U.S. government needs to take “swift and decisive” action, including measures such as limiting the computational power allocated to training these AIs, or else face “extinction-level threats to humanity.”
“The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons,” the report said.
Although AI models have not yet reached the point, commonly known as AGI, at which they can compete with humans on an intellectual level, many argue that it is only a matter of time, and that governments should step in to get ahead of the problem before it's too late.
It's just the latest example of experts warning that AI technology poses an “existential risk” to humanity, a group that includes Yann LeCun, chief AI scientist at Meta, Demis Hassabis, the UK-based head of AI at Google and one of the technology's so-called “godfathers,” and former Google CEO Eric Schmidt.
A recent survey also found that more than half of the AI researchers polled put at least a 5% chance on humanity suffering an “extremely bad outcome,” such as extinction.
The 247-page report, commissioned by the State Department in late 2022, draws on interviews with more than 200 experts, including employees of companies such as OpenAI, Meta, and Google DeepMind, as well as government officials.
To prevent AI from leading to the end of our species, the authors recommend that the U.S. government set limits on the amount of computing power that can be used to train certain AI models, and require AI companies to seek government permission to train new models beyond a certain threshold.
Notably, the report also recommends making it a crime to open-source or otherwise publish the inner workings of powerful AI models.
These recommendations aim to address the risk of AI labs “losing control” of AI systems, which they say “could have devastating consequences for global security.”
“AI is already an economically transformative technology,” Jeremie Harris, one of the report's authors and CEO of Gladstone AI, told CNN. “It could allow us to cure diseases, make scientific discoveries, and overcome challenges we once thought were insurmountable.”
“But it could also bring serious risks, including catastrophic risks, that we need to be aware of,” he added. “And a growing body of evidence, including empirical research and analysis published at the world's top AI conferences, suggests that above a certain threshold of capability, AIs could potentially become uncontrollable.”
In a video posted on Gladstone AI's website, Harris claimed that current safety and security measures are “woefully inadequate relative to the national security risks that AI may introduce fairly soon.”
This isn't the first time industry leaders have warned about the potential dangers of AI, even as tens of billions of dollars are poured into developing the technology.
But it remains to be seen whether governments will heed these warnings. The news comes the same week the European Union passed the world's first major legislation regulating AI, which will likely set the tone for future AI regulation elsewhere in the world.
It's an alarming report that is bound to raise eyebrows, especially given the current state of AI regulation in the United States. Are the authors' concerns justified, or are their claims far-fetched, with recommendations that amount to government overreach and would stifle innovation?
After all, as its first page states, the report does not “reflect the views of the U.S. Department of State or the U.S. Government.”
“I think that this recommendation is very unlikely to be adopted by the United States government,” Greg Allen, director of the Wadhwani Center for AI and Advanced Technologies, told TIME.
More on AI extinction: Scientists say this is the probability that AI will wipe out humanity