New York
CNN
—
A new report commissioned by the U.S. State Department paints an alarming picture of the “catastrophic” national security risks posed by rapidly evolving artificial intelligence, with the authors warning that the federal government is running out of time to avert disaster.
The findings are based on more than a year of interviews with more than 200 people, including executives from leading AI companies, cybersecurity researchers, weapons of mass destruction experts, and national security officials within the government.
The report, published this week by Gladstone AI, flatly states that, in the worst case scenario, cutting-edge AI systems could “pose an extinction-level threat to humanity.”
The US State Department did not respond to a request for comment.
The report's warnings are another reminder that while the potential of AI continues to fascinate investors and the public, there are also real risks.
“AI is already an economically transformative technology, enabling us to treat diseases, make scientific discoveries, and overcome challenges once thought insurmountable,” Jeremy Harris, CEO and co-founder of Gladstone AI, told CNN on Tuesday.
“But it can also pose serious risks, including catastrophic risks, and we need to be aware of that,” Harris said. “And a growing body of evidence, including empirical studies and analyses presented at the world's top AI conferences, suggests that beyond a certain threshold of capability, AI could spin out of control.”
White House Press Secretary Robin Patterson said President Joe Biden's executive order on AI is “the most significant action any government in the world has taken to seize the promise and manage the risks of artificial intelligence.”
“The President and Vice President will continue to work with our international partners to urge Congress to pass bipartisan legislation to manage the risks associated with these emerging technologies,” Patterson said.
“Clear and urgent need” for intervention
Researchers warn of two major dangers posed broadly by AI.
First, the most advanced AI systems could be weaponized to inflict potentially irreversible damage, Gladstone AI said. Second, the report cited private concerns within AI labs that they could at some point “lose control” of the very systems they are developing, with “potentially catastrophic consequences for global security.”
“The rise of AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons,” the report said, adding that AI raises the risk of an “arms race,” conflict, and “fatalities on the scale of weapons of mass destruction.”
Gladstone AI's report calls for dramatic new measures to confront this threat, including the creation of a new AI agency, “urgent” regulatory action, and limits on the computing power available to train AI models.
“There is a clear and urgent need for U.S. government intervention,” the authors wrote in the report.
The report notes that although the work was prepared for the State Department, its views “do not reflect the views” of the State Department or the U.S. government.
According to a government notice issued in 2022 by the State Department's Office of Nonproliferation and Disarmament Funds, the government offered to pay up to $250,000 to assess the “proliferation and security risks” associated with advanced AI.
Harris, the Gladstone AI executive, said his team's “unprecedented level of access” to public- and private-sector stakeholders led to its startling conclusions. Gladstone AI said it spoke with technical and leadership teams at ChatGPT owner OpenAI, Google DeepMind, Facebook parent company Meta, and Anthropic.
“Along the way, we learned some hard facts,” Harris said in a video posted on Gladstone AI's website announcing the report. “Behind the scenes, the safety and security landscape for advanced AI appears to be quite inadequate compared to the national security risks that AI may soon introduce.”
The Gladstone AI report says competitive pressures are pushing companies to accelerate AI development “at the expense of safety and security,” raising the prospect that cutting-edge AI systems could be “stolen” and “weaponized” against the United States.
This conclusion joins a growing list of warnings about the existential risks posed by AI, even from some of the industry's most powerful figures.
Almost a year ago, Geoffrey Hinton, known as the “Godfather of AI,” quit his job at Google to blow the whistle on the technology he helped develop. Hinton has said there is a 10% chance that AI will lead to human extinction within the next 30 years.
Last June, Hinton and dozens of other AI industry leaders and academics signed a statement stating that “reducing the risk of extinction from AI should be a global priority.”
Business leaders are increasingly concerned about these risks even as they pour billions of dollars into AI. At last year's Yale University CEO Summit, 42% of CEOs surveyed said AI could wipe out humanity within five to 10 years.
In its report, Gladstone AI cited several prominent figures who have warned about the existential risks posed by AI, including Elon Musk, Federal Trade Commission Chair Lina Khan, and a former OpenAI executive.
Gladstone AI says some employees at AI companies share similar concerns privately.
“One official at a prestigious AI institute expressed the view that it would be ‘a very bad thing’ if certain next-generation AI models were released as open access,” the report said, because their capabilities could “subvert democracy” if used in areas such as election interference and voter manipulation.
Gladstone said it asked AI experts at frontier labs to privately share their personal estimates of the likelihood that an AI incident could cause “global and irreversible effects” in 2024. Estimates ranged from 4% to as high as 20%, according to the report, which notes that the estimates were informal and likely subject to significant bias.
One of the biggest wildcards is how quickly AI evolves, especially AGI, a hypothetical form of AI with human-like or even superhuman learning capabilities.
The report states that AGI is considered a “key driver of catastrophic risk from loss of control,” and notes that OpenAI, Google DeepMind, Anthropic, and Nvidia have all publicly stated that AGI could be reached by 2028, though others think it is much further away.
Gladstone AI says disagreement over AGI timelines makes it difficult to develop policies and safeguards, and that regulations could “prove harmful” if the technology develops more slowly than expected.