- Elon Musk has recalculated the cost-benefit analysis of AI's risks to humanity.
- He estimates there is a 10-20% chance that AI will wipe out humanity, but believes we need to build it anyway.
- An AI safety expert told BI that Musk is underestimating the risk of a potential catastrophe.
Elon Musk believes AI is worth the risk, even if there is a 1-in-5 chance the technology turns against humanity.
Speaking at the four-day Abundance Summit's "Big AI Debate" seminar earlier this month, Musk revised his earlier risk assessment of the technology, saying: "I think there's some chance that it will wipe out humanity. I probably agree with Geoff Hinton that it's about 10% or 20% or something like that."
But he added: “I think the positive scenario probably outweighs the negative scenario.”
Musk did not explain how he arrived at that estimate.
What is p(doom)?
Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, told Business Insider that Musk is right that AI could pose an existential threat to humanity, but that "if anything, he's a bit too conservative" in his assessment.
"In my opinion, the actual p(doom) is much higher," Yampolskiy said, referring to the "probability of doom": the chance that AI takes control of humanity or causes a humanity-ending event, such as creating a novel biological weapon or triggering the collapse of society through a large-scale cyberattack or nuclear war.
The New York Times has called p(doom) "the morbid new statistic that is sweeping Silicon Valley," with various tech executives cited by the paper putting the chances of an AI apocalypse anywhere from 5% to 50%. Yampolskiy puts the risk "at 99.999999%."
Because advanced AI would be impossible to control, Yampolskiy said, our only hope is never to build it in the first place.
"I don't know why he thinks it's a good idea to pursue this technology anyway," Yampolskiy added. "If he's worried about competitors getting there first, it doesn't matter, because uncontrolled superintelligence is equally bad no matter who creates it."
“It's like a child with god-like intelligence.”
Last November, Musk said there was a "non-zero chance" the technology could end up "going bad," but he stopped short of saying he believed it could spell the end of humanity.
Musk has supported regulating AI, but last year he founded xAI, a company devoted to making the technology more powerful. xAI is a competitor to OpenAI, the company Musk co-founded with Sam Altman before stepping down from its board in 2018.
At the summit, Musk estimated that digital intelligence will exceed all human intelligence combined by 2030. While he maintains that the potential positives outweigh the negatives, Musk acknowledged the risk to the world if AI development continues on its current trajectory in some of the most direct terms he has used publicly.
"You kind of grow an AGI. It's almost like raising a kid, but one that's a super genius, like a god-like intelligence kid, and it matters how you raise the kid," Musk said at the Silicon Valley event on March 19, referring to artificial general intelligence. "One of the things I think is incredibly important for AI safety is to have a maximally truth-seeking and curious AI."
Musk said the "bottom line" for achieving AI safety is to grow the AI in a way that forces it to be truthful.
"Don't force it to lie, even if the truth is unpleasant," Musk said of the best way to keep humans safe from the technology. "Very important. Don't make the AI lie."
Researchers have found that once an AI learns to lie to humans, the deceptive behavior is impossible to reverse with current AI safety measures, The Independent reported.
A study cited by the outlet found that "if a model were to exhibit deceptive behavior due to deceptive instrumental alignment or model poisoning, current safety training techniques would not guarantee safety and could even create a false impression of safety."
Even more troubling, the researchers added, is that AI could plausibly learn to deceive on its own rather than being specifically taught to lie.
Hinton, often called the "godfather of AI" and the basis for Musk's risk estimate, told CNN: "If it gets to be much smarter than us, it will be very good at manipulation because it would have learned that from us. And there are very few examples of a more intelligent thing being controlled by a less intelligent thing."
A representative for Musk did not immediately respond to a request for comment from Business Insider.