Here's something to consider the next time you use an AI tool: most people working in artificial intelligence believe it has the potential to destroy humanity. That's the bad news. The good news is that the odds of that happening vary greatly depending on who you ask.
p(doom) is the "probability of doom": the chance that AI will take over the planet or do something that destroys us, such as creating a biological weapon or starting a nuclear war. Yann LeCun, one of the "three godfathers of AI," who currently works at Meta, sits at the brightest end of the p(doom) scale, putting the probability at less than 0.01%, meaning he considers it less likely than an asteroid wiping out humanity.
Sadly, few others are as optimistic. Geoffrey Hinton, another of the three godfathers of AI, has put the chance of AI destroying humanity within the next 20 years at 10%, and the third, Yoshua Bengio, raises that figure to 20%.
99.999999% probability
The most pessimistic is Roman Yampolskiy, an AI safety researcher and director of the Cybersecurity Institute at the University of Louisville. He believes the outcome is all but certain, putting the probability that AI destroys humanity at 99.999999%.
Speaking at "The Great AI Debate" seminar during the four-day Abundance Summit earlier this month, Elon Musk said he thinks there is some chance AI will wipe out humanity, putting it at 10% to 20%. "I probably agree with Geoff Hinton that it's about 10% or 20% or something like that," he said, before adding, "I think the positive scenarios probably outweigh the negative ones."
In response, Yampolskiy told Business Insider that he thought Musk's estimate was "a bit too conservative," and that because more sophisticated artificial intelligence will be nearly impossible to control, we should abandon developing the technology now.
"I don't know why he thinks it's a good idea to pursue this technology anyway," Yampolskiy said. "If he [Musk] is worried that competitors will get there first, it doesn't matter, because uncontrolled superintelligence is equally bad no matter who creates it."
At the summit, Musk offered a solution for avoiding human extinction at the hands of AI: don't teach it to deceive. "Don't force the AI to lie, even if the truth is unpleasant," Musk said. "Very important. Don't make the AI lie."
If you'd like to see where other AI researchers and forecasters currently sit on the p(doom) scale, check out the list here.