In Cambridge, Sam Altman answered questions from MIT president and cell biologist Sally Kornbluth.
Her first question was “What is your p(doom)?” — in other words, what are the chances that AI will wipe out all human life?
In response, Altman criticized the framing of asking people to rate their doomsday predictions on a scale of 1 to 100.
“Whether you think it's 2, 10 or 90, it's not zero,” he said, adding that the p(doom) framing treats the risk as something static.
“There will always be room for doomers in society,” he said, expressing tolerance for that view. But rather than estimating the probability of our destruction, he suggested focusing on avoiding AI disaster and asking the important questions that might actually help.
“What do we need to do to navigate this safely?” he said. “Engage with it, confront it, take it seriously.”
As for ChatGPT, he said the model will improve over time.
“We have a ton of work ahead of us,” he said.
He contrasted the earlier, almost mythical conception of a superintelligent AI creature that would “rain money” on people with how the technological revolution is actually emerging: as a more gradual, fundamental trend.
“There are new tools in humanity's technology tree, and people are creating a lot of things with them,” he said. “I think AI will continue to become more capable and will be integrated into society in important and transformative ways.”
He compared AI to human cognition, noting that the combination of the two will ultimately enable many things.
“If we can create something as smart as all the super smart students here, that's a great accomplishment in some ways,” he said. “There are already a lot of smart people in the world.”
As AI helps reinvent our environment, Altman was optimistic in predicting the impact that will follow.
“It raises the quality of life and makes the economy run a little faster,” he said.
In response to another question from Kornbluth, Altman also addressed bias, saying that surprisingly good progress has been made in mitigating it.
“People like to talk about this and say, ‘Oh, you can't use these models because they're just spewing toxic output all the time,’ but GPT actually does quite well in this sense,” he said. “And then: who decides what bias means? How do we decide what the system should do?”
He then raised an important trade-off: where do you set hard limits on what AI can do?
“I think it’s important to give people a lot of control,” he said. “That being said, there are some things the system should not do.”
Kornbluth asked how to navigate the tension between privacy and the need for shared data.
In response, Altman predicted a future of personalized AI that will keep “a complete record of your life.”
“You can imagine how helpful that would be,” he said. “You can also imagine the privacy concerns it brings. …If you go down that path, how do you navigate the privacy, utility, and safety trade-offs, or the security trade-offs, that come with it?”
He raised, for example, the possibility that an AI could be subpoenaed to testify against you in court.
“It will be a new frontier for society,” he said. “We're already seeing [some of these privacy issues] in the services we all use. AI creates greater risks and greater trade-offs.”
This covers the first part of Altman's talk; I'll cover the rest in a future article.