It is not the responsibility of private companies to manage the impact of artificial intelligence on society, said the co-founder of the nonprofit AI research group EleutherAI. Instead, it is up to elected governments to properly regulate the sector to keep people safe.
Speaking at Fortune's Brainstorm AI conference in London on Monday, EleutherAI co-founder Connor Leahy was asked whether the tech industry should be held accountable for how innovative technologies affect ordinary people. He said no.
Companies shouldn't even have to answer society-wide questions about AI, Leahy told Fortune's Ellie Austin.
“This may be controversial, but it's not the oil companies' responsibility to solve climate change,” he explained. Instead, he said, it is the government's role to stop oil companies from causing climate change, or at least to make them pay to clean up the messes they create.
He added that guardrails should be phased in at the government level rather than coming from within the industry, at least when it comes to society-wide issues.
But the head of EleutherAI, which was founded in 2020 and operates primarily through an open Discord server, said the onus is on companies when it comes to managing expectations about what AI can actually do.
At the moment, the technology is “very unreliable,” he continued, adding that it is “not at a human level [of] reliability.”
AI leaders want to be regulated
Some of the most prominent voices in the technology industry agree with Leahy, and even the sector's disruptors are looking to the government for some kind of safety net.
“Regulation of AI is essential,” Sam Altman, CEO of ChatGPT maker OpenAI, told a Senate Judiciary subcommittee last May.
He supported “appropriate safety requirements, including internal and external testing prior to release” for AI software, and called for some kind of licensing and registration regime for AI systems above a certain capability threshold.
But the billionaire CEO, who was famously fired and rehired, also called for a governance framework “flexible enough to adapt to new technological developments,” saying regulation needed to strike a balance between incentivizing safety and “ensuring that people are able to access the technology's benefits.”
Similarly, Elon Musk, the Tesla CEO whose companies use AI for everything from the large language model Grok to the humanoid robot Optimus and self-driving cars, said regulation was “a nuisance” but necessary.
“I think we've learned over the years that having referees is a good thing,” Musk said in a conversation with British Prime Minister Rishi Sunak at the UK AI Safety Summit.
Regulation in the works
CEOs longing for regulation that works across multiple markets may have gotten their wish in recent weeks.
Earlier this month, the US and UK governments signed a memorandum of understanding pledging a common approach to AI safety testing and guidance.
The two governments pledged to work closely together and to invite other countries to join the approach.
“Our partnership makes clear that we are not running away from these concerns, but rather heading toward them,” U.S. Commerce Secretary Gina Raimondo said at the time.
“By working together, we are furthering the long-standing special relationship between the United States and the United Kingdom and laying the foundations to ensure the security of AI now and in the future.”