Some of the world's biggest companies and wealthiest people are fighting over an issue that will help shape the future of AI: Should companies reveal exactly how their products work?
Tesla and SpaceX CEO Elon Musk has chosen to make public the computer code behind his AI chatbot Grok, upending the debate in recent days.
This move contrasts with the approach taken by OpenAI, which developed the popular AI text bot ChatGPT. OpenAI, which counts tech giant Microsoft as its biggest investor, has released relatively few details about the latest algorithms behind its products.
Elon Musk did not respond to ABC News' request for comment. Neither did OpenAI.
In a statement earlier this month, OpenAI rejected claims that the company keeps its AI models secret.
“We further our mission by building widely available and useful tools. We are committed to supporting our mission in ways that empower people and improve their everyday lives, including through open source contributions,” the company said. “We are making our technology widely available. We provide broad access to today's most powerful AI, including a free version that hundreds of millions of people use every day.”
Here's what you need to know about Grok, why Elon Musk released the computer code, and what it means for the future of AI.
What is Musk's AI chatbot Grok?
Last year, Musk launched an artificial intelligence company called xAI, vowing to develop generative AI programs to compete with established products like ChatGPT.
Musk has warned several times about the risk of political bias in AI chatbots, which can help shape public opinion and risk spreading misinformation.
But content moderation itself has become a polarizing subject, and Musk's outspoken views place his approach in a high-profile political context, some experts have previously told ABC News.
xAI debuted an early version of its first product, Grok, in November. Grok responds to user prompts with humorous comments modeled after Douglas Adams' classic science fiction novel, The Hitchhiker's Guide to the Galaxy.
Grok is powered by Grok-1, a large language model that generates content based on statistical probabilities learned from scanning vast amounts of text.
“We believe in designing AI tools that work for people of all backgrounds and political views. We also want to make AI tools available to users in accordance with the law,” xAI said in a November blog post. “Our goal with Grok is to explore and publicly demonstrate this approach.”
Why did Musk release the code?
The decision to release the code behind Grok touches on two issues central to Musk: his ongoing battle against the threats posed by AI, and his feud with rival OpenAI.
Musk has long warned that AI risks serious societal harm. In 2017, he tweeted: “If you're not concerned about AI safety, you should be.” More recently, in March 2023, he signed an open letter warning of the “profound risks to society and humanity” posed by AI.
In his remarks Sunday, Musk appeared to frame the decision to open source as a way to ensure transparency, protect against bias, and minimize the danger posed by Grok.
“We still have work to do, but this platform is already the most transparent and truth-seeking one,” Musk said in a post on X.
The move is also directly related to the public feud between Musk and OpenAI.
Musk, who co-founded OpenAI but left the company in 2018, sued OpenAI and its CEO Sam Altman earlier this month, accusing the company of abandoning its mission to benefit humanity in its rush to profit.
Days after filing the lawsuit, Musk said in a post on X that he would drop the suit if OpenAI changed its name to “ClosedAI.”
OpenAI said in a statement earlier this month that it intends to move to dismiss all of Musk's legal claims.
“As we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control,” OpenAI said. “Elon left, saying there needed to be another company and that he was going to do it himself, and he said he would be supportive of us finding our own path.”
What are the stakes in the battle over open source AI vs. closed source AI?
The debate over whether to make the computer code behind AI products public divides along two competing visions of how to limit harm, eliminate bias, and optimize performance.
On one side, open source advocates argue that publishing the code allows a broader community of AI engineers to identify and fix flaws in a system, or to tune it for purposes beyond its original design.
In theory, open source code offers programmers the opportunity to improve the security of a given product while ensuring accountability by making everything public.
“Whenever someone writes software, there could be bugs that can be exploited in ways that lead to security vulnerabilities,” Sauvik Das, a professor at Carnegie Mellon University who specializes in AI and cybersecurity, told ABC News. “It doesn't matter if you're the best programmer in the world.”
“With open source, you have a whole community of practitioners digging in and gradually building patches and defenses over time,” Das added.
In contrast, proponents of closed source insist that the best way to protect AI is to keep the computer code private, out of the hands of bad actors who might repurpose it for malicious ends.
Closed-source AI also carries a business advantage for companies that profit from offering advanced products not available elsewhere.
“It's simply more difficult to repurpose closed-source systems for malicious reasons because of the existing limits on what they can do,” Kristian Hammond, a computer science professor at Northwestern University who studies AI, told ABC News.
Last month, the White House announced it would seek public comment on the benefits and risks of open source AI systems. The move comes as part of a comprehensive set of AI rules issued by the Biden administration through an executive order in October.
Carnegie Mellon's Das said Musk's open source release may be motivated by both public and personal interests, but added that the move has sparked a much-needed debate about this aspect of AI safety.
“Even if the motives aren't always completely pure, the fact that this is raising public awareness of the idea of open versus closed, and the benefits and risks of both, is exactly what we need in society right now,” Das said.