As government regulators in North America, Europe and Asia grapple with the legal, safety, national security and ethical issues raised by advances in artificial intelligence, business leaders worry that a patchwork of different rules will add complexity, at least in the short term.
“The big story is that everyone is pursuing AI regulation, and if there are 100 standards, there are no standards,” said Danny Tobey, head of law firm DLA Piper’s AI and data analytics practice.
The European Union is the furthest along, establishing rules that could be approved by the bloc’s parliament and come into force as early as 2025. The 27 member states are leaning toward a risk-based approach and could ban the technology outright in extreme cases. Some “high-risk” AI systems will require approval before being brought to market. China and India are among the countries considering their own paths to regulating AI.
Executives expect to rely on frameworks already in place for other forms of technology, which they say can be applied to emerging AI regulations. Juniper Networks employs people who “basically monitor compliance laws around the world,” said Bob Friday, chief AI officer at the company, which sells networking and cybersecurity products in 175 countries.
Friday acknowledges that AI introduces new complexities. “When you build a plane or a car, there are a lot of regulations to make sure that the car or plane is safe for the public,” he says. AI, by contrast, exhibits cognitive reasoning. “It’s not completely deterministic,” he added. “These systems don’t have consistent behavior.”
DLA Piper predicts that Europe is likely to be the most stringent on AI, reflecting the region’s approach to other technologies. The firm therefore urges its clients to plan around general principles that are broadly applicable: keeping humans in the loop at all times, testing AI both before and after launch, and providing clear explanations of the technology wherever possible.
“Deploying multiple control systems within one company is too much, so we are developing a baseline approach for many of our multinational clients,” says Tobey.
“If you think about how we approach data usage more generally, there are a lot of similarities,” says Elise Houlik, chief privacy officer at Intuit, the TurboTax and CreditKarma software provider. “Be honest about what you're doing, give proper notice, and provide the right amount of transparency to understand where consumer choice should come into play. It will port well.”
Intuit regularly meets with policymakers to discuss AI, with the goal of ensuring regulatory language is clear and does not stifle innovation. Tensions around AI stem largely from consumer anxiety about the technology, Houlik said. People want a clear understanding of when AI is being used, whether they can opt in or out of it, and how their data is being used.
“And then the next layer is: ‘Okay, we’re convinced this is worth it. Now we’re going to make sure it’s safe and make sure we’re getting the best possible outcome. And it’s my job to make sure the right data is pulled in and used at the right time,’” Houlik said.
In the United States, at least 40 states, Puerto Rico, the Virgin Islands and Washington, D.C., introduced AI bills during the 2024 legislative session, focusing on areas such as election-related content, child sexual abuse material and other criminal uses, medical applications, and transparency in decision-making and data use.
“If we were to treat each state like a small country, it would be like 50 small countries,” said Friday, noting that such fragmented regulation could create new compliance costs for companies to untangle.
Regulators also need to sort out who is liable in the event of a breach. They must answer questions such as: If a large language model created by an AI maker violates Chinese or US regulations, does the maker face fines and enforcement actions, or does the company that deployed the technology? And how would such rules affect large companies that develop their own AI models while also relying on models built by tech giants like Microsoft and Google?
There is also uncertainty around the risk-based approach being pursued in Europe. Experts say it will be easiest to agree on what counts as low-risk AI, but drawing the line between high and medium risk will be harder.
When Europe targeted social media companies like Meta, other technology providers were swept in as well. “We were required to comply with all European privacy regulations,” Friday said, even though networking companies were not the intended target of those rules.
Tom Siebel, founder and CEO of C3.ai, dislikes the regulatory proposals coming out of both the US and European markets. “What they’re trying to do is criminalize science,” Siebel says. “I don’t think they’re well considered, and I don’t think the people writing them understand what they’re saying.”
There are now millions of publicly available algorithms worldwide, and he worries that regulators won’t be able to keep up. And even if regulators had more time to examine the technology, Siebel doesn’t believe they could read an algorithm and determine whether it is safe.
Siebel acknowledges that governments must act on AI, but he advocates legislation rather than regulation. And in the private sector, he says, CEOs should ultimately be responsible for the safety of the AI they create or use.
“Do we need to put guardrails on the use of AI?” Siebel asks. “Absolutely.”