Senator Mitt Romney Requests Federal AI Oversight

The widespread use of artificial intelligence (AI) faces hurdles, experts say, as the technology's rapid advances and wide range of applications pose major challenges to effective regulation.
Many observers agree that AI can pose risks, but they also point out how difficult it is to determine which AI systems require rigorous monitoring. They emphasize the importance of balancing the promotion of new technology with the containment of its risks.
“The evidence is clear: existing safeguards for LLMs [large language models] are easy to circumvent,” Daniel Christman, co-founder of AI cybersecurity company Cranium, told PYMNTS. “Red teams and malicious actors alike have repeatedly exploited vulnerabilities to create and disseminate safety and security threats. In one case, for example, an LLM was manipulated into outputting instructions for building harmful devices.”
Currently, there is limited federal legislation specific to AI in the United States. In March, the European Parliament passed the Artificial Intelligence Act, marking the world's first broad horizontal legal framework for AI. The law establishes uniform EU rules on data quality, transparency, human oversight and accountability.
A call to action on AI
Romney and his colleagues Sen. Jerry Moran, Sen. Jack Reed and Sen. Angus S. King Jr. sent a bipartisan letter to congressional leaders on Tuesday (April 16). In it, they discussed not only the potential dangers of AI but also its benefits, such as improving Americans' quality of life. They warned, however, that AI could fuel disinformation, fraud, bias and privacy violations, with consequences for elections and jobs.
To manage these risks, the senators proposed four options for regulating AI. These include creating a new committee that would leverage existing resources and expertise from the Departments of Commerce and Energy to coordinate efforts across agencies. The lawmakers also proposed establishing a new agency dedicated to AI oversight.
The plan could face bureaucratic hurdles. Nicholas Reese, a professor at New York University's Center for Global Affairs, told PYMNTS that the main challenge will be clearly defining which types of AI and which specific use cases the proposal covers. AI encompasses a wide range of technologies, he noted, making it difficult to determine how each should be regulated.
Reese explained that the plan calls for creating a new government agency with special powers, which would require passing new legislation.
“Commerce and NIST [National Institute of Standards and Technology] are not set up as ‘oversight’ bodies as envisioned in the plan, and adjustments to their authority would be needed, which must be done by statute,” he said.
“Second, creating a new federal agency specifically to address the national security risks of AI is an extreme step,” Reese added. “It means that DoD and IC [intelligence community] implementations of AI (however defined) would gain an added layer of approval bureaucracy, in organizations that need to become more agile, not more complex.”
Reese said AI will improve fields such as biotechnology, but its oversight should not fall to a newly created U.S. government agency. As an alternative, he pointed to the Department of Homeland Security's Countering Weapons of Mass Destruction Office, which already has dedicated experts.
“They would be ideally placed to oversee and mitigate the risks of the convergence of AI and weapons of mass destruction,” he said.
Jon Clay, vice president of threat intelligence at cybersecurity company Trend Micro, told PYMNTS that he believes regulation requires a thoughtful balance.
“Government oversight should not impede technological advances unless they pose a significant threat to humanity or America's critical infrastructure,” Clay said. “But we also should not step back entirely and allow private industry to develop whatever it wants. A balance is needed that allows technological progress without unduly hindering or restricting it.”
Clay said global competition in AI development must also be considered, noting that other nation-states are likely to advance similar technologies rapidly and that the United States should not limit its own development.
Is AI really a threat?
As reported by PYMNTS, experts differ significantly in their assessments of the risks AI may pose, highlighting a lively debate over the technology's potential impact on humanity and business.
For example, a survey conducted in March by the Forecasting Research Institute polled researchers, AI experts and elite forecasters known as “superforecasters” to gather their opinions on the dangers of AI.
The survey revealed that AI experts are generally more concerned about AI risks than superforecasters. Even so, despite grave warnings of an impending AI takeover, many AI experts maintain a measured view of the technology.
When asked about the current threat posed by AI, Clay maintained a pragmatic perspective.
“We appear to be in the early stages of this technology. The threat at this point is primarily its use by adversaries to power cyberattacks such as phishing, deepfakes and misinformation campaigns,” Clay said.
He acknowledged that while “Skynet is not realistic at this point,” the potential benefits of AI still appear to outweigh the risks.
Christman said AI is rapidly shaping a variety of sectors, but it also poses unique threats that could impact global security.
“AI is certainly a double-edged sword,” Christman explained. “On the one hand, we are seeing notable benefits, such as improvements in medical diagnostics and financial forecasting. On the other, there is a darker side: AI can be misused in ways that amplify traditional security threats.”
Just as the internet revolutionized communications and commerce while also introducing new forms of cyberattacks and security breaches, AI has similarly transformative but potentially dangerous capabilities, Christman said.
“Think about today's cyberattacks and imagine them leveraging AI. They could become more sophisticated, faster and harder to detect,” he said.
Christman emphasized the importance of preemptive action, in the form of a strong regulatory framework, to address these concerns.
“As technology evolves, so does the potential for abuse. We need a framework that is not only rigorous, but also adaptable to proactively mitigate these risks,” he said.