WASHINGTON (AP) – Don Beyer's car dealership was one of the first in the United States to launch a website. As a representative, the Virginia Democrat leads a bipartisan group focused on promoting fusion energy. He reads books about geometry as a hobby.
So when questions arose about the regulation of artificial intelligence, Beyer, 73, took what seemed to him like an obvious step: enrolling at George Mason University to earn a master's degree in machine learning. Beyer's journey makes him an outlier in an era when lawmakers and Supreme Court justices sometimes admit they don't understand emerging technologies, and it highlights the lengths some members of Congress are going to as they try to learn about artificial intelligence while considering legislation that will shape its development.
It's scary for some, thrilling for others, and perplexing for many. Artificial intelligence has been described as a transformative technology, a threat to democracy, and even an existential threat to humanity. It will be up to lawmakers to find ways to regulate the industry that promote its potential benefits while mitigating the worst risks.
But first they need to understand what AI is and is not.
“I tend to be optimistic about AI,” Beyer told The Associated Press after a recent afternoon class at George Mason's campus in suburban Virginia. “We can't even imagine how different our lives will be in five, 10, 20 years because of AI. But there are other deeper existential risks that we need to pay attention to.”
The risks include massive job losses in industries made obsolete by AI; programs that produce biased or inaccurate results; and deepfake images, video and audio that can be exploited for political disinformation, fraud and sexual exploitation. On the other side of the equation, onerous regulations could hinder innovation and put the United States at a disadvantage as other countries seek to harness the power of AI.
Striking the right balance will require input not only from technology companies but also from industry critics and industries that AI could transform. While many Americans may have formed their ideas about AI from science fiction movies like The Terminator and The Matrix, it's important that lawmakers have a clear understanding of the technology, said Rep. Jay Obernolte, a California Republican who chairs the House AI Task Force.
When lawmakers have questions about AI, Obernolte is one of the people they turn to. He studied engineering and applied science at Caltech and earned a master's degree in artificial intelligence from UCLA. The California Republican also founded his own video game company. Obernolte said he is “very impressed” with how seriously his colleagues on both sides of the aisle are taking their responsibility to understand AI.
That's not surprising, Obernolte said. After all, lawmakers regularly vote on bills that touch on complex legal, financial, health and scientific subjects. If you think computers are complicated, consider the regulations governing Medicaid and Medicare.
Ever since the steam engine and cotton gin transformed the nation's industrial and agricultural sectors, keeping up with technological advances has been a challenge for Congress. Nuclear power and weapons are another example of a specialized topic lawmakers have had to grapple with in recent decades, said Kenneth Lowande, a University of Michigan political scientist who has studied expertise and how it relates to policymaking in Congress.
Congress has established several offices, including the Library of Congress and the Congressional Budget Office, to provide resources and expert opinion when needed. Members also rely on staff with specific expertise in subjects such as technology.
Additionally, there are more informal forms of education that many members of Congress receive.
“Interest groups and lobbyists are banging on doors and giving explanations,” Lowande said.
Beyer said he has been interested in computers all his life, and when AI emerged as a topic of public interest, he wanted to know more. A lot more. Almost all of his classmates are several decades younger. Beyer said most of them don't seem fazed by having a member of Congress as a classmate.
He said the classes, which he fits in around his busy congressional schedule, are already paying off. He has learned about developments in AI and the challenges facing the field, he said, which has helped him understand problems such as bias and unreliable data, as well as opportunities such as improved cancer diagnosis and more efficient supply chains.
Beyer is also learning how to write computer code.
“Learning to code helps me think about these kinds of mathematical algorithms step by step, which helps me think differently about many other things, like how I put together an office, how I approach my work, or how I put together legislation,” Beyer said.
A computer science degree isn't required, said Chris Pierson, CEO of cybersecurity firm BlackCloak, but it is essential that lawmakers understand AI's impact on the economy, national defense, health care, education, personal privacy and intellectual property rights.
“AI is neither good nor bad,” said Pierson, who previously worked in Washington for the Department of Homeland Security. “It's how you use it.”
Efforts to regulate AI have already begun, but so far they have been led by the executive branch. Last month, the White House announced new rules requiring federal agencies to show that their use of AI does not harm the public. Under an executive order issued last year, AI developers must provide information about the safety of their products.
When it comes to more substantive action, the United States lags behind the European Union, which recently enacted the world's first major rules governing the development and use of AI. The rules prohibit some uses, such as routine AI-enabled facial recognition by law enforcement, while requiring the makers of other programs to submit information about safety and risks to the public. The landmark legislation is expected to serve as a blueprint for other countries as they consider their own AI laws.
As Congress begins the process, Obernolte said the focus should be on “mitigating potential harm,” and he said he is optimistic that lawmakers from both parties can find common ground on how to prevent the worst AI risks.
“Nothing of substance will be accomplished without bipartisanship,” he said.
To guide the discussion, lawmakers created a new AI Task Force, co-chaired by Obernolte, and an AI Caucus made up of lawmakers with particular expertise or interest in the topic. They have invited experts to brief members on the technology and its impact: not only computer scientists and technology specialists, but also representatives from various sectors weighing AI's risks and benefits for themselves.
Rep. Anna Eshoo, the Democratic chair of the caucus, represents parts of California's Silicon Valley. She recently introduced a bill that would require tech companies and social media platforms like Meta, Google and TikTok to identify and label AI-generated deepfakes so the public is not misled. She said the caucus has already proven its value as a “safe space” where members can ask questions, share resources and begin building consensus.
“There are no bad questions or stupid questions,” she said. “You have to understand something before you accept or reject it.”
Copyright 2024 Associated Press. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.