DENVER — While artificial intelligence made headlines with ChatGPT, behind the scenes the technology has quietly pervaded everyday life: screening job resumes and rental apartment applications, and even helping determine medical care in some cases.
While a number of AI systems have been found to discriminate, tipping the scales in favor of certain races, genders or incomes, there is scant government oversight.
Lawmakers in at least seven states are taking big legislative swings at regulating bias in artificial intelligence, filling the void left by congressional inaction. The proposals are some of the first steps in a decades-long debate over balancing the benefits of this nebulous new technology against its widely documented risks.
“AI does in fact affect every part of your life, whether you know it or not,” said Suresh Venkatasubramanian, a Brown University professor who co-authored the White House's Blueprint for an AI Bill of Rights.
“Now, you wouldn't care if they all worked. But they don't.”
Success or failure will depend on lawmakers working through complex problems while negotiating with an industry worth hundreds of billions of dollars and growing at a speed best measured in light-years.
Of the roughly 200 AI-related bills introduced in state legislatures last year, only about a dozen were passed into law, according to BSA The Software Alliance, which advocates on behalf of software companies.
Those bills, along with the more than 400 AI-related bills being debated this year, are largely aimed at regulating smaller slices of AI. That includes nearly 200 targeting deepfakes, among them proposals to bar pornographic deepfakes like the ones of Taylor Swift that flooded social media. Other bills seek to rein in chatbots such as ChatGPT so they do not, for example, spit out instructions for making a bomb.
Those are separate from the seven state bills, being debated from California to Connecticut, that would apply across industries to regulate AI discrimination, one of the technology's most perverse and complex harms.
Those who study AI's penchant for discrimination say states are already behind in putting up guardrails. The use of AI to make consequential decisions, what the bills call “automated decision tools,” is pervasive but largely hidden.
It is estimated that as many as 83% of employers use algorithms to help in hiring; among Fortune 500 companies, that figure reaches 99%, according to the Equal Employment Opportunity Commission.
Yet polling from Pew Research shows that the majority of Americans are unaware these tools are being used, let alone whether the systems are biased.
AI can learn bias through the data it is trained on, typically historical data that can hold a Trojan horse of past discrimination.
Amazon scuttled a hiring algorithm project about a decade ago after discovering that it favored male applicants. The AI had been trained to evaluate new resumes by learning from past resumes, which came largely from male applicants. Although the algorithm was not told applicants' genders, it still downgraded resumes that included the word “women's” or mentioned a women's college, in part because such resumes were barely represented in the historical data it learned from.
“If you let the AI learn from decisions that existing managers have historically made, and those decisions have historically favored some people and disadvantaged others, then that is what the technology will learn,” said Christine Webber, an attorney in a class-action lawsuit alleging that an AI system that scores rental applicants discriminated against people who are Black or Hispanic.
Court documents say that one of the plaintiffs, Mary Louis, a Black woman, applied to rent an apartment in Massachusetts and received a cryptic response: “The third-party service we use to screen all prospective tenants has denied your tenancy.”
According to court records, when Louis submitted two landlord references to show she had paid rent early or on time for 16 years, she received another reply: “Unfortunately, we do not accept appeals and cannot override the outcome of the tenant screening.”
That lack of transparency and accountability is, in part, what the bills are targeting. Most follow last year's failed California proposal, the first comprehensive attempt to regulate AI bias in the private sector.
Under the bills, companies using these automated decision tools would be required to conduct “impact assessments,” including descriptions of how AI figures into a decision, the data collected and an analysis of the risks of discrimination, along with an explanation of the company's safeguards. Depending on the bill, those assessments would be submitted to the state, or regulators could request them.
Some bills would require companies to inform customers that AI will be used to make decisions and allow them to opt out with certain caveats.
Craig Albright, senior vice president of U.S. government relations for the industry lobby group BSA, said its members generally support some of the proposed measures, including impact assessments.
“The technology moves faster than the law, but there are actually benefits to the law catching up. It helps (companies) understand their responsibilities, and consumers can have greater trust in the technology,” Albright said.
But it has been a lackluster start for the legislation. A Washington state bill has already stalled in committee, and a California proposal introduced in 2023, on which many of the current proposals are modeled, also died.
California Assemblywoman Rebecca Bauer-Kahan has revamped her bill that failed last year, winning the backing of some technology companies, including Workday and Microsoft, after dropping a requirement that companies routinely submit their impact assessments. Other states where bills have been, or are expected to be, introduced include Colorado, Rhode Island, Illinois, Connecticut, Virginia and Vermont.
These bills are a step in the right direction, Brown University's Venkatasubramanian said, but the impact assessments and their ability to catch bias remain vague. Without greater access to the reports, which many of the bills limit, it is also hard to know whether a person has been discriminated against by an AI.
A more intensive but accurate way to identify discrimination would be to require bias audits (tests to determine whether an AI is discriminating) and to make the results public. The industry has pushed back on that, arguing it would expose trade secrets.
Requirements to routinely test AI systems are absent from most of the legislative proposals, nearly all of which still face a long road ahead. Still, this is the start of lawmakers and their constituents wrestling with what is becoming, and will remain, an ever-present technology.
“It covers everything in your life. Just by virtue of that, you should care,” said Venkatasubramanian.
___
Associated Press writer Trân Nguyễn in Sacramento, California, contributed.