Last month, the White House announced new rules establishing how the federal government uses artificial intelligence systems, including basic safety and civil rights protections.
Given the well-documented potential of AI to cause harm, including discrimination and expanded surveillance, these rules are urgently needed as federal agencies race to deploy the technology.
The good news is that most of the new rules announced in a memo by the Office of Management and Budget are clear, sensible, and strong. Unfortunately, they give agencies too much discretion to opt out of key safeguards, severely undermining their effectiveness.
Before federal agencies move further down the AI path, the Biden administration needs to make changes to ensure opt-outs are the exception rather than the rule.
But let's start with the good, and there is a lot of it. The OMB memo sets out a wide range of “minimum risk mitigation practices” that federal agencies must implement.
Before using AI that could impact people's rights or safety, such as by falsely arresting racial minorities based on facial recognition errors or denying benefits to low-income families because of flawed algorithms, agencies must conduct impact assessments that pay particular attention to “potential risks to underserved communities.”
They also need to assess whether AI is a better fit than other means of achieving the same goals, an important threshold given that AI systems ill-suited to the task at hand often cause harm. They must then test real-world performance and mitigate emerging risks through continuous monitoring.
If a government agency fails to implement these practices, or if testing shows that AI is unsafe or violates people's rights, the agency will be prohibited from using the technology. All of this underscores a key tenet of the OMB memo: AI is off the table if the government cannot guarantee that it will protect people from harm caused by algorithms.
But given how robust these new rules are, it's even more alarming that OMB is giving agencies wide latitude to circumvent them.
One loophole allows an agency, and the agency alone, to exempt itself from the minimum practices if it determines that compliance would “increase the risk to safety or rights overall” or “create an unacceptable impediment to critical agency operations.” Such vague standards are easy to abuse, not least because it is hard to see how practices designed to reduce risk would instead increase it.
Agencies will also have leeway to opt out if they decide that AI is not the “primary basis” for a particular decision or action. New York City's law to combat AI bias in hiring has a similar loophole, which has undermined its effectiveness. The law requires employers to audit AI-powered hiring tools for racial and gender bias and post the results, but only when those tools “substantially support or replace” human decision-making. As a result, very few employers have reported their audits.
You don't have to look far to see the consequences of such broad regulatory exemptions. Government agencies are already integrating AI into a variety of functions with few safeguards, and the results have not been encouraging.
For example, the app used by Customs and Border Protection to screen immigrants and asylum seekers relies on facial recognition, which has been found to be less accurate at identifying people with darker skin tones. This has disproportionately prevented Black asylum seekers from applying for asylum.
In 2021, the Justice Department found that the algorithm it uses to evaluate who should be granted early release from federal prison disproportionately deemed Black, Asian, and Hispanic people likely to reoffend, making them less likely to qualify.
AI has also infiltrated programs jointly administered by the federal government and states, such as Medicaid benefits that provide home care assistance to the elderly and disabled. More than 20 states use algorithms associated with arbitrary and unwarranted reductions in home care hours, with thousands of beneficiaries inappropriately denied care, some of them forced to skip medical appointments, go without meals, and sit in urine-soaked clothes.
Worse, the decision to opt out of OMB's minimum practices rests solely with the “chief artificial intelligence officer,” each agency's designated official responsible for overseeing its use of AI. These officials must report such decisions to OMB and, in most circumstances, explain them to the public, unless the decision involves classified information, for example. However, these decisions are final and not subject to appeal.
And longstanding weaknesses in the way government agencies monitor themselves could undermine the important oversight role of chief AI officers. The Department of Homeland Security's privacy and civil rights watchdogs, for example, are chronically understaffed and isolated from operational decision-making. Under their watch, the department has circumvented basic privacy obligations and engaged in intrusive and biased surveillance practices of questionable intelligence value.
These flaws don't have to doom the OMB memo. Federal agencies should limit waivers and opt-outs to truly exceptional circumstances and ensure that the exercise of this discretion prioritizes public trust over expediency or secrecy. OMB, for its part, should carefully scrutinize such decisions and ensure they are clearly explained to the public. If it finds that waivers or opt-outs are being abused, it should reconsider whether they should be allowed at all.
Ultimately, however, the responsibility for enacting comprehensive protections lies with Congress, which can codify these protections and establish independent oversight over how they are enforced. The risks are too high and the harm is too great to leave large loopholes in place.
Amos Toh is senior counsel at the Brennan Center for Justice.
Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.