All U.S. federal agencies will be required to have a senior leader overseeing the AI systems they use, part of the government's effort to ensure that AI used in public services is safe.
Vice President Kamala Harris announced new OMB guidance in a briefing with reporters, saying agencies must establish AI governance committees to coordinate how AI is used within the agency. Agencies will also be required to submit an annual report to the Office of Management and Budget (OMB) that describes all AI systems they use, the risks associated with them, and plans to mitigate those risks.
“We have directed all federal agencies to designate a chief AI officer with the experience, expertise, and authority to oversee all AI technologies used by that agency, to ensure that AI is used responsibly,” Harris told reporters.
Depending on the structure of the federal agency, the chief AI officer does not necessarily have to be a political appointee. A governing board must be established by the summer.
The guidance expands on previously announced policies outlined in the Biden administration's AI Executive Order, which requires federal agencies to develop safety standards and increase the number of AI talent working in government.
Some agencies had already started hiring chief AI officers before today's announcement. The Department of Justice announced Jonathan Meyer as its first CAIO in February; he will lead a team of cybersecurity experts exploring ways to use AI in law enforcement.
The U.S. government plans to hire 100 AI professionals by the summer, according to OMB Director Shalanda Young.
One of the responsibilities of agency AI officers and governance committees is to regularly monitor their AI systems. Young said government agencies will be required to submit an inventory of the AI products they use. If an AI use case is deemed too “sensitive” to be included in the inventory, the agency must publicly provide a reason for the exclusion. Agencies must also independently evaluate the safety risks of each AI platform they use.
Federal agencies will also need to verify that the AI they deploy meets safeguards that “reduce the risk of algorithmic discrimination and provide transparency to the public about how the government uses AI.” OMB's fact sheet offers several examples:
Travelers can opt out of TSA facial recognition at the airport without being delayed or losing their place in line.
When AI is used in the federal healthcare system to support critical diagnostic decisions, a human oversees the process to verify the tool's results and avoid disparities in healthcare access.
When AI is used to detect fraud in government services, impactful decisions will be subject to human oversight, and affected individuals will have the opportunity to seek remedy for harms caused by AI.
According to the fact sheet, if an agency cannot apply these safeguards, it must stop using the AI system unless agency leadership can justify why doing so would increase risks to safety or rights overall, or would create an unacceptable impediment to critical agency operations.
Under the new guidelines, government-owned AI models, code and data must be made publicly available unless they pose a risk to government operations.
The United States does not yet have laws regulating AI. The AI Executive Order provides guidelines only for agencies under the executive branch on how to approach the technology. Although several bills have been introduced to regulate some aspects of AI, there has been no significant movement toward comprehensive AI legislation.