The Department of Homeland Security (DHS) has become the first federal agency to unveil a formal plan for embracing artificial intelligence (AI).
DHS on Monday (March 18) released a “roadmap” for the planned use of AI in three pilot projects.
First, Homeland Security Investigations (HSI) will test AI to detect fentanyl and to combat child sexual exploitation. Second, the Federal Emergency Management Agency (FEMA) will leverage AI to support community hazard mitigation plans. Third, U.S. Citizenship and Immigration Services (USCIS) plans to use AI to train immigration officers.
“We cannot ignore this,” Department of Homeland Security Secretary Alejandro Mayorkas told the New York Times in a report on Monday.
“And if we are not prepared to recognize and address the potential for good and the potential for harm, it will be too late. That's why we're acting quickly.”
According to the report, DHS will work with companies such as OpenAI, Anthropic, and Meta on the pilot program.
DHS also plans to hire 50 AI experts to protect critical infrastructure from AI-powered attacks and to thwart the use of the technology for things like child sexual abuse and the creation of biological weapons, according to the report.
Additionally, the department plans to use chatbots to train immigration officers to interview refugees and asylum seekers, and to gather information about communities across the country to help create disaster relief plans.
DHS plans to report on the results of the pilots by the end of the year, Eric Hysen, the department's chief information officer and head of AI, told the Times. He said the agency has selected OpenAI, Anthropic and Meta to experiment with different tools, and will also use cloud providers Microsoft, Google and Amazon in the pilots.

“We can't do this alone,” Hysen said. “We need to work with the private sector to help define what is responsible use of generative AI.”
The DHS effort joins a number of other White House initiatives in the past year aimed at governing the use of AI, and follows the launch of the American AI Safety Institute (AISI), a Commerce Department program that develops technical guidelines for use by regulators.
Also last year, President Joe Biden issued an executive order aimed at promoting safe AI development, requiring developers of the “most powerful AI systems” to share safety test results and other critical information with the government.
Meanwhile, PYMNTS examined the debate over the threat posed by AI last week, noting that some AI experts believe the gloomy headlines about the technology have been exaggerated.
“Simply put, these machines need humans, and will continue to do so for some time,” Niagara University professor Sean Daly said in an interview with PYMNTS.
“We provide not only the infrastructure but also critical guidance that is essential to these machines. As for nefarious actors using AI for harmful purposes, I think it is reassuring that we have managed the nuclear age fairly well.”