WASHINGTON — Representatives from 60 countries met last week outside Washington, D.C., and selected five countries to lead a yearlong effort to explore new safety guardrails for military AI and automated systems, officials told Breaking Defense exclusively.
The United States will be joined by Five Eyes partner Canada, NATO ally Portugal, Middle East ally Bahrain, and neutral Austria in gathering international feedback ahead of next year's second global conference, in what Defense and State Department officials describe as a crucial intergovernmental effort to keep artificial intelligence safe.
As AI proliferates in militaries around the world, from Russian attack drones to American combatant commands, the Biden administration is promoting the "responsible military use of artificial intelligence and autonomy" worldwide. That phrase is the title of the official Political Declaration the United States issued at the international REAIM conference in The Hague 13 months ago. Since then, 53 other countries have signed on.
Just last week, representatives from 46 of those governments (including the United States), plus another 14 observer countries that have not formally endorsed the Declaration, met outside Washington, D.C., to discuss how to implement its 10 principles.
"It's very important to both the Defense Department and the State Department that this is not just a piece of paper," Madeline Mortelmans, acting assistant secretary of defense for strategy, told Breaking Defense in an exclusive interview after the meeting. "It's about state practice and how we build states' capacity to meet the standards that we have committed to."
She stressed that this does not mean imposing American standards on other countries with vastly different strategic cultures, institutions, and levels of technological sophistication. "The United States is certainly leading in AI, but there are many countries that have expertise that we can benefit from," said Mortelmans, whose keynote address closed out the conference. "For example, our partners in Ukraine have had unique experience in understanding how AI and autonomy can be applied in conflict."
"As we have said often…we do not have a monopoly on good ideas," agreed Mallory Stewart, assistant secretary of state for arms control, deterrence, and stability, whose keynote address opened the conference. Still, she told Breaking Defense, "having the Department of Defense bring over 10 years of experience to the table…was invaluable."
So as more than 150 representatives from 60 countries spent two days in discussions and presentations, the agenda drew heavily on the Defense Department's approach to AI and automation, from the AI Ethical Principles adopted under then-President Donald Trump to last year's rollout of an online Responsible AI Toolkit to guide officials. To keep momentum going until the full group meets again next year (at a location yet to be determined), the countries established three working groups to dig deeper into the details of implementation.
Group 1: Assurance. The U.S. and Bahrain will co-lead an "assurance" working group focused on three of the most technically complex principles in the declaration: that AI and automated systems be built for "explicit and well-defined uses," subjected to "rigorous testing," and given "appropriate safeguards" against failure and "unintended behavior," including a kill switch so a human can shut them down if necessary.
Mortelmans told Breaking Defense that these technology areas are "areas where we particularly have a comparative advantage and where we felt there was unique value to add."
Even the declaration's call to clearly define the mission of automated systems "sounds very basic" in theory but is easy to get wrong in practice, Stewart said. Consider the lawyer who was fined for using ChatGPT to write a superficially plausible legal brief citing made-up cases, she said, or her own unsuccessful attempt to use ChatGPT to help her children with their homework. "And this is in a non-military context!" she emphasized. "The risks in a military context are catastrophic."
Group 2: Accountability. While the United States brings its deep technical expertise to bear on that problem, other countries will focus on the human and institutional aspects of keeping AI safe. Canada and Portugal will co-lead the working group on accountability, which focuses on the human dimension: ensuring that military personnel receive appropriate training to understand the technology's "capabilities and limitations," that there is "transparent and auditable" documentation of how the technology works, and that users exercise "due diligence."
Group 3: Oversight. Meanwhile, Austria (at least for now without a co-chair) is leading the working group on "oversight," which will consider broader policy issues such as compliance with international humanitarian law, high-level oversight, legal review, and rooting out "unintended bias."
Related: Avoiding accidental Armageddon: Report calls for new safety rules for unmanned systems
Real-world implementation
What does implementing these abstract principles actually look like? Probably something like the Defense Department's online Responsible AI Toolkit, which provides a general guide to implementing AI safety and ethics and is part of a push by the Department of Defense's Chief Digital & AI Officer (CDAO) to develop public, even open-source, tools.
Stewart highlighted the "excellent" presentation that the toolkit's chief architect, CDAO's Matthew Quan-Johnson, gave during the international conference, adding that having him walk attendees through the toolkit hands-on and answer their questions "was really, really helpful."
A few days after the conference ended, Johnson said during a Potomac Officers Club panel discussion on AI that "we've gotten incredibly positive feedback… Allies [were] saying they thought this was a very positive development," with a growing movement to open-source and share more materials and best practices.
"There's really great momentum and appetite," Johnson said during the panel discussion, for working out "how we go from these high-level principles to implementation — processes, benchmarks, tests, evaluations, metrics — so we can actually demonstrate that implementation is in line with the principles."
Johnson certainly came away excited. "With the Political Declaration, the CDAO Defense Partnership, and the second REAIM Summit being held in South Korea in September, this is a very exciting time for responsible AI in the international space," he said.
And that's just the military side. The Biden administration issued a sweeping executive order on federal use of AI in October, joined the UK-led Bletchley Declaration on AI safety in November, and, just last week, saw the United Nations General Assembly pass by consensus a US-led resolution calling for "safe, secure and trustworthy" AI for sustainable development.
But the administration is also trying to keep the civilian and military discussions separate. That is partly because military AI is more controversial, with many activists seeking a binding legal ban on "lethal autonomous weapons systems" that the United States and its allies, as well as adversaries such as Russia and China, want to retain room to develop.
"In pursuing a consensus UN resolution, we made a deliberate choice not to include discussion of military use," a senior administration official told reporters at a briefing ahead of last week's General Assembly vote. "There are many places to have those conversations [elsewhere], including within the United Nations system. We are conducting intensive diplomatic work on the responsible military use of artificial intelligence."
The two tracks are intended to be parallel and complementary. "I'm really happy that UNGA is taking this step on the non-military side," Stewart told Breaking Defense. "[There's] potential for advantageous and synergistic cross-pollination."
But the world still needs distinct forums where different groups can discuss different aspects of AI, she said. The United Nations brings all countries together on all issues; military AI conferences like REAIM include activists and other non-governmental groups; but the value of the Political Declaration and its implementation process lies in governments talking to other governments behind closed doors, specifically about military applications.
"The Political Declaration looks at this from an intergovernmental perspective," Stewart said. "We are focused on an environment where governments can discuss the challenges they face and the questions they have, and work on practical, concrete, and truly effective and efficient implementation."