Through Project Linchpin, the first program of record to deliver artificial intelligence to weapons and other systems, the Army is building an operational pipeline and comprehensive infrastructure for a trusted environment in which in-house and third-party algorithms can be developed and validated in a responsible and secure manner.
Three senior defense officials provided an update on that early effort to a small group of reporters during a media roundtable at the Pentagon on Monday.
The details they shared suggest that the Army, through Project Linchpin, is evolving its approach to known and unknown risks associated with AI deployment. In parallel, the service is also creating a new “AI Risk Mitigation Framework” to inform any future efforts.
“This is consistent with a lot of the work that the White House has been promoting on AI, and a lot of the work that the Department of Defense has been promoting on responsible AI, such as with Task Force Lima. We're aligned with all of those efforts, but we're also looking at the secondary and tertiary effects of things that need to be addressed early on from a liability perspective,” said Young Bang, the principal deputy assistant secretary of the Army for acquisition, logistics and technology.
Linchpin was first conceptualized in 2022 and is intended to ultimately generate a secure mechanism for the continued integration of government- and industry-created AI and machine learning capabilities into Army programs.
“Think of Project Linchpin as our path to delivering trustworthy AI,” Bharat Patel, Project Linchpin product lead in the Army's Intelligence, Electronic Warfare and Sensors Program Office, told reporters.
“The first thing I'd say is it's literally all the boring parts of AI: infrastructure, standards, governance, processes. Those are the areas we're working on, and they allow us to leverage the AI ecosystem and deliver capabilities at scale,” Patel said.
The Army's Tactical Intelligence Targeting Access Node (TITAN) program, a next-generation ground system for acquiring and distributing kill chain sensor data, will be the first program through which officials aim to field algorithms developed in partnership with Project Linchpin.
“At the moment, I'd say we're collecting use cases for AI. TITAN is going to support a certain theater, so we'll work with that theater and that program to understand everything that's going on — it's sort of a left and right limit that determines how this plays out. But if you think about it, with classic computer vision problems, each theater is different. You can't take something ready for [European Command] and use it in [Indo-Pacific Command]. The trees are different, the biospheres are different, everything is different. That's why it's so important to understand the use case and where it lies in a specific [area of responsibility]. So we're looking at that very carefully, because we want to make sure we tune the model to support our customers,” Patel told DefenseScoop during the roundtable.
Bang, Patel and their team have conducted what they call “a significant amount of market research” as part of launching this new program. Since November 2022, they have published four requests for information on Project Linchpin, collected “well over” 500 data points, and met individually with more than 250 companies.
Matt Willis, director of Army Prize Competitions and the Small Business Innovation Research (SBIR) program, said there could be more momentum on these fronts in the short term, and perhaps even increased funding.
“In [fiscal 2025], we anticipate significant investment in SBIR programs over the next year or so, particularly toward AI, which is also strategically aligned with Project Linchpin — [that’s] potentially up to $150 million or more. That's about 40 percent of the program, and it really shows our commitment to innovation, to AI, and to making sure that small businesses across the country can contribute to the Army,” he said.
At the roundtable, stakeholders also reiterated their intention to address the ethical and security risks associated with AI and machine learning through Project Linchpin as it continues to mature.
To that end, Army officials are also building an “AI Risk Mitigation Framework,” which Bang said is designed to address the Army's unique “roadblocks” in implementing emerging technologies.
“This is really a way to identify risks — like data poisoning, injections, adversarial text attacks — and to mitigate some of those risks. These are the types of things we know exist in the environment, whether on the commercial side or the Department of Defense side. We know they're out there, so we're actually trying to mitigate some of that,” Bang told DefenseScoop.
“This is really a framework for looking at what the cyber risks and vulnerabilities are as they relate to third-party algorithms, working with industry to categorize them, and developing tools to help mitigate those risks — and asking how we can take that process and adopt it sooner,” he added.
His team has also hosted numerous engagements with industry partners to find pathways to address the potential need for companies to provide an AI Bill of Materials, or AI BOM.
Fundamentally, such resources are envisioned to help the government better understand the potential risks and threat vectors that industry capabilities may introduce into its networks.
“We're conducting further sessions with industry, and we understand their point of view — this is not about reverse engineering their [intellectual property]. What's important to us is better managing the security risks associated with the algorithms. Based on industry's feedback, we're also working on developing AI summary cards. You can think of them more like baseball cards: they contain certain statistics about the algorithms, their intended use, and so on. So they're not very specific about the intellectual property, and they're not necessarily a threat to industry,” Bang said.