Last week, the Department of Homeland Security announced the creation of an Artificial Intelligence Safety and Security Board tasked with advising the Secretary on "the safe development and deployment of AI technologies in our nation's critical infrastructure." This new board, which includes top AI industry leaders and CEOs, will undoubtedly garner both attention and authority. That raises an important question: where should the board start, and which high-impact initiatives should it prioritize?
The board and its work have the potential to influence broader AI policy, so it is essential to get things right. As the board moves forward, it should take practical steps to make the most of its limited bandwidth, triage AI challenges, and ensure the resilience and improvement of our most critical systems. In short: it needs to focus.
The first step is obvious: identify the true scope of responsibility. DHS currently defines "critical infrastructure" as 16 industry sectors that together account for more than 50% of the U.S. economy. How can one board effectively advise across all of these areas, let alone on an emerging and fast-moving technology like AI?
This broad scope persists in current policy despite recommendations from the U.S. Cyberspace Solarium Commission, established by Congress from 2019 to 2021. Most people assume "critical" means power grids, water supplies, pipelines, and hospitals, but existing policies also extend to ride-sharing, factories, office buildings, retail stores, cosmetics, and, most questionably, casinos.
For the 22 individuals serving on the AI Safety and Security Board, this sprawling scope risks overwhelming priorities, wasting time, and diverting focus from the core systems that matter most. We truly depend on power grids, water, energy, planes, trains, and hospitals. Start there. Even this limited subset will be a major undertaking requiring deep thought and effort.
Not all priorities are equal. Among the many hypothetical AI risks, cyber risk is one of the few we will almost certainly experience. That points to the second core focus area: digital systems have bugs, and bugs get exploited.
Thankfully, no novel AI-enabled threats have emerged yet. A recent report from OpenAI and Microsoft found that malicious "threat actors" are exploring AI-powered cyber tactics, techniques, and procedures in the wild, but their use so far has been limited to low-level tasks such as research, troubleshooting, and generating spear-phishing emails. Given this runway, the board should focus on preparing for the unknown, the unexpected, and the possibility that offensive AI cyber capabilities will eventually emerge in the wild.
In addition to developing baseline security recommendations, a key step is analyzing existing cyber regulations to identify gaps, overlaps, and opportunities for harmonization that would enable agile action. The board should also consider whether agencies have the resources to adequately support this work.
For example, the staff managing the National Vulnerability Database (NVD) is stretched thin. The resulting backlog leaves this critical piece of the nation's cyber infrastructure unable to keep pace with analyzing reported vulnerabilities. More resources will be needed, especially if AI amplifies cyber threats. Overall, deep cyber capacity helps governments and industries flex to meet new demands, positioning them for success in an uncertain cyber future.
As a third pillar, the board should focus on the promise and proliferation of AI. Appropriately applied to critical infrastructure, AI has the potential to improve reliability and enable greater efficiencies for both government and the economy.
For example, a 2019 Google study found that machine learning analysis of weather patterns and wind turbine output could predict wind power supply 36 hours in advance. As a result, Google boosted the value of its wind energy by roughly 20%. Other emerging applications include downed-power-line detection, predictive maintenance, water quality analysis, and, of course, AI-enhanced cyber defense. The board should carefully consider which bottlenecks impede successes like these: What regulations stand in the way? Where are more investment and research needed?
As the board sets out to advise DHS, it is essential that it approach its work with a clear strategy and sense of priorities. By limiting the scope of its efforts, prioritizing cyber risk, and recognizing AI's immense potential to improve and strengthen critical systems, the board can make significant progress toward minimizing AI's disruptions and maximizing its benefits.
Matthew Mittelstedt is a technologist and research fellow at George Mason University's Mercatus Center.
Copyright © 2024 Federal News Network. All rights reserved.