As the world grapples with how to regulate artificial intelligence, Washington faces a unique dilemma: how to secure America's position as the world's AI leader while guarding against the technology's potential risks. Any country seeking to regulate AI must balance regulation and innovation, but the challenge is particularly acute for the United States, which has more to lose. Although the UK, European Union, and China all have strong AI companies, the field is dominated by US firms, propelled by America's open innovation ecosystem. That advantage has been on recent display, with OpenAI unveiling Sora, a powerful new text-to-video model, and Google announcing Gemini 1.5, a next-generation AI model that can process inputs more than 30 times the size of those its predecessor could handle.
If these trends continue and AI proves to be the game changer many expect, relinquishing US leadership will not be an option. But as a recent Senate hearing with social media executives reminded us, we also cannot leave another powerful technology entirely unregulated.
So far, the EU and China have led the way in regulating AI, but with different objectives in mind. The EU's recent AI legislation prioritizes minimizing social harms, such as discrimination from AI used in hiring, through a comprehensive "risk-based" approach. Unsurprisingly, China's AI regulations focus on reasserting state control over information. Neither approach is favorable to AI innovation, as some EU member states have already complained. Washington's challenge is to develop a distinctly American approach to AI regulation, one that secures US leadership while protecting its people and the world from the technology's potential dangers.
Although the Biden administration's AI executive order was a valuable first step, there are limits to what the executive branch can do on its own. Only Congress can provide America with a durable legal framework to govern this revolutionary technology. Lawmakers must balance a set of competing priorities as they consider their options. These include ensuring an open and competitive AI ecosystem, managing safety risks, controlling the proliferation of potentially harmful AI systems, and the need to stay ahead of China. To achieve these goals, the United States will need a flexible and adaptable regulatory framework that keeps pace with rapidly evolving technology.
From Sen. Chuck Schumer's AI Insight Forum to House Speaker Mike Johnson's new task force on AI, members of Congress have expressed an interest in acting in a bipartisan manner. That enthusiasm is welcome and warranted: AI may pose the most complex and urgent regulatory challenge Washington has ever faced. Here are four lessons for Washington to keep in mind as it grapples with AI regulation.
First, AI will always move faster than Congress. The automobile took decades to become common in American households; smartphones and social media took years. ChatGPT gained over 100 million users in two months. In just a few years, generative AI has gone from producing human-like text to generating realistic images and videos on demand to convincingly imitating a human voice from just three seconds of original audio. The relentless pace of AI development will always outrun the legislative process. Even if Congress passes AI regulations, we should not expect them to be revisited anytime soon. The last time Congress passed major technology legislation was in 1996, when most Americans were still on dial-up internet. Technology has reinvented itself many times since; the law has not. This is not to say Congress should abandon regulation. Rather, lawmakers should recognize that any laws they pass must be forward-looking and flexible enough to withstand advances in AI. That argues for a principles-based regulatory approach rather than fixed technical standards that may be obsolete before the ink is dry. It may also strengthen the appeal of an independent body empowered to target and adapt regulations over time, akin to the specialized agencies that oversee sectors such as pharmaceuticals, aviation, automobiles, food, agriculture, telecommunications, and finance.
Second, safety supports innovation. Although there is always tension between fostering innovation and ensuring safety, the two complement each other more than current debates suggest. Cryptocurrency offers a cautionary tale: a virtually unregulated sector predictably produced the spectacular collapse of FTX. For better or worse, that fiasco left the public and policymakers with a bleak impression of the field and likely hindered the technology's widespread adoption. It is not hard to imagine unregulated AI applications producing similarly high-profile failures, chilling adoption or provoking regulatory overcorrection from Washington. For AI to go far, it must go safely.
Third, AI regulation should encourage broad and open competition. The rise of large, expensive foundation models has given big companies a privileged position in training the most capable models at the frontier of AI development. In a surprising shift from the anti-regulatory stance most technology companies have held over the past two decades, some large AI firms are openly calling for government regulation of the most advanced AI systems. These voices are met with understandable skepticism; some argue that Big Tech wants regulation not out of virtue but to erect regulatory barriers against competitors. But leaving powerful AI models unregulated is not the solution either, and our experience with social media has shown that unregulated Big Tech is no recipe for healthy competition or social good. We should refuse to choose between letting the most powerful AI companies self-regulate and burdening them with rules that stifle innovation and competition. That requires crafting rules that are clear, consistent, and free of significant compliance costs, which is certainly no easy task. Congress should also seize opportunities to level the playing field, such as funding the National AI Research Resource to provide data and computing resources to academics and startups.
Fourth, American AI policy requires a global vision. As China, the EU, the UK, and others develop their own competing frameworks for global AI governance, the United States cannot afford to sit on the sidelines. Even as the United States competes with China, it must also seek opportunities for cooperation. Just as no country can fight climate change or pandemics alone, none can tackle alone the potential risks of AI-enabled biological and cyber threats. As the world's two largest AI powers, the United States and China need to work together to strengthen safety, limit proliferation, and draw red lines against dangerous uses of AI. It is a positive sign that President Joe Biden and General Secretary Xi Jinping have agreed to begin talks on AI risk and safety. But a narrow obsession with China risks squandering the United States' opportunity, and responsibility, to offer the world an attractive model of AI that leverages American advantages without sacrificing core democratic values such as privacy and civil rights. China understands that AI applications in areas such as energy and agriculture are attractive to the Global South. It is time for America to get in the game.
As Washington debates AI, competitors are offering their own answers on how to balance safety, innovation, and competition in this powerful technology. America needs its own answers, consistent with its democratic values and interests, and it needs them now. The Center for a New American Security recently launched the AI Governance Forum, which brings together experts from industry, academia, and civil society to address these challenges and recommend actionable steps to policymakers. Our goal is to bring these communities together to develop solutions that balance competing interests and provide the framework for a distinctly American model of AI governance. US leadership helped birth the AI era. Now it must help the world use this technology safely, while upholding our commitment to democracy, privacy, and human freedom.