Not a day goes by that I don't learn about new regulations regarding artificial intelligence (AI). This is not surprising, since AI is widely touted as the most powerful new technology of the 21st century. However, as we have written previously on this page, many new regulations are full of contradictions because there is no agreed-upon definition of AI and the landscape is constantly growing and changing.
Among these regulatory paradoxes is the tendency for activities to be regulated only when done using AI, even when (from the end user's perspective) the very same human activities are not regulated. For example, impersonation or parody of celebrities or politicians has been around for a long time and is often considered acceptable commentary. Yet we are moving into an environment where human impersonators and AI-generated impersonators who look and act exactly the same may be classified completely differently for regulatory purposes.
Current Federal Trade Commission (FTC) Chair Lina Khan is a brilliant lawyer who is trying to address these contradictions in the FTC's new AI regulations. At a recent Carnegie Endowment program, I asked Khan how she was coping with the contradiction of regulating certain AI activities when the very same human activities may not be regulated. She responded that the commission's focus was the opposite: "to ensure that the use of AI does not give us some sort of free pass."
Even at this early stage, we can see that LLGAI — large language generative AI, "large language" because computers use the Internet to instantly collect millions of data points, and "generative" because computers generate complex text, images, video, audio, and more in response to instructions — can deliver revolutionary benefits in medicine, science, and beyond. It can also cause harm through fraud and deception. Naturally, it is the risk of harm to people that attracts the attention of regulators. For example, it is not difficult to foresee a malicious yet flawless LLGAI impersonation (a "deepfake") of a government official deceiving society at large. With such frighteningly realistic scenarios, it is natural that governments would enact consumer LLGAI regulations before disaster strikes.
However, there is an issue here that is often overlooked. Because we are in the early stages of consumer LLGAI, any regulations enacted now will be based on the very little we know today. And with the technology evolving so rapidly, what makes perfect sense in 2024 may be meaningless, or even counterproductive, in 2029.
This is not the first time that a consumer-facing disruptive technology has brought with it a plethora of promise and danger. From 1915 to 1930, gas-powered automobiles in the United States grew from a rare curiosity numbering about 2 million to a common mode of transportation numbering about 23 million, reorganizing both work and leisure. It is no surprise that many governments enacted automobile regulations in the 1910s. While these regulations made sense at the time they were developed, the results show how difficult it is to regulate rapidly changing and innovative technologies in their early stages. In 1915, a major regulatory issue was the interaction of automobiles with horses and buggies. What could regulators in 1915 have said about parking lots, gas stations, and passing lanes?
You don't have to go back a century to understand how difficult it is to regulate innovative consumer technologies in their early stages. In the early to mid-1990s, the Internet was opened to the public. It quickly became clear that this technology was revolutionary, enabling global communication between individuals or groups using text, images, video, audio, and more. Fearing an epidemic of pornography or worse, regulators in the 1990s began establishing rules for the Internet based on their understanding of the technology at the time.
But nearly everyone's vision of the future of the Internet was wrong.
Oversimplifying, most of us thought that the Internet of 2024 would be similar to the Internet of 1995, only on a larger scale. As a result, very few of the Internet regulations developed in the 1990s fit the Internet as it exists today. And many of those 1990s regulations are plagued by unintended consequences and significant omissions.
But the choice need not be between doing nothing to regulate consumer LLGAI in the 2020s and imposing permanent AI regulations based on what little we know today. The most important lesson we can learn from efforts to regulate the auto industry in the 1910s and the Internet industry in the 1990s is that lawmakers and regulators must have enough humility and wisdom to recognize that today's regulations may make no sense in a few years, or may even backfire. The solution is for lawmakers and regulators to revisit AI rules and regulations every few years and to continually evolve them based on current conditions.
AI regulations need to be continually reviewed, updated, and rewritten. Sunsetting these regulations would ensure that rules developed in AI's early stages do not become permanent when they were never meant to be. A quick look at consumer LLGAI shows that nothing about it is stable.
Roger Cochetti has held senior leadership positions at COMSAT, IBM, VeriSign, and CompTIA. A former U.S. government official, he helped found many nonprofit organizations in the technology field and is the author of a textbook on the history of satellite communications.
Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.