The first mortgage-backed securities trader I met, in the summer of 2007, wearing a rumpled T-shirt and a startled look on his face, said, "I'm quitting." I had recently worked at a housing clinic, so I knew something strange was going on with dodgy mortgages, but he was adamant that I had no idea of the scale of the mess. He was right.
Surely banking regulations would prevent the worst, I asked? He laughed.
We know what happened next. Financial markets collapsed, devastating individual lives and the global economy. We learned that those in power have every incentive to chase huge profits, knowing that their reckless actions will carry no consequences for them; others will pay the price when things go wrong.
Now, we are shamefully on the verge of repeating the same mistake with artificial intelligence. As in the run-up to 2008, we have allowed powerful systems to shape our daily lives with little understanding of how they work and little say in how they are used.
AI can now decide whether you get a mortgage, how long you spend in prison, or whether you are kicked out of public housing for minor code violations. These systems scan the transactions we make online, influence the products we buy, and mediate the information we consume.
But this is just the beginning. AI chatbots were not in wide use 18 months ago. Researchers can now generate long-form videos from text prompts. AI agents that operate without constant human oversight already exist (in social media feeds, for example), and the next frontier, mass proliferation, is just around the corner.
At my company, we work on agentic AI, so I see firsthand both its capabilities and how it could be misused to exacerbate the harms we are already seeing from less powerful AI systems.
A just society prepares for this. It does not allow those in power to take risks at our expense or exploit gaps in the law, as banks and financiers did when they raked in profits while undermining financial markets.
But we lag shamefully behind on meaningful accountability. Elon Musk's Tesla can sell cars with a feature called "full self-driving" while avoiding liability when that feature causes an accident. Imagine if an airline or aircraft manufacturer could deny responsibility for a plane crash. This failure of accountability is also why courts can still use AI to help determine prison sentences, even though such systems have been shown to be unreliable, and why law enforcement can deploy AI that predicts crime with documented racial bias, all without meaningful oversight.
Most proposed AI laws ignore oversight and accountability and instead seek to make AI systems themselves safe. But this doesn't make sense: AI cannot be made inherently safe, any more than a power drill, a car, or a computer can. We must instead use laws and regulations to address near-term harms and minimize long-term risks.
This will require making better use of existing institutions to regulate AI. I see three main priorities.
First, prohibit harmful uses. Just as police cannot enter a home without a warrant, governments and agencies should not surveil citizens without a clear and justifiable reason.
Second, guarantee the right to an explanation. The Supreme Court's 1970 decision in Goldberg v. Kelly held that the government cannot arbitrarily withhold benefits without an explanation and a right to appeal. As AI-driven decision-making becomes more prevalent, we need to secure similar rights for decisions that shape the most important areas of our lives.
Third, strengthen the principle of liability. The legal principle that those who cause harm must redress it has been with us for centuries, yet we have been strangely reluctant to apply it to AI companies. This is wrong.
One simple but powerful idea is to hold developers of AI systems above a certain capability threshold strictly liable for the misuse of their products, just as we hold manufacturers liable for injuries caused by product defects. A safe harbor allowing companies to register ambiguous uses, subject to government oversight and guidelines, could ease that burden. Combined with an outright ban on malicious applications, this would protect us from most potential problems by shifting the costs of AI harms onto the people and companies that cause them.
As builders of powerful AI systems, we reject the argument that laws governing AI will hold us back. The opposite is true: good rules level the playing field. They lift from individual companies the burden of fighting alone for the common good, and instead let us focus on building what people actually find valuable in their lives, within clear terms set by democratic processes.
We pursue the wildest dreams of AI in order to create a world worth celebrating. Better laws will help ensure that this future includes everyone, not just the handful of billionaires who control AI today.
Matt Boulos is head of policy and safety at an AI research company and a member of NIST's Artificial Intelligence Safety Institute Consortium.
Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.