When it comes to deploying a new technology, there are no guarantees. While developers and policymakers do their best to minimize risk, innovation always requires a leap of faith. The policy debate around artificial intelligence seems to be a guessing game on all sides. Today, I talk with Bronwyn Howell about how we should be thinking about regulating AI, based on what we know from recent history, and acknowledging AI’s great unpredictability.
Howell is a nonresident senior fellow here at AEI. She is also a faculty member of the Wellington School of Business and Government at Victoria University of Wellington in New Zealand and a senior research fellow at the Public Utilities Research Center at the University of Florida. Her research centers on regulation, development, and implementation of new technologies, as well as technology use in the health sector.
What follows is a lightly edited transcript of our conversation. You can download the episode here, and don’t forget to subscribe to my podcast on iTunes or Stitcher. Tell your friends, leave a review.
Pethokoukis: Bronwyn, welcome to the podcast.
Howell: Thank you very much for having me.
Since I think we might be talking about this a bit, let’s start off with a definition or description of something called “the precautionary principle.” What is that? Because it’s something we hear a little bit about when people talk about regulation.
Right. Well, the precautionary principle applies where, in a world of scientific exploration or discovery, we are inventing or developing new things whose consequences we don’t quite know. Obviously there are gains to society from having the scientific innovation, but there are also risks associated with it. Things could go wrong, there could be harm that comes out of it, but no one knows in advance quite what those harms are going to be. So the precautionary principle works on the basis that we will only let these things happen if we can guarantee there’ll be no harm; or, if we let it out, we let it out in only very prescribed ways where we’ve already tested to see what harms might come, or where we can actually be sure that the harms that come are below a prescribed level. So that means that the people who could possibly be harmed have a degree of safety in what is going to happen when the technology is released into society.
That description seems to me to encapsulate quite accurately the approach we’ve seen so far to AI regulation since the rollout of ChatGPT back in November 2022. This seems to be exactly what we’re doing: We’re applying the precautionary principle as sort of the operating rubric under which we’re doing, or thinking about, regulation. I hope that’s not an overstatement.
Absolutely. Now the precautionary principle doesn’t mean that you don’t release it, but it means when you do release it, you release it with guardrails around it so that if it does do harm, the harm is only going to be at a minimal level. But, of course, that presumes that we know what the harms are that we are looking for. Now with, say, a new medicine, it’s sort of obvious who’s going to be harmed, because it’s the people who take it, and because we have a reasonable amount of scientific knowledge that’s gone into the technology of producing the medicine, we’ve got an idea in advance of what might be an absolutely unsafe level, what might be a marginal level, and what might be a safe level, because we’ve done a lot of testing already. So when we bring it out under those circumstances, we can be reasonably certain that, on the whole, for most people it won’t cause a great deal of harm.
But that’s not necessarily the case with these AI technologies because we don’t actually know what some of them can do. We don’t know, because of their emergent capabilities, what harms we are actually trying to protect against. And one of the risks we’ve got when we make these decisions in a world of complete uncertainty is that we stop the things that would’ve been beneficial, or we put precautionary stops in against things that were never going to be harmful in the first place, because we don’t, in the case of AI, necessarily have a very good handle yet on the whole range of things these technologies could do and when the harm might come.
It seems to me that what’s guiding these early ideas for guardrails or limitations, about what kind of AI models can be released or who might release them, is not a scientific approach or testing but what technologists are saying. Some people who work at AI companies are speculating on what might happen or what these technologies might do, and it seems to be a lot of guesstimates. How are we supposed to figure this out? This is a fast-evolving technology that’s only been out. . . ChatGPT has only been out in the market since, again, November 2022. It just seems to me this is a very difficult thing, at this point, to come up with guardrails for. So I’m wondering whether the default might just end up being a very strict, restrictive approach, and that’s what I’m concerned about.
Well, one of the problems is that if we are too strict or we put in constraints too early, we actually stop the good developments from happening, and this is the problem.
It seems that people are more worried about the bad than the good.
Uncertainty is a two-sided thing. We only get gains and improvements out of technological innovation because of the upside risks of releasing it and seeing those exponential growth possibilities that come out of the technology. But if we are regulating on the basis of things we’ve seen in the past, and of technologies we haven’t really had a good handle on ourselves, in a different environment, then the guardrails we put in may or may not be effective.
So, for example, with ChatGPT, or the large language models generally: Some of these are being developed in an open source environment. While the developers may have a handle on what they’ve done with it, and they’ve put it through tests and can come up with a system card saying what they’ve tested for and what the limits are, they don’t necessarily control who uses it down the track. If you’ve got a strict contractual relationship, say, when developing a product or a medicine, you’ve got good contractual links with the people who are going to do things with it, and you can constrain and control the environment, so you can engineer an environment or a set of circumstances in which you can bring it to market. And that’s largely what we’ve learned from looking at the world of product development and using those principles for thinking about the safety of products when they come to market.
But these AI models are different because it’s not just a single provider controlling the entire production chain and supply chain. There’s upstream software that could have been developed in an open source world that’s brought in. There are downstream developers who could be using this, and it’s quite unclear what some of these things might be, what they could be used for, and what some of the possibilities are, good and bad. So the question is whether we can even construct a way of thinking about this world using an engineering-based scientific design principle for how we go about this regulation, or whether we are actually functioning in a different world; perhaps instead of a product safety model, maybe we need to think about something more like, say, Elinor Ostrom’s common-pool resource governance models. . .
What kind of resources?
Common-pool resource governance models for thinking about how we develop this. Because if we’re really uncertain, if we don’t know what the various people in society may be able to do to change things, then this is a very different environment from one where the developer controls the technology and can then release it in a safe way.
If we are putting in guardrails and regulatory limits, and constraints, and yet we don’t have a good feeling for what the actual problems could be, might there be a situation where we feel like we’ve done it, we feel like we’ve got this thing under control, we’ve regulated it, we all go home, we congratulate ourselves on the brand new American AI Act, but we really haven’t done it at all?
Not at all. Absolutely not at all. In fact, this is where the difference between risk and risk management—
Like a false sense of security.
—takes us only so far. But what these regulations claim to do is to create a safe, trustworthy world, and that’s not the same thing. Managing risk and creating safety don’t necessarily come together when we have these other uncertainties around us. I would argue, though, that perhaps what the great flurry of activity at the moment does is create a safety valve for the politicians who are being called on to do something.
One of the great cognitive biases we have is that when we are faced with something uncertain and we don’t quite know how to respond to it, there’s usually someone who wants to do something. Even though you don’t know what to do, the requirement to do something becomes so strong that we see calls for action and for regulation from communities. We also see from politicians the desire to be seen to be doing something so that they’re not interpreted as exposing society to the risks that are there. So there’s an imperative to rush in and do something before stepping back and thinking about just exactly what it is we are looking at and whether, in fact, these tools are the appropriate ones to use.
Given that cognitive bias, and maybe the incentives in political systems to be seen as “doing something,” it seems to me that it was maybe a quirk of political history that we let the internet be as lightly regulated as it was in the 1990s. That just seems like. . . was that an outlier? We have a new technology and certainly there are all kinds of concerns, even back then, and we decided that we’re just going to kind of let it rip and see what happens.
I would suggest that possibly what happened was that the internet was developed in a very constrained, engineering-specific environment where all of the proving was actually done by the security services. It was actually a national security project that developed the internet. So there wasn’t the same distrust at the very beginning that this thing might be out of control, because it was developed in a laboratory environment.
The difference is that this new technology’s not been developed in a laboratory environment in the same way. So I guess that may have led also to a greater call for precaution. There was possibly a greater degree of trust with the internet when it was released because the military would’ve looked after us and made sure that the internet was safe before they let those academics in the real world play with it.
It seems as if the discussion of this regulation is being held in some vacuum, as if we live in a rather lawless society and now we need to create laws around this new technology. But I was just putting together a list. There are a lot of laws. We have consumer protection laws, various agencies can recall products, and of course there are common law remedies for product liability, negligence, and design defects, as well as property law. . . We have a lot of laws here in the United States and in other advanced countries. Don’t those laws inform how we think about regulation? We shouldn’t automatically assume that, even though this is a new technology, and it seems amazing and fantastic, all the previous lawmaking we’ve done is somehow wildly inadequate to the task.
Well, it’s not a vacuum, and those things absolutely do apply, and I think there’s a lot to learn from them. In a lot of other areas, rather than having ex ante constraints on what happens, we’ve actually got a bunch of laws available ex post to deal with disasters when they happen. And if we think about the state we’re in at the moment, we don’t actually have a great amount of AI law anyway, because the European AI Act isn’t going to come in for two years after Brussels signs the paperwork. As for the US provisions, we have the executive order, but most of that is in relation to protocols, standards, and processes that are yet to be thought about. Yet we’ve been using these things for seven, eight years. So you’d think that if we were going to have disasters, they would have happened already—
Before this was on the market, this technology existed. There were GPT-3 and GPT-2. . .
Absolutely! Today, mostly, we are relying on what the developers of the technologies have put in place themselves: their own conventions and protocols, industry self-governance, and all of these many different tools and techniques. The academic community has also been very active in developing a bunch of tools and benchmarks for measuring the performance, the capabilities, and the transgressions of these systems. There’s an almighty amount of stuff out there simply because, well, we want to know what the stuff is doing. So the developments have happened. The developers want to know. It’s not in their interest to develop something and let it out there if it’s going to harm someone, so they’ve had incentives to create ways of looking at what they’re doing and how it might or might not be harmful before they release things. So there’s a whole bunch of things out there that don’t actually require a specific, government-mandated piece of ex ante regulation to make the world safe.
So we’re here in May of 2024. You said the EU’s AI Act, even though there have been headlines making it seem like it’s taken effect, has not yet taken effect.
No, no, it hasn’t even been signed in by the parliament yet.
And then, here in the United States, they may still be holding these listening sessions on Capitol Hill between industry experts and the Senate, and there’s no law, and it’s an election year. . . So if it’s a couple of years before this thing is actually signed and implemented in the EU, and in the US we don’t have a law, maybe we’re talking about something coming in 2026. Meanwhile, the industry is putting billions into these models to advance them. How can the regulators even hope to keep up with what this technology’s going to look like in two years?
Well, the industry is always going to be ahead of the regulators, and the industry already has a lot of incentives to do what it can to ensure that what it’s doing is safe, because it’s not in its interest to release something that’s harmful. So those constraints are happening. The people at Stanford, whose name escapes me at the moment, released a report earlier this month about the—
That’s like the “Human Center for Human. . .”
The Institute for Human-Centered Artificial Intelligence at Stanford. They’ve been tracking the number of incidents of these AIs going out and causing harm. In 2023, there were 123 instances. Now, consider the number of people out there using ChatGPT alone, let alone all of the others, the private technologies and all of these varieties; every big tech company has got five or six versions of its own that are out there being used on a daily basis. Last year, 123 problems.
Doesn’t seem like a massive number.
As a proportion of the number of uses of these technologies, that’s a very, very small, infinitesimally small proportion of bad things happening from the use of these things in the environments in which they’re being used at the moment.
It’s not hard to look around and see examples of well-intentioned regulation that may have slowed, delayed, or stifled tech progress in ways people had not anticipated. In the US, a nuclear reactor just went online, and there are currently no more ready to go online, which some would blame on the regulatory climate here. Even though a lot of people are concerned about climate change, and it would be nice if we had a lot of reactors ready to go online, we don’t, and you can make an argument that regulation played a role.
They passed this data privacy law in the UK and in the EU, the GDPR [General Data Protection Regulation], and it seems like there have been some unintended consequences, such as the effect on the amount of venture capital going into the smaller tech firms, that perhaps some people, at least, didn’t foresee.
We could go on and on. In the United States, it’s very expensive to live in high-productivity cities because of housing laws that seem to make it very difficult to build, perhaps not the intention of the people who passed those laws. Yet I sense in the people who are saying, “Let’s do something, we need guardrails. The tech experts are telling me we need to have guardrails,” an amazing absence of humility, or of economic or historical perspective, about rushing to legislate and regulate something that seems to be moving very fast. It’s as if all those other examples don’t matter: “This time will absolutely be different. We are very smart, and this time we’ll know just the right guardrails or tweaks to limit this technology.” Maybe this isn’t even a real question I’m asking you. I’m just not sure where this lack of humility comes from.
Well, I do think the need for humility is there, and also a realization that, whether we like it or not, we cannot make the world safe. As humans, we can only rely on looking backward at what we did in the past. We are not good at forecasting what’s going to come in the future, and because of that, it’s difficult to know quite what guardrails we ought to build. If we build them, we build them for what we know worked in the past, but that’s not necessarily going to work for a new technology that we haven’t fully experienced.
In my recent AEI blog, I used the example of the man with the flag walking in front of the car. The flag was required because it was felt that the speed of the car needed to be constrained, since speed and runaway horses and carriages were known to harm people; but in fact, the internal combustion engine was inherently more controllable than the horse. So they were managing a risk that related to a past generation of technology, not the technology that they were actually looking at. And at the same time, they encouraged pedestrians to condition their behavior in the face of the new technology on the man carrying the flag, not on the capabilities of the automobile, so that when the man with the flag went away, they weren’t prepared for the fact that automobiles could go much faster than horses and carriages.
That’s an interesting point, that these technologies aren’t going away. We need to learn how to deal with them as change happens, not try to create this sort of artificial safety system so we don’t really have to think and we don’t have to adapt. We need to adapt.
Or push all of the risk and the liability for things going wrong onto only one party: in this case, onto the developers. This is a technology that’s going to generate benefits for the whole of humanity. Maybe we have to think about what share of that risk society should bear, as opposed to what share of that risk the developers should bear. And how do we learn about what those risks are going to be if we don’t actually let it out there and see how, in fact, it’s going to evolve?
How do you think about these open source models? People who are very interested in regulating these technologies think they are fine in some cases, but what they seem to be concerned about are models that are pushing the frontier of the technology, the most advanced things; those, they say, are the models we really need to worry about. Maybe they need to be licensed, they need to have testing approved by the government, and for heaven’s sakes, we can’t just have anybody making these and releasing them to the public. I think that’s the concern. How do you look at the open source issue, especially as it applies to what they would call “frontier,” “leading edge,” “bleeding edge” models?
Well, most of the innovation happens at the bleeding edge, and it happens largely with those people in the skunk works, working in their basements on these open source models because they’re free, or the only thing required is time. So a lot of the innovations come from the college kids who are out there playing with these things in the basement and developing stuff. After all, that’s where we got virtually all of the technologies that sit there in the social media and big tech space we’ve got today. They originated in places like that because people were playing with those sorts of things in an unregulated environment, creating the new technologies.
The Google guys weren’t working at IBM, right? They were just two students at Stanford. But that seems to be a real dividing line: people who are really bound by the precautionary principle, which is really better safe than sorry with these things, versus people who say, “We need them out there in the open. They compete against each other. We don’t want just one group of people having access to them.” They want a democratization of this technology. If I know your feelings about open source models on the leading edge, I could probably predict a lot of your other thoughts about regulation.
The world is full of incredibly heterogeneous people with very, very diverse propensities to take on risk or not. If what we do is remove the ability of the risk-seeking people at the fringes to take those risks to develop new things, and constrain all of our activity within the risk-averse walled garden that we’ve created through regulation, we are not going to get the massive pace of innovation that we would’ve got otherwise, because it’s not going to happen in a risk-averse world that’s been conditioned to satisfy the requirements of the most fearful in society. We need to have the models developed in a world that has that broad propensity for risk-seeking in it. And that means we need to accept that sometimes some of these things will be tested in a perhaps slightly less-controlled world. In that sense, that’s where some of these developments come from. The innovation comes where there’s the propensity to take a risk in the first place.
The other thing I wonder is whether, in fact, the constraining of this activity within a small number of very big players is a financial issue, because the amount of overhead that comes with complying with all of these new regulations is so huge that the only parties able to play in the park will be the ones with very big bankrolls. The EU regulations, and the comments on them, actually acknowledge that the size of the bankroll of the person or entity doing the development is an important part of this, and that’s tied up with their assignment of risk. That’s where the size of the model matters for what’s going to apply in the high-risk, most regulated area. And I wonder whether this is, in fact, a de facto insurance fund. Because if you’ve got this constrained only to the people with the deepest pockets, then if something should go wrong, you’ve got your insurance fund built in there, with the liability sitting with those regulated firms that are able to pay to put right any damage that occurs. But that pushes all of the risk and the financing of it into a very small area, and that’s the tradeoff we get.
One argument that’s been made to me concerns the people who say, “Let a thousand flowers bloom. If something goes wrong, we’ll figure it out on the fly. We’ll figure out what the actual harms are and then we’ll regulate. Let’s not regulate based on science-fiction kinds of harms.” The counterargument I hear is that if we do that, if we take the let-it-rip approach, and anything goes wrong, and something will go wrong, you’re liable to get a very severe regulatory backlash. Whereas if we start now with a sort of consensus, with the technologists and the policymakers all agreeing that these are the guardrails, and if we can avoid those big problems, then people will feel better about the technology, and over the long run you’ll get more innovation, because you won’t have a backlash if something goes wrong. That’s an argument that’s been made to me: Actually, the more pro-innovation approach is, “Let’s avoid the big disaster through regulation.” But of course that assumes you know what needs to be regulated against.
Absolutely. It does mean that you know what to regulate against, and the problem is that in many cases we don’t. When it comes to deep uncertainty, where we have systems within systems within systems interacting, when things go wrong we could not necessarily, with the best will in the world, have known or anticipated from all the stuff we knew in the past what was going to happen. And then we get the problem that the regulations, as they’re framed at the moment, are not actually framed on the risks of the systems themselves. Take the EU law in particular, which does assign liability. First, the liability is related mostly to a set of pre-selected big technologies. Second, it’s predicated on risks that have been identified in systems in the past: for example, bias and political contamination, the creation of deepfakes, all of these things we’ve encountered with the previous generation of technology are the things that are being watched. And the risks are being specified not in terms of the software; in the EU case they’re set out in an annex of the law as a whole bunch of particular situations where it’s not so much the risk of the AI as the sector of society they don’t want harmed that has caused them to regulate particular technologies. So this is perhaps not based on the actual risk but on sectors. It’s the users they want to put behind the guardrails rather than the technologies themselves. And that’s an interesting point, because if it’s people we wish to protect, maybe the issue should be that certain people shouldn’t be allowed to use AIs, rather than that only certain people should be allowed to produce them.
It also seems to me that we’re affording a lot of deference to the views of people who work at the technology companies: Well, this person runs this AI company and he says that, yeah, we need to have these guardrails. But they don’t know any more than anybody else. They’re creating something that doesn’t yet exist, so they’re speculating about what the harms might be from a thing that has never, ever existed in human history. They’re in no better position, in a way, than anybody else.
No. And in fact, they may be biased in the wrong way. Because they are so narrowly focused on the engineering capabilities of their systems, they may not actually have the expertise in how society might use and adapt to this technology. So they may not be correctly predicting how it’ll be used; they can only see their frame of it, because they’ve become so specialized that they’ve lost the context in which it’s going to be developed.
And that’s where some of those people in the skunk works, in the basements doing their developments, come in. They’re the ones who have engagement with the real world, because they’re the ones with the real problem they’re seeking to solve by applying, say, the large language models to a particular environment. So are we actually looking at the wrong experts for our guidance? Because how it plays out in society is what’s really going to matter. And that’s where the real, deep uncertainty is: in the intersection of the systems, rather than in the engineering constraints of the computer systems where some of these capabilities have been developed.
I was just reading an essay by a law professor from Santa Clara University called “[Generative] AI is Doomed.” The thrust of it is that policymakers have taken their lessons not from the internet in the ’90s but from social media platforms, where there has also been a sort of flurry of “we need to regulate.” So you have policymakers whose most recent experience is problems with social media, policymakers who have themselves grown up in this sort of tech-phobic atmosphere, where what they know about AI they’ve learned from dystopian science fiction films. You have these ongoing culture wars where every new technology, especially if it produces content, produces content some people don’t like: There’s going to be a picture or a piece of text that someone views as bad. In this kind of environment, there is almost surely going to be a regulatory assault, some of which will land on this technology. “AI is Doomed” is a little bit of an over-reactive title, but it seems like the technology is going to be heavily regulated and we’re not going to get those benefits, or at least not as fast as we would otherwise. So I’m concerned. So I’m asking you to be a ray of light, or maybe a cloud of extra doom, I don’t know.
Well, I think one of the things we should actually hope for is a little bit of regulatory restraint. I mean, I teach decision making under uncertainty, and in most situations the recommendation is, rather than rushing to regulate, to wait and get a little bit more information. We’ve got time, we’ve got the ability to look at how these things play out. We can have the tools available and ready to go when a disaster happens, but if the disaster doesn’t happen, then we didn’t need them anyway.
The amount of reporting that I see required by both the EU and even the suggested NIST [National Institute of Standards and Technology] documentation is huge. It’s a massive overhead that’s required of these firms if they’re going to comply with the regulations. That’s an awful lot of wasted overhead that is not contributing to anything productive. If we are worried about our productivity numbers at the moment, AI productivity is automatically starting under a vast handicap because of the huge overhead required to keep track of everything that moves, not because something is likely, or even probable, to be a problem, but because someone, somewhere, in one of the many different panic scenarios out there, has thought this is something we might have to keep an eye on. So that is a huge overhead that is very costly and very inefficient.
As I said, these firms are not in the business of going out to create harm. They’re learning too. And the more we constrain with regulation what they can do with their developments, the more we constrain the whole of scientific learning, because scientific activity will be directed towards things that attend to the regulatory agenda, not to things on the scientific agenda. So this goes far broader than just the technologies themselves; it’s the whole scientific enterprise. But if we think about it, this is the way science has always been. Engineering looks backward because it works in a very constrained, model-driven environment where we artificially constrain the world and then develop our model within it. Our model is then a toy, but we know exactly all of its dimensions. That’s not the real world.
The real world is the world that science goes out to explore. The real promise of science is that we know what we know, but we know there’s an awful lot out there that we don’t know. And the scientific journey is learning about the stuff we don’t know and all of the possibilities we have out there with it. If we constrain the development, we constrain our ability to know about the world we live in as well. One of the great things we’ve found with the large language models is that we’ve learned so much in the science around how humans process language, as well as how machines can process it. That has vastly improved our general understanding of neural processing and the neural models we are learning about as we make that journey with the machines. If we constrain the speed of that, we constrain our learning in the scientific world about a whole bunch of other things.
We do need to be humble. AC Grayling, I think, has written an amazing book that tells us this is the most educated, the most knowledgeable society that we’ve ever had, but the more we know, the more we now realize we don’t know. And that’s the world of scientific endeavor going forward, and that’s what this new technology gives us, another lens into exploring that world. And, for us, if we’re going to live the fulfillment of the Enlightenment’s promise of what science can deliver, we’ve got to give this stuff a chance.