When the EU parliament approved the Artificial Intelligence Act last month, many in the US felt that Europe was once again setting the de facto policy governing a sensitive new area for global technology companies. Matt Calkins, founder and CEO of Appian, said the move means "the U.S. is literally behind" on AI policy, but there are ways forward. He told me about some of them.
A shortened version of this interview was published in Thursday's Forbes CIO Newsletter. This transcript has been edited for brevity, clarity, and continuity.
Why do you think the United States lags behind the international community when it comes to regulating AI?
Calkins: When we make comparisons like this, we are talking about the United States and Europe. When it comes to technology regulation, Europe is proactive. Just look at the regulations they have recently put in place against non-European tech giants. They have also chosen to be more proactive in how they regulate AI.
It's not that the United States chose a lighter-touch stance on laws and regulation, which I would have supported. I think the U.S. is literally behind when it comes to AI regulation. And the resolution the United States put before the United Nations does nothing to change the fact that, as a nation, we lag in regulating AI. It has no binding force. It only points a direction; it's a statement of intent. There is a lot of vague terminology, universal goals, and little actual regulation. A record of aspirations is what I call it.
The alarming level of inaction, and even lack of ambition, in the US on AI regulation is compounded by the fact that America's highly influential tech giants are steering the conversation toward the wrong concerns. I think I can explain why: those aren't the real problem, and we've built a mindset that protects AI's weaknesses.
If you were to design a strategy for the United States, what do you think is needed for AI regulation?
Calkins: The most important realization we can come to as a country about AI is that artificial intelligence is a function of data. Anything you want to get out of artificial intelligence first requires putting data in. Algorithms are literally useless without data to inform them. Your ability to deliver powerful, meaningful, valuable output depends entirely on the quantity and quality of that information.
The first thing we need to do is recognize the primacy of information. Because data is the substance of AI, data deserves to be protected, recognized as valuable, and given legitimate rights. The biggest regulatory challenge with AI today, around the world, is securing data and respecting the people behind it: the person who creates it and the person who owns it. Don't let artificial intelligence simply take it.
Nothing is happening on this front. There hasn't been anything since the White House [executive order] in October, and the idea isn't even in there. The closest it comes to this problem is saying that if an AI algorithm is trained on your data, the data needs to be anonymized. That's it. They don't respect it, they don't pay for it, they just make it anonymous.
When we talk about regulating AI, much of the focus has traditionally been on privacy, national security, and preventing disinformation, or at least letting people know they're looking at something created by AI. How important is it that those things be regulated?
Calkins: Secondary. It comes up often because it matters greatly to politicians. Misinformation gets attention partly because deciding which speech is illegal has become a national sport, so it's an interesting topic for politicians. Politicians also know that their reputations depend on their exposure, that voters know them from images and video clips, and that their vulnerability to deepfakes is obvious. But to be honest about this, deepfakes can be created without AI. Photos and images can show Donald Trump being violently arrested in New York or the Pope wearing a Balenciaga jacket, all without AI.
What about misinformation? If you misrepresent someone's voice, it should be against the law. If you tamper with someone's photo, it should be against the law. But remember that when we make laws, we make them for the benefit of law-abiding people. Russian troll farms do not abide by our laws, and neither do North Korean hackers. So a law saying that if you publish an image created by AI it has to be digitally signed as AI actually solves about five percent of the problem.
What about AI surveillance, which is prohibited by EU law?
Calkins: I believe these are invasions of privacy, though I've acknowledged before that there are tradeoffs to weigh here. On the other hand, I think the organizations most likely to do such things would not comply with the law, so laws alone cannot protect us. In some cases, you have to prevent the data from existing at all. If you don't want people walking around your country to be monitored on a daily basis, you can't simply pass a law banning AI from using the collected data; you have to turn off the cameras. With so many cameras installed in Western countries, I think we need to ask ourselves: is that the society we want? Do we want a society where vast amounts of data are available about people, how they are feeling, where they have been on any given day? That is a privacy-versus-safety question. There's no doubt it gives governments a lot of power, and I think reasonable people will differ on whether it's worth it.
There are some things that I think we as a civilization should simply avoid having data on at all, like intentionally making a virus more lethal to humans, or studying which DNA sequences produce the most lethality. No law will be strong enough to safeguard that data and ensure that no bad actor in the world ever uses it to create a deadly virus. I think we all know how powerful data is. You have to look in the mirror and say there is some data you simply don't need. If you're worried about a world where AI designs viruses, or AI designs nanorobots that hurt people, there are areas where you have to say: stop doing that research.
A number of policies in motion could change direction significantly depending on the outcome of the November election. Do you think AI policy is one of them?
Calkins: I think speech is becoming a political football; both sides have speech they oppose and feel empowered to use political power to suppress. In my opinion, that is the wrong direction. We need to protect the right to express ourselves and the rights of those who create the content our society benefits from. Fear leads us to believe that certain things need to be said less. I think both sides have less regard for the other side's right to speak than I've ever seen in my lifetime in this country.
What I would recommend is that we not approach AI in terms of what you can't say, because honestly, that sounds like China's approach to AI. AI should be a tool. Be responsible for what you do with it. AI is not a separate actor. It is controlled by individuals, and the organizations deploying and using AI are responsible for its output. I don't see either political party supporting this.
What do you think it would take to move regulation in the right direction?
Calkins: I know there are some in Congress who regret being slow to move on social media. The U.S. has been far behind on AI, but I don't think that will necessarily continue. Maybe it changes if Europe moves, or if a problem lands on us, or if we get out of the current political impasse. Perhaps when a greater commercial impact is felt, or when someone violates property rights in a very egregious way (which is bound to happen, since the door is wide open), legislators will be spooked into action. Events may occur in the coming year that shift the United States from a laggard to a pioneer.
One party may end up in control, and fewer checks and balances can mean faster action. Or there may be an AI scandal, by which I mean something offensive done with AI. It could be an impersonation of a highly influential person. It could be a massive intellectual property rip-off that people are obviously uncomfortable with. Or it could be something exploitative done to a public figure, beyond anything that's ever been done, that angers people. I think AI is fully capable of offending us in ways that make us angry. So there will be some shocks soon, and I think they will create public momentum to set boundaries around the technology.
You lead a company that uses AI heavily, but as we've discussed, there's little regulation. How do you run the company in line with the kind of regulation you'd like to see?
Calkins: I talk to a lot of CIOs, and they almost universally don't want to share their data in order to benefit from AI. Appian has been focused on what I call private AI. Our quest was to deliver the magic of large language models at scale without leaking information to the outside world. There is no training at all; it uses AI techniques that rely on a well-connected database the company owns. With a sufficient database, you can get great results from AI simply by finding the relevant information and sending it along with your question to a large language model.
You could join the crowd and upload everything in bulk, but customers don't want that. They want something surgical. They want to harness the magic of modern AI without sharing their information, and they can do that. You just have to be very good with databases.
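To make that pattern concrete, here is a minimal sketch of the retrieval-style approach Calkins describes: no model training and no bulk upload, just selecting the relevant records from a company-controlled database and packing them into the prompt. The `Record` type, the toy scoring function, and the `query_llm` placeholder are illustrative assumptions, not Appian's actual implementation.

```python
# Minimal sketch of the "private AI" pattern described above: find the
# relevant records in a database the company controls and send only those
# snippets, alongside the question, to a large language model. No training.

from dataclasses import dataclass


@dataclass
class Record:
    id: str
    text: str


def relevance(question: str, record: Record) -> int:
    """Toy relevance score: number of words the question and record share."""
    q_words = set(question.lower().split())
    r_words = set(record.text.lower().split())
    return len(q_words & r_words)


def build_prompt(question: str, records: list[Record], top_k: int = 3) -> str:
    """Pick the most relevant records and pack them into the prompt as context."""
    ranked = sorted(records, key=lambda r: relevance(question, r), reverse=True)
    context = "\n".join(f"- ({r.id}) {r.text}" for r in ranked[:top_k])
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )


def query_llm(prompt: str) -> str:
    # Placeholder: call whichever LLM endpoint the organization has approved.
    # Only the selected snippets leave the database; nothing trains the model.
    raise NotImplementedError


if __name__ == "__main__":
    records = [
        Record("po-1042", "Purchase order 1042 was approved on March 4 for $18,000."),
        Record("po-1043", "Purchase order 1043 is pending legal review."),
        Record("hr-77", "Employee onboarding checklist updated in February."),
    ]
    print(build_prompt("What is the status of purchase order 1043?", records))
```

In practice the keyword-overlap scoring above would be replaced by whatever search the company's database already supports; the point of the sketch is only that the model sees a handful of selected records per question, rather than the organization's data being shared or used for training.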