Opinion
Harnessing the power of AI for good will require democratic societies to work together.
Written by Kay Firth-Butterfield
Since large language models began making headlines in the fall of 2022, millions of words have been written about the dangers of AI. Those of us who work with these technologies and their impacts have been talking about them since 2014. Now that the conversation has gone mainstream, there is a risk that the necessary debate will be drowned out, just as we are starting to see how AI can help address the world's most pressing challenges.
The solution is governance. The world needs public trust in AI to reap its benefits, and that cannot happen without regulation. As we look to the future, we must ensure that the technology we use today is developed responsibly, an approach known as responsible AI. According to a spring 2023 poll by the AI Policy Institute, more than 60% of Americans say they are concerned about the negative effects of AI. Yet without strong laws, it is difficult to prevent those effects, and people lack the tools to respond when harm occurs.
But just when we need public trust in AI the most, it is declining at an alarming rate in democracies. A recent Luminate survey found that 70% of voters in the UK and Germany who consider themselves knowledgeable about AI are concerned about its impact on elections. Similarly, an Axios/Morning Consult poll found that more than half of Americans believe AI will definitely or probably influence the outcome of the 2024 election, while more than a third said they expected AI to erode their own confidence in the results. More broadly, an American Psychological Association poll found that two in five U.S. workers worry about losing their jobs to AI, and a Gallup poll found that 79% of Americans do not trust companies to regulate their own use of AI. Unless these concerns are addressed, the economic and social benefits of the technology cannot be realized.
But a 2021 PwC analysis offers more hopeful results. Researchers surveyed more than 90 sets of AI ethical principles from groups around the world and found that all of them agreed on nine core ethical concepts, including accountability, data privacy, and human agency. Governments must now find a way to turn these concepts into reality by working together to build a coalition of nations willing to do the hard work of planning for an uncertain future.
If we continue to react to technological advances without thinking ahead, there is a very real risk that by 2050 we will find ourselves living in a world that no longer meets our needs as humans. The European Union has so far opted for a risk-mitigation approach that addresses current issues but does not answer the fundamental question of how humans want to interact with AI in the future. In the United States, each state has its own laws, which can slow innovation and make collaboration more difficult.
Future generations will certainly work alongside AI systems and robots. Yet AI regulation has been slow to develop and currently relies on existing laws to promote best practices. Rather than simply trying to reduce harm, we need to define what kind of AI we want in the world and how to build it. Only then will our children live in a human-centered society served by AI, rather than an AI-centered world merely occupied by humans.
Rather than trying to address every specific situation (which is impossible), democratic governments around the world, working with civil society, academia, and corporate stakeholders, should enact laws outlining specific requirements that must be met when developing, deploying, and using AI systems. Although many people who use AI believe they are using it for good, they often have little understanding of the negative consequences it can produce. It is therefore up to policymakers to codify priorities such as privacy and data security, requiring AI development teams to adopt proven best practices and to comply with all existing and new laws for building responsible AI systems from the start.
It is tempting to think that gaps in domestic governance can be filled by international regulations and treaties, but this approach carries its own risks. The UN Security Council is at a standstill even on harm reduction, let alone on topics that require forward thinking. For example, despite calls from the UN Secretary-General and smaller states, we have been waiting since 2013 for an agreement on the control of lethal autonomous weapons, with no result. If the Security Council cannot achieve such a policy, it is unlikely to agree on proactive AI policies that suit all stakeholders. The United Nations is expected to appoint members to a high-level panel on AI, a welcome development, but the creation of an advisory panel is unlikely to produce meaningful regulation as quickly as it is needed. The world does not have five years to figure out its next step.
However, international cooperation does not have to run through the United Nations. Promising proposals include emulating the model of the European Organization for Nuclear Research (CERN), an intergovernmental organization of 23 member states, or of Gavi, the Vaccine Alliance. Following that path would mean the Global North no longer holds unilateral control over AI technology; it would reduce inequality and make AI useful to many different cultures. Governments around the world could work together to envision a positive AI-powered future for their citizens and enact the regulations necessary to achieve it.
Governance is difficult. True global governance is even more difficult. Even this faster path will take time, so in the meantime companies that design, develop, and use AI will need to self-regulate, with the full support of their boards and executives. But ultimately, we must work together to build a world in which humanity benefits from AI rather than being forced to adapt to it. A comprehensive approach is essential, and we must act now.
Kay Firth-Butterfield is CEO of Good Tech Advisory, former head of artificial intelligence at the World Economic Forum, and the world's first chief AI ethics officer.