As AI systems rapidly proliferate, public policy makers and industry leaders are seeking clearer guidance on how to manage the technology. A majority of IEEE members in the United States say that current regulatory approaches to managing artificial intelligence (AI) systems are inadequate. They also say that AI governance should be prioritized as a public policy issue, on par with matters such as health care, education, immigration, and the environment. These are the results of a survey conducted by IEEE for the IEEE-USA AI Policy Committee.
We chair the AI Policy Committee, and we recognize that IEEE members are an important and valuable resource for staying well informed about the technology. To guide its public policy advocacy in Washington, D.C., and to better understand views on the governance of AI systems in the United States, IEEE surveyed a random sample of 888 active members drawn from the roughly 9,000 IEEE-USA members who work on AI and neural networks.
The survey intentionally did not define the term AI. Instead, respondents were asked to apply their own interpretation of the technology when answering. The results showed that, even among IEEE members, there is no clear consensus on a definition of AI. Members differ significantly in how they think about AI systems, and this lack of convergence has implications for public policy.
Overall, members were asked their opinions on how to govern the use of algorithms in consequential decision-making, on data privacy, and on whether the U.S. government should increase its AI workforce and expertise.
The state of AI governance
IEEE-USA has long advocated for strong governance to control the impact of AI on society. It is clear that U.S. public policy makers are struggling to regulate the data that powers AI systems. Existing federal law protects certain types of health and financial data, but despite repeated attempts, Congress has yet to pass legislation implementing national data privacy standards. Data protection for Americans is fragmented, and compliance with complex federal and state data privacy laws can be costly for industry.
Many U.S. policymakers argue that AI governance cannot be achieved without a national data privacy law that provides standards and technical guardrails for data collection and use, especially in commercial information markets. This data is a critical resource for the third-party large language models used to train AI tools and generate content. As the U.S. government has acknowledged, the commercially available information marketplace allows any buyer to obtain large amounts of data about individuals and groups, including details otherwise protected by law. This raises serious privacy and civil liberties concerns.
Data privacy regulation proved to be an area where IEEE members had a strong and clear consensus.
Survey takeaways
The majority of respondents (approximately 70 percent) said that current regulatory approaches are inadequate. The individual responses are more revealing. To provide context, we have grouped the results into four areas of discussion: governance of AI as public policy; risk and responsibility; trust; and comparative perspectives.
Governance of AI as public policy
While there are differing opinions on aspects of AI governance, what stands out is the consensus on regulating AI in specific cases. More than 93 percent of respondents support protecting the privacy of personal data, and they support regulations to address AI-generated misinformation.
Approximately 84 percent support requiring risk assessments for medium- and high-risk AI products. Some 80 percent called for transparency and explainability requirements for AI systems, and 78 percent called for restrictions on autonomous weapons systems. More than 72 percent of members support policies that limit or govern the use of facial recognition in certain contexts, and almost 68 percent support policies that regulate the use of algorithms in consequential decisions.
There was strong agreement among respondents to prioritize AI governance as a public policy issue. Two-thirds said the technology should be given at least the same priority as other areas within the government's purview, such as health care, education, immigration and the environment.
Although 80 percent support the development and use of AI, and more than 85 percent say AI must be carefully managed, respondents are divided over how, and by whom, such management should be carried out. While just over half of respondents said the government should regulate AI, this data point should be considered alongside the clear majority support for government regulation in specific sectors or use case scenarios.
Only a minority of computer scientists and software engineers not focused on AI believed that private companies should self-regulate AI with minimal government oversight. By comparison, almost half of AI professionals prefer government oversight.
More than three-quarters of IEEE members support the idea that governing bodies of all types should do more to govern the impact of AI.
Risks and responsibilities
Many of the survey questions asked about perceptions of AI risk. Almost 83 percent of members said the public is not adequately informed about AI. More than half agreed that the benefits of AI outweigh its risks.
When it comes to responsibility and liability for AI systems, just over half said developers have the primary responsibility for ensuring systems are safe and effective. About one-third said the government should take responsibility.
Trusted organizations
Respondents ranked academic institutions, nonprofit organizations, and small and midsize technology companies as the entities most trusted to responsibly design, develop, and deploy AI. The three least trusted are big technology companies, international organizations, and governments.
The most trusted organizations to responsibly manage or govern AI are academic institutions and independent third parties. The least trusted organizations are big technology companies and international organizations.
Comparative perspectives
Members indicated a strong desire for AI to be regulated to reduce social and ethical risks, with 80 percent of non-AI science and engineering professionals and 72 percent of AI workers holding this view.
Almost 30 percent of AI professionals say regulation could stifle innovation, compared with about 19 percent of non-AI professionals. Majorities across all groups agree that it is important to start regulating AI now rather than waiting, with 70 percent of non-AI professionals and 62 percent of AI workers supporting immediate regulation.
The majority of respondents acknowledged the social and ethical risks of AI and emphasized the need for responsible innovation. More than half of AI professionals prefer non-binding regulatory tools such as standards. About half of non-AI experts support specific government rules.
Mixed governance approach
The survey found that the majority of U.S.-based IEEE members support AI development and strongly advocate for its careful management. The results will guide IEEE-USA as it works with Congress and the White House.
While respondents acknowledge the benefits of AI, they express concerns about social impacts such as inequality and misinformation. Trust in the actors responsible for creating and managing AI varies widely. Academic institutions are considered the most trusted organizations.
A notable minority opposes government involvement and prefers non-regulatory guidelines and standards, but this number should not be taken in isolation. While attitudes toward government regulation differ in the abstract, there is overwhelming consensus demanding prompt regulation in specific scenarios such as data privacy, the use of algorithms in consequential decision-making, facial recognition, and autonomous weapons systems.
Overall, respondents prefer a mixed governance approach that combines laws, regulations, and technical and industry standards.