As prominent figures from academia, government and business prepare to gather in South Korea on May 21-22 for the second AI Safety Summit, one tech giant's absence has raised eyebrows: Google.
The decision by the search giant's research arm to skip an event exploring the limits and commercial implications of artificial intelligence highlights the complex dynamics shaping the global debate around the technology. Some see the summit as an important forum for addressing AI risks and challenges, while others worry that overregulation could stifle innovation and cede ground to competitors such as China.
Advanced AI research group Google DeepMind has expressed support for the summit but did not confirm whether it would attend, Reuters reported. Google did not immediately respond to PYMNTS' request for comment.
Business impact
The upcoming summit comes at a pivotal moment, amid growing awareness of the need for responsible AI development and deployment. Its discussions and outcomes could have far-reaching implications for the future of commerce, as governments, businesses and researchers meet virtually to address these issues.
“AI supply chains are very complex and do not stay neatly within national borders,” Andrew Gamino-Cheong, co-founder of the AI software company Trustible, told PYMNTS. “One of the themes where we are already starting to see divisions between countries' policies is copyright. While the 'fair use' doctrine reigns supreme in the United States, it is being challenged, and other countries do not necessarily have such traditions. Copyright is not just an issue for model training: some countries are starting to disagree on whether AI-generated content can receive intellectual property protection.”
Safety initiatives
Gamino-Cheong noted that there has been a flurry of activity in the AI safety field since the last AI summit. The U.S. AI Safety Institute only recently received funding and new leadership.
Meanwhile, the U.S. has already announced partnerships with the U.K. and South Korea on AI safety labs. The United Nations, OECD, World Economic Forum and International Organization for Standardization are all busy issuing additional AI guidelines. The European Union's AI Act has also cleared its final political hurdle, and its implementation will shape the global AI safety debate through the "Brussels effect."
“The landscape of AI itself hasn't changed much over the past six months,” Gamino-Cheong said. “Most new models released, like Llama 3, Gemini and DBRX, have been incremental improvements over previous versions, and most are still catching up to GPT-4. Much of the AI focus has been on the infrastructure around AI: how it can securely access data through RAG patterns, how it can do so at low cost, and how malicious inputs and outputs can be intercepted. OpenAI's pending release of GPT-5 could change everything, but we'll just have to wait and see.”
International efforts to make AI safer are gaining momentum. The U.S. and the U.K. formed a partnership last month focused on AI safety. U.S. Commerce Secretary Gina Raimondo and British Technology Secretary Michelle Donelan strengthened that collaboration by signing an agreement to jointly develop testing for advanced AI models. Raimondo called AI a defining technology of our time and highlighted the importance of the partnership, which builds on commitments made at the last AI Safety Summit at Bletchley Park to help address national security and broader societal risks.
However, observers are keeping their expectations for the upcoming South Korean summit low.
“One reason is that governments are still busy implementing many of the things they announced at the last summit and haven't yet decided what to do next,” Gamino-Cheong said. “Many leaders are also still trying to learn about AI, and until they do, it is too early for deeper discussions about how to define bias, how to address the risks of open-access AI models, and who bears responsibility for AI systems.”
For all of our PYMNTS AI coverage, subscribe to the daily AI Newsletter.