EU lawmakers are poised to approve sweeping rules on Wednesday to govern artificial intelligence, including powerful systems like OpenAI's ChatGPT, the last major hurdle toward formal adoption.
European Union officials say the rules, first proposed in 2021, will protect citizens from possible risks while encouraging innovation on the continent.
Since OpenAI's Microsoft-backed ChatGPT debuted in late 2022 and set off a global AI race, Brussels has rushed to pass the new law.
Excitement over generative AI exploded when ChatGPT showed it could produce eloquent texts, including poems and essays, and pass medical exams within seconds.
Further examples of generative AI models include DALL-E and Midjourney, which generate images, but there are also models that generate audio from simple inputs in everyday language.
Dragos Tudorache, who along with fellow lawmaker Brando Benifei pushed the document through parliament, said: “The EU has delivered its results. No ifs, no buts, no more.”
“Europe is now the trusted global standard-setter for AI,” said Thierry Breton, EU Commissioner for the Internal Market.
The EU's 27 countries are expected to back the document in April, before the law is published in the EU's Official Journal in May or June.
Rules covering AI models like ChatGPT are expected to take effect 12 months after the law becomes official, while companies will have two years to comply with most other rules.
The EU regulation, known as the 'AI Act', takes a risk-based approach, with the higher the risk of an AI system, the stricter the requirements.
For example, high-risk AI providers must conduct risk assessments to ensure their products are compliant with the law before being released to the public.
“We are taking measures proportionate to the AI model to regulate as little as possible and as much as necessary,” Breton told AFP.
Violating companies could be fined between 7.5 million euros and 35 million euros ($8.2 million to $38.2 million), depending on the type of violation and the size of the company.
The use of AI in predictive policing, or in systems that use biometric information to infer an individual's race, religion or sexual orientation, is also strictly prohibited.
The rules also ban real-time facial recognition in public places, with narrow exceptions for law enforcement, which must obtain prior approval from the relevant authorities before deploying such systems.
AI is likely to change every aspect of European life, and many stakeholders have been lobbying the EU as big tech companies vie for control of the lucrative market.
Watchdog groups on Tuesday cited lobbying efforts by French AI startup Mistral AI and Germany's Aleph Alpha, as well as US-based tech giants including Google and Microsoft.
They warned that enforcement of the new rules “could be further weakened by corporate lobbying,” adding that the study showed “how strong corporate influence was” during negotiations.
The three watchdogs, based in Belgium, France and Germany, said many details of the AI Act are still outstanding and will need to be clarified in a series of implementing acts, “for example on standards, thresholds and transparency obligations.”
Breton stressed that the EU had “withstood demands from special interests and lobbyists to exempt large-scale AI models from regulation,” adding: “The result is a balanced, risk-based, future-proof regulation.”
However, CCIA, one of the main technology lobby groups, warned that many of the new rules “remain unclear and could slow the development and deployment of innovative AI applications in Europe.”
Boniface de Champris of CCIA Europe said: “Proper enforcement of the law will therefore be crucial to ensuring that AI rules do not place an undue burden on companies seeking to innovate and compete in a vibrant market.”