- Google was fined Wednesday over issues with how it trains its AI.
- French regulators fined the tech giant 250 million euros (about $270 million).
- The watchdog said Google broke a pledge by using media content to train Bard, now called Gemini.
Google was fined about $270 million on Wednesday, partly due to problems with how it trains its AI.
French regulators said Google backtracked on its commitment to negotiate content deals with French news organizations. The watchdog said Google used journalists' content without their knowledge to train its AI chatbot Bard (since rebranded as Gemini).
In a previous settlement, Google promised to "negotiate in good faith based on transparent, objective, and non-discriminatory standards," which regulators called "Commitment 1."
The regulator said that while legal questions over the use of news content to train AI models remain unresolved, "at a minimum, we believe that Google violated Commitment 1 by failing to notify publishers that their content would be used in the Bard software."
Regulators also said Google failed to cooperate with an oversight trustee set up as part of an earlier settlement, failed to negotiate in good faith and failed to provide complete revenue information to negotiating parties.
The California-based company was fined 250 million euros for the listed violations and did not dispute the facts, according to French regulators.
Google said in a statement Wednesday that the fine was “not proportionate” to the allegations.
Google said it agreed to the payment because “it's time to move on.”
Google said in a statement that it is focused on “a sustainable approach to connecting people with high-quality content, and our ambition to work constructively with French publishers.”
“Throughout the past several years, we have actively discussed concerns from publishers and the FCA, and that continues to be the case,” Google said in a statement. “But now is the time to provide greater clarity on who should pay and how, so that all parties can chart a course towards a more sustainable business environment.”
How tech companies train their chatbots remains a contentious topic and has already been tested in court.
In 2022, UK regulators fined AI company Clearview nearly $9 million over the way it collected biometric data for facial recognition. That fine was overturned a year later on appeal by the UK's First-tier Tribunal (General Regulatory Chamber).
The New York Times sued OpenAI late last year over its ChatGPT bot, claiming the AI company broke the law by using the newspaper's content to train its large language models. OpenAI asked a judge to dismiss at least part of the lawsuit, alleging that the Times hired someone to "hack" its platform, an allegation the Times denies.
Meanwhile, some publishers (including Business Insider's parent company, Axel Springer) have struck content deals with AI companies such as OpenAI, the maker of ChatGPT.
Correction, March 20, 2024: A previous version of this article incorrectly stated that this is the first time a company has been fined in connection with AI training. In 2022, Clearview was fined by UK regulators for scraping biometric data. The fine was later canceled on appeal.
On February 28, Axel Springer, the parent company of Business Insider, joined 31 other media groups in filing a $2.3 billion lawsuit against Google in a Dutch court, alleging losses caused by the company's advertising practices.