A settlement has been reached between the executor of George Carlin's estate and the podcast producers who used generative artificial intelligence to imitate the late comedian's voice and style in an unauthorized special.
Carlin's estate notified the court on Tuesday of an agreement with Will Sasso and Chad Kultgen, hosts of the Dudesy podcast, to settle the case. Josh Schiller, an attorney for the estate, said the agreement would result in a court order barring any further use of the already-deleted video, which the lawsuit alleged violated the comedian's rights. Further terms of the deal were not disclosed, and Schiller declined to say whether the settlement included a financial component.
The settlement is believed to be the first resolution to a lawsuit involving the appropriation of a celebrity's voice or likeness using AI tools. The move comes as Hollywood warns against using the technology to exploit the personal brands of actors, musicians, comics and others without their consent or compensation.
“This is a message that we need to be very careful in how we use AI technology, and we need to respect people's hard work and good intentions,” Schiller said. He added that the agreement “provides a blueprint for resolving similar disputes in the future, when artists and public figures have their rights violated by AI technology.”
The legal battle began with an hour-long special titled "George Carlin: I'm Glad I'm Dead," released on the podcast's YouTube channel in January. In the episode, an AI-generated Carlin imitates the comedian's signature style and cadence, delivering commentary over AI-generated images and tackling contemporary topics such as reality TV, streaming services and the prevalence of AI itself.
The podcast bills itself as a "first-of-its-kind media experiment," and the show's premise revolves around an AI program called "Dudesy AI," which is said to have access to most of the hosts' personal records, including their text messages, social media accounts and browsing history, and to write episodes in the style of Sasso and Kultgen.
Schiller said the podcasters approached Carlin's estate with an offer to remove the video and agree never to republish it on any platform.
The lawsuit filed by Carlin's estate alleges copyright infringement for using Carlin's copyrighted works without permission.
The beginning of the video explains that the AI program that created the special ingested 50 years of Carlin's original stand-up routines, owned by Carlin's estate, as training material.
The complaint also alleges that the use of Carlin's name and likeness violated publicity laws. It noted that the special was promoted as an AI-generated Carlin episode, in which the deceased comedian would be “resurrected” using AI tools.
The Carlin special wasn't the first time Dudesy used AI to impersonate a celebrity. Last year, Sasso and Kultgen released an episode featuring an AI-generated Tom Brady performing a stand-up routine. It was removed after the two received a cease-and-desist letter.
There is no federal law covering the use of AI to imitate a person's likeness or voice, leaving a patchwork of state laws to fill the gap. Even so, there is little recourse for people in states that have not passed such protections, which has prompted lobbying efforts from Hollywood.
Those efforts helped prompt a bipartisan coalition of House members to introduce a long-awaited bill in January that would ban the publication and distribution of unauthorized digital replicas, including deepfakes and voice clones. By creating an intellectual property right at the federal level, the bill aims to give individuals the exclusive right to authorize the use of their image, voice and visual likeness. It would impose stiff penalties for unauthorized use and allow individuals and entities whose exclusive rights were infringed to sue.
In March, Tennessee became the first state to pass legislation specifically aimed at protecting musicians from the unauthorized use of AI to imitate their voices. The Ensuring Likeness Voice and Image Security (ELVIS) Act builds on the state's older right-of-publicity law by adding an individual's "voice" to the protected realm. California has not yet updated its own right-of-publicity statute.
Tuesday's deal comes as OpenAI prepares to launch a new tool that can recreate a human voice from a 15-second recording: given a sample and a text prompt, it can read the text aloud in the recorded voice. The Sam Altman-led company said it is not releasing the technology publicly until it better understands its potential for harm, such as being used to spread misinformation or to impersonate people in order to facilitate fraud.
With the rise of AI voice-imitation tools, there is debate over whether platforms hosting infringing content should be held accountable. Under the Digital Millennium Copyright Act, platforms such as YouTube qualify for certain safe-harbor protections so long as they take specific steps to remove potentially infringing content when notified. Artist advocacy groups are calling for changes to the law.
“This is not a problem that will resolve itself,” Schiller said in a statement. “We must take swift and strong action in the courts, and there must be some accountability from AI software companies whose technology is being weaponized.”