MrBeast has become the biggest YouTuber in the world, thanks in part to his elaborate giveaways.
He has handed out thousands of free Thanksgiving turkeys and once left a waitress a $10,000 tip for two glasses of water. So when a video surfaced showing him offering the newly released iPhone to thousands of people for the low price of $2, it seemed like one of his typical stunts.
There was just one problem: it wasn't actually him. He said the video was the work of someone who had used artificial intelligence to reproduce his likeness without his permission.
“Are social media platforms ready for the rise of AI deepfakes?” MrBeast, whose real name is Jimmy Donaldson, wrote in a post on X, formerly Twitter. “This is a serious problem.”
Welcome to the world of deepfake advertising, where the product may be genuine but the endorsement is anything but. Here, a video that appears to show a celebrity pitching anything from a dental plan to cookware is actually a hoax, generated with AI technology that mimics the person's voice, appearance, and mannerisms.
Of course, fake celebrity endorsements have been around for as long as celebrities themselves. What has changed is the quality of the tools used to create them. Instead of simply claiming that a celebrity endorses a product, scammers can now fabricate a video to "prove" it and fool unsuspecting consumers.
With a few clicks and a little know-how, savvy fraudsters can generate audio, video, and still images that are increasingly difficult to identify as fakes, even if the practice is still relatively nascent in the advertising world.
“It's not big right now, but I think the technology is getting better and better, and there's still a lot of potential for it to get even bigger,” said Colin Campbell, an associate professor of marketing at the University of San Diego who has published research on AI-generated ads.
Celebrities such as Tom Hanks and Gayle King are also targets of AI fraud
There is no shortage of abuses of AI technology.
An artificially generated robocall used President Joe Biden's voice to urge New Hampshire voters to skip the state's primary election. And last month, sexually explicit images of pop star Taylor Swift surfaced online, sparking calls for regulation.
On Friday, a number of major technology companies signed an agreement to work to prevent AI tools from being used to interfere with elections.
But the technology is also being used to deliver fabricated product endorsements designed to reach into people's pockets.
“The burden is placed on people who are bombarded with information to be the arbiters of protecting themselves financially, first and foremost,” said Britt Paris, an assistant professor at Rutgers University who studies AI-generated content. “The people who are making these technologies available, the people who are actually profiting from deepfake technology… they don't really care about the public.”
Actor Tom Hanks and broadcaster Gayle King are among those who have claimed their voices and images have been altered without their consent and linked to unauthorized giveaways, promotions and endorsements.
“We're at a new crossroads here in terms of what's possible in using someone's likeness,” Paris said.
USA TODAY has debunked similar endorsement claims, including one that Kelly Clarkson endorsed weight-loss keto gummies and another that an Indian billionaire promoted a trading program. A video that appears to show Clarkson has been viewed more than 48,000 times.
Yet they continue to pop up, in part because they are so easy to create.
A USA TODAY search of Meta's ad library turned up multiple videos that appeared to be AI-generated hoaxes, claiming that Elon Musk is handing out gold bars and that Jennifer Aniston and Jennifer Lopez are offering liquid Botox kits.
“Any time they can't pay an actor or celebrity to appear in an ad, they'll just make them appear, right?” Paris said. “These small fraud companies… will no doubt use the tools at their disposal to try to extract as much money as possible from people.”
“This software is very easy to use”
Experts say the creators of these fake testimonials usually follow a simple process.
They start with text-to-speech programs that generate audio from a written script. Other programs can use small samples of a particular celebrity's authentic voice, sometimes only about a minute long, to recreate that voice convincingly, said Siwei Lyu, a digital media forensics expert at the University at Buffalo.
Other programs then generate lip movements to match the words in the audio track and superimpose them over the person's mouth in the video, Lyu said.
“All the software is very easy to use,” Lyu said.
The videos are also easy to create in bulk and customize for specific audiences, which poses another problem: videos that are not widely distributed can be hard to find and hard for law enforcement to track. The Meta ad library, for example, contained 63 versions of the ads attributed to Lopez and Aniston. Many were active for only a day or two, accumulating hundreds of views before being removed and replaced with new ones.
“Most of the time, they're not going out broadly,” Campbell said. “They just target specific consumer groups, and only those people can see them. That makes them even more difficult to detect.”
For now, there are still clues visible to the naked eye that an AI-generated video isn't real. Teeth and tongues are difficult to recreate artificially, Lyu said. In some cases, fake videos are even too perfect, eliminating the pauses, breaths, and other imperfections of human speech.
But the technology has come so far in such a short time that within “a few years” it could become impossible to distinguish a fake video from a real one, Campbell said.
“Video tools aren't as good as image-based ones,” he said. “But video is essentially just a combination of images, right? So it's just a matter of processing power and gaining experience with it.”
Think critically and use online AI detection tools
Social media users have several tactics at their disposal to protect themselves. Some of them were identified by the Better Business Bureau in an alert issued in April 2023.
The key is to think critically.
“Tom Hanks — it would seem strange that he would be selling dental insurance,” Paris said. “Based on what you know about a particular celebrity, if it doesn't pass the smell test, it's probably not worth putting much stock in, and certainly not worth sharing. Don't believe it, at least until you go and do a little homework and background research.”
Companies typically do not limit their legitimate advertising to a single social media platform. For example, a real video posted on Facebook may also appear on Instagram, TikTok, and YouTube.
There are also several online detectors that can determine with varying degrees of accuracy whether an image is real or generated by AI.
Social media users who aren't yet familiar with these tools and tips still have a little time to figure them out, but maybe not a lot.
“I would say fake commercials are a threat,” Lyu said. “But they're not really a danger to everyone. Yet.”