With the advent of artificial intelligence, financial fraud is becoming harder to detect.
Interactive deepfakes, such as live phone calls or video conferences with what appear to be real people, are becoming more common. In February, a Hong Kong company lost HK$200 million (US$25.6 million) after its employees were fooled by a deepfake of the company's chief financial officer ordering them to transfer money during a video call.
Also in February, the Federal Trade Commission reported that it had identified thousands of AI scams in which fraudsters impersonate Elon Musk and his companies Tesla, SpaceX, Neuralink, and X to cheat people out of their money. One man from Pennington, New Jersey, lost $18,880 during a fake online event for a Tesla Cybertruck.
Lou Steinberg, managing partner at cyber research institute CTM Insights, said interactive deepfakes are particularly problematic because “people tend to trust people when they talk to them and get a response back.”
A November Gallup poll found that 15% of respondents said someone in their household had been a victim of financial fraud, and 57% worried that a scammer would trick them into sending money. That is higher than the 51% who worry about someone breaking into their car.
Those figures track with the experience of Stuart Sprenger, senior wealth advisor for personal wealth management at Citi. He says that, to his knowledge, about 10% of his clients have lost money to scams over the past few years.
How the Scams Work
Some deepfake calls impersonate financial institutions and try to steal money from potential victims by asking them to verify their accounts or read out their PINs; others use the voice of a family member to trick someone into transferring funds. “Microsoft says it can reproduce your voice from three seconds of audio,” Steinberg says.
In November, Philadelphia attorney Gary Schildhorn testified at a special Senate hearing on AI and fraud, explaining how he almost fell victim to a scammer who used AI to replicate his son's voice. The scammer claimed that his son had been in a car accident and needed $9,000 in bail.
Steinberg said AI can personalize mass email and text phishing, putting more people at risk. Gone are the days when fake emails were riddled with obvious typos; instead, consumers receive personalized fake communications that don't automatically get sorted into their spam folders. Links to fake websites have also become more sophisticated.
“They know where you work because you put it on LinkedIn. They know where you vacationed because you posted it on social media. They know the names,” he says.
It's not just scammers reaching out directly. Joanne Bradford, chief financial officer at financial advisor Domain Money, said AI is also making fake sellers harder to detect. Bradford, former president of the online shopping savings platform Honey, says people should vet sellers on social media, in Google reviews, and even over the phone before making a purchase.
“Now with AI, you can replicate anything very quickly. The provenance and trustworthiness of who you're actually buying from is very important,” she says.
Fighting Back
Stolen money can be difficult to recover, so experts urge people to take precautions such as checking their financial accounts frequently, using two-factor authentication, and limiting the amount of information they reveal.
Steinberg recommends that families agree on a code word to use if they're worried that an emergency call from a loved one is fake. He also recommends not answering identification questions with “yes,” since your response may be recorded and used elsewhere.
Interactive deepfake calls work from a script, so one way to detect an AI-generated call is to take the computer off script. Steinberg said a friend of his recently did just that after receiving what appeared to be an AI robocall asking him to verify personal information because his account might have been hacked. “So he says, ‘Last week, you ran over my duck.’ The AI doesn’t know how to react to that,” Steinberg says.
Veronica Perez, loss prevention manager at Affinity Federal Credit Union, agrees that scammers often work from a script, collecting enough personal information from websites to make their target believe the communication is legitimate. AI scams speak with a sense of urgency and use small amounts of public information to trick people into revealing more.
“Be careful about the information they're giving you. Are they using your name? If they say there's unusual activity on your Visa card, ask, ‘What kind of card is it?’ and have them give you the last four digits of the card number,” she said, adding that a legitimate institution contacting you will already have that information.
Even if the caller pressures you to stay on the line, don't be afraid to hang up and call back, she added. “They're trying to keep you from stopping to think, ‘Does this make sense?’” she says. “If you're told to keep it secret, to lie, or not to share it with friends, family, or the police, that's a big red flag.”
Write to editors@barrons.com.