While AI is helping businesses grow and innovate, the burgeoning technology is also helping criminals come up with sophisticated fraud methods, according to financial services firm Plaid.
According to a Plaid spokesperson, presentation attacks, also known as liveness attacks, have increased by 38% this year alone. A liveness attack is when a malicious party tries to trick the video portion of the verification process by impersonating someone else, whether by holding up a photo, wearing a realistic mask, or displaying a deepfake image on screen. About 12% of all liveness attacks used AI-generated faces, and about 25% of fraudulent ID document attempts used generative AI, the spokesperson added.
Alain Meier, head of identity at Plaid, said fraud is becoming increasingly easy given the availability of AI-based tools, publicly available data online, and data found on the dark web.
“The bar for conducting a sophisticated fraud attack is getting lower every year,” he added.
About 90% of Plaid's customers require some form of "know your customer" (KYC) process, a spokesperson said. This means businesses must verify customers' identities when they create new accounts. For example, many banks and financial services companies now require users to submit a video of themselves as part of the verification process. The company then matches the video against documents like the customer's driver's license.
But more sophisticated scammers have found a workaround. They are using AI to create deepfake videos of potential victims in order to access those victims' accounts. "Fraud has become very professional," Meier added.
Meier shared an example of how Plaid's technology and human analysts helped uncover potential fraud. In late 2023, financial services clients were registering new users and using Plaid's identity verification software to ensure there was a real person behind each account. Part of the process included a liveness check, in which users were asked to submit a selfie video. Plaid's software noticed that multiple users shared a similar IP address, which raised a red flag.
Plaid analysts then reviewed the videos and found that several videos had the same background: a large number of phones and devices mounted against a brick wall. Further investigation revealed that an organized crime group based in Eastern Europe was behind the fakery.
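The IP-clustering signal that triggered the red flag above can be sketched in a few lines. This is a minimal illustration, not Plaid's actual detection logic; the function name, threshold, and sample data are all hypothetical.

```python
from collections import defaultdict

def flag_shared_ips(signups, threshold=3):
    """Group signup attempts by IP address and flag any IP used by
    `threshold` or more distinct new accounts -- a crude proxy for the
    kind of clustering that raised the red flag in the anecdote above."""
    accounts_by_ip = defaultdict(set)
    for account_id, ip in signups:
        accounts_by_ip[ip].add(account_id)
    # Keep only IPs shared by suspiciously many new accounts.
    return {ip: sorted(accounts)
            for ip, accounts in accounts_by_ip.items()
            if len(accounts) >= threshold}

# Illustrative data: three signups from one IP, one from another.
signups = [
    ("acct-01", "203.0.113.7"),
    ("acct-02", "203.0.113.7"),
    ("acct-03", "203.0.113.7"),
    ("acct-04", "198.51.100.2"),
]
print(flag_shared_ips(signups))
# {'203.0.113.7': ['acct-01', 'acct-02', 'acct-03']}
```

In practice a real system would normalize addresses (shared NATs, VPN exit nodes) before treating a shared IP as suspicious; here the flag is only a prompt for human review, as it was in Plaid's case.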
However, some deepfakes slip past human reviewers. When that happens, machine learning tools can detect subtle alterations in fake documents, photos, and videos by analyzing background elements, associated file metadata, and how the material was presented, Meier said.
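Detection systems like the ones Meier describes typically combine several weak signals into a single risk score rather than relying on any one tell. The sketch below is purely illustrative; the signal names and weights are invented for the example and are not Plaid's.

```python
def liveness_risk_score(sample):
    """Combine weak fraud signals into a heuristic risk score (0-10)
    for a submitted selfie video. Signals and weights are illustrative."""
    weights = {
        "screen_replay_detected": 5,  # artifacts from filming a screen showing a deepfake
        "metadata_mismatch": 3,       # file metadata inconsistent with the claimed device
        "background_duplicate": 2,    # background already seen in other submissions
    }
    return sum(w for signal, w in weights.items() if sample.get(signal))

# Illustrative submission: two of three signals fire.
sample = {
    "screen_replay_detected": True,
    "metadata_mismatch": False,
    "background_duplicate": True,
}
print(liveness_risk_score(sample))  # 7
```

A production system would derive such weights from trained models rather than hand-tuning them, but the shape is the same: many small cues, none conclusive alone, aggregated into a score that decides whether a human reviews the case.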
Still, it is difficult for many businesses, especially small and midsize ones, to keep up with the pace of evolving fraud techniques. Meier said many simply don't have the resources. He added: "This fraud is so sophisticated that every company would need to have an in-house team of fraud experts."