Large financial institutions have an advantage over smaller institutions in preventing AI-related fraud.
This is according to a new report from the U.S. Treasury Department on addressing risks centered on artificial intelligence (AI) in the financial sector.
“As more companies adopt AI, a gap exists in the data available to financial institutions to train their models,” the department said in a press release accompanying the report on Wednesday, March 27. “This gap is stark in the area of fraud prevention, where data sharing between companies is inadequate.”
Large financial institutions (FIs) can benefit significantly by leveraging internal data to create such models, the report said. Smaller financial institutions, by contrast, tend to have less historical data and typically lack the in-house expertise needed to develop their own anti-fraud AI models.
The report also reveals the growing need for financial institutions to share data with each other to better train anti-fraud AI and machine learning (ML) models.
“Unlike data on cybersecurity, very little fraud information is shared across the financial sector, limiting the ability to aggregate fraud data for use in AI systems,” the report said. “Most financial institutions interviewed by Treasury expressed the need for greater collaboration in this area, especially as fraudsters themselves use AI and ML technologies.”
PYMNTS Intelligence research shows that financial institutions use a variety of tools to prevent fraud. Across the board, they report combining their own fraud prevention systems, third-party resources, and new technology to protect their customers.
Last September, PYMNTS Intelligence compiled the 2023 State of Fraud and Financial Crime in the United States, which found that 66% of bank executives said they were leveraging AI and ML to fight fraud, up from 34% in 2022.
“However, developing AI and ML tools can be expensive, which is why only 14% of financial institutions say they are building their own AI and ML technology to fight fraud,” PYMNTS wrote. “Nearly 30% say they rely entirely on third-party vendors to provide these tools. Similarly, just 11% of financial institutions develop their own APIs in-house, while 22% rely entirely on third-party API solutions.”
Meanwhile, PYMNTS recently spoke with Robin Lee, Hawk AI's APAC general manager, about the need to combine technology and critical thinking when combating financial crime, an approach he described as “RoboCop, not Terminator.”
“When the first 'RoboCop' movie came out, the tagline was part man, part machine, all cop,” he said. “That does a good job of summarizing the approach we should take, as opposed to the Terminator, which is 100% machine.”