Treasury calls on Banks to collaborate more to fight fraud


Lenders must share data to fight new AI fraud risks.

Last month, the Department of the Treasury issued a troubling report warning banks that they are at risk from emerging AI fraud threats. The culprit is a failure to collaborate. The report warns that lenders are not sharing "fraud data with each other to the extent that would be needed to train anti-fraud AI models."

This report should be a wake-up call. As any fraud-fighting veteran knows, combating fraud is a perpetual arms race. And when new technologies like generative AI emerge, the status quo is disrupted. Right now, the fraudsters are gaining the upper hand. According to a recent survey by the technology firm Sift, two-thirds of consumers have noticed an increase in scams since November 2022, when generative AI tools hit the market.

How is AI changing the fraud landscape?

According to the Treasury report, new AI technologies are "lowering the barrier to entry for attackers, increasing the sophistication and automation of attacks, and decreasing time-to-exploit." These technologies "can help existing threat actors develop and pilot more sophisticated malware, giving them complex attack capabilities previously available only to the most well-resourced actors. It can also help less-skilled threat actors to develop simple but effective attacks."

The same generative AI technology that helps people create songs, draw pictures, and improve their software coding is now being used by fraudsters. Fraudsters, for example, can purchase an AI chatbot on the dark web, called FraudGPT, to create phishing emails and phony landing pages. AI technology can help produce human-sounding text or images to support impersonation and generate realistic bank statements with plausible transactions. Informed.IQ's fraud consortium has identified fraudsters creating fraudulent bank statements that duplicate existing statements with minor variations in names and charges.

The financial services industry should pay attention to Treasury's call for more data sharing. AI models, including defensive AI fraud tools, require data to power them. The report notes that "the quality and quantity of data used for training, testing, and refining an AI model, including those used for cybersecurity and fraud detection, directly impact its eventual precision and efficiency."

Why is data sharing critical?

Data sharing is critical because it allows lenders to see emerging patterns as well as fraud threats outside their portfolio. With data sharing, once a fraudster commits fraud anywhere in the collective network, the whole network can defend against it. It’s like a collective immune system. Consortia are particularly important for smaller organizations like credit unions, which lack visibility into the overall threat environment.

We applaud industry efforts like the American Bankers Association's information exchange, which reportedly enables banks to share names and account information of suspected scammers. But these types of trade-association-led initiatives are invariably insufficient. Fraudsters operate at the speed of innovation, with massive data at their disposal and ever-changing methods of attack. It's not in the DNA of any trade group to keep up.

This is where private-sector organizations can help. It took companies like FICO and its Falcon software to corral transaction fraud. I worked at ID Analytics in the early 2000s when we created a consortium of identity information with major lenders to turn the tide on identity fraud. Just as with other fraud-fighting innovations, the solutions for AI-generated fraud will most likely come from industry itself.

Why Informed.IQ?

Informed is one of a number of companies that operate fraud consortia. Millions of loan applications pass through our system from the nation's largest lenders, auto finance companies, credit unions, and fintechs. We collect and analyze data in real time at the speed of the transaction, so we can help lenders stop a loan before it is funded. And we use the same data that AI fraudsters exploit to defend lenders, through anomaly detection, knowledge graphs, and historical knowledge of fraud patterns.

Yes, lenders should be worried about the new AI fraud threats. The problems are real, and they cannot be solved by any one company working in a silo.

Tom Oscherwitz VP of Legal
Tom Oscherwitz is Informed's VP of Legal and Regulatory Advisor. He has over 25 years of experience as a senior government regulator (CFPB, U.S. Senate) and as a fintech legal executive working at the intersection of consumer data, analytics, and regulatory policy.
