How generative AI is making fraud a lot easier—and cheaper—to pull off
Generative AI offers seemingly endless potential to magnify both the scale and the sophistication of fraud against financial institutions and their customers; it is limited only by a criminal’s imagination.
The astounding pace of innovation will challenge banks’ efforts to stay ahead of fraudsters, in part because generative AI–enabled deepfakes incorporate a “self-learning” system that constantly checks and updates its ability to fool computer-based detection systems.3
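That adversarial loop is, loosely, the mechanism behind generative adversarial networks (GANs), which underpin many deepfake tools: a generator and a detector are trained against each other, so every gain in detection provokes a counter-adjustment in evasion. The toy sketch below is purely illustrative (one-dimensional data, not actual deepfake software), but it shows the dynamic:

```python
# A minimal generator-vs-detector loop on 1-D toy data, illustrating the
# adversarial "self-learning" dynamic described above. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # The "genuine" signal the generator tries to imitate.
    return torch.randn(n, 1) * 0.5 + 2.0

gen = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
det = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # real-vs-fake logit

g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(det.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Detector update: learn to separate genuine from generated samples.
    real, fake = real_batch(64), gen(torch.randn(64, 4)).detach()
    d_loss = bce(det(real), torch.ones(64, 1)) + bce(det(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator update: adjust output specifically to fool the current detector.
    g_loss = bce(det(gen(torch.randn(64, 4))), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Generated samples drift toward the genuine distribution (mean near 2.0).
print(gen(torch.randn(1000, 4)).mean().item())
```

Every improvement in the detector is immediately folded into the generator’s next update, which is why detection tools trained on yesterday’s fakes can age so quickly.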
Specifically, readily available generative AI tools put deepfake videos, synthetic voices, and fictitious documents within cheap and easy reach of bad actors. An entire cottage industry already exists on the dark web, selling scamming software for anywhere from US$20 to thousands of dollars.4 This democratization of nefarious software is eroding the effectiveness of many current anti-fraud tools.5
Little wonder, then, that financial services firms are particularly concerned about generative AI fraud that targets client accounts. One report found that deepfake incidents in fintech increased 700% in 2023.6 For audio deepfakes alone, the technology industry lags in developing tools that can reliably identify fake content.7
Some fraud types may be more vulnerable to generative AI than others. For example, business email compromise, one of the most common types of fraud, can cause substantial monetary loss, according to data from the FBI’s Internet Crime Complaint Center, which tracks 26 categories of fraud.8 Fraudsters have long compromised individual and business email accounts through social engineering to conduct unauthorized money transfers. With generative AI, however, bad actors can perpetrate fraud at scale, targeting multiple victims simultaneously with the same or fewer resources. In 2022 alone, the FBI counted 21,832 instances of business email fraud, with losses of approximately US$2.7 billion. The Deloitte Center for Financial Services estimates that generative AI email fraud losses could total about US$11.5 billion by 2027 in an “aggressive” adoption scenario.
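As a rough sanity check on that projection (not part of the Deloitte model), compounding the FBI’s 2022 figure forward to the 2027 estimate implies annual growth of roughly a third. The snippet below assumes a constant growth rate purely for illustration:

```python
# Back-of-the-envelope: implied compound annual growth rate (CAGR) if
# business email fraud losses grow from US$2.7B (2022) to US$11.5B (2027).
# The endpoints come from the text; the constant-rate assumption is ours.
base, projected, years = 2.7, 11.5, 2027 - 2022
cagr = (projected / base) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")  # ~33.6%
```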
Banks have been at the forefront of using innovative technologies to fight fraud for decades. However, a US Treasury report found that “existing risk management frameworks may not be adequate to cover emerging AI technologies.”9 Where older fraud systems relied on hand-coded business rules and decision trees, financial institutions today commonly deploy artificial intelligence and machine learning tools that detect threats, raise alerts, and respond. For instance, some banks use AI to automate the processes that diagnose fraud and route investigations to the appropriate team.10 Some are already incorporating large language models to detect signs of fraud, as JPMorgan does for email compromises.11 Similarly, Mastercard is working to prevent credit card fraud with its Decision Intelligence tool, which scans a trillion data points to predict whether a transaction is genuine.12
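To make that shift concrete, the sketch below contrasts a hand-written business rule with an unsupervised anomaly detector. The features, thresholds, and numbers are hypothetical, intended only to illustrate the rules-versus-learning distinction, not any bank’s production system:

```python
# Hypothetical contrast: legacy rules-based screening vs. a learned
# anomaly detector for transaction fraud. All features and thresholds
# here are illustrative assumptions, not any institution's real system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" transactions: [amount_usd, hour_of_day, km_from_home]
normal = np.column_stack([
    rng.lognormal(3.5, 0.6, 5000),   # everyday purchase amounts (median ~US$33)
    rng.normal(14, 4, 5000) % 24,    # activity centered on daytime hours
    rng.exponential(8, 5000),        # mostly close to home
])

def rules_flag(txn):
    """Legacy approach: fixed, hand-maintained thresholds."""
    amount, hour, distance = txn
    return amount > 2000 or (hour < 5 and distance > 100)

# ML approach: learn what "normal" looks like and flag statistical
# outliers, with no hand-written threshold per fraud pattern.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious = np.array([[4800.0, 3.0, 650.0]])  # large amount, 3 a.m., far away
print(rules_flag(suspicious[0]))   # True  (tripped a fixed rule)
print(model.predict(suspicious))   # [-1] -> flagged as an anomaly
```

The practical difference is maintenance: rules must be rewritten as fraud patterns shift, while a learned model can be retrained on fresh transaction data.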