Deduce Study Shows AI-Generated Fraud Surge

Deduce, a fraud technology platform designed to prevent AI-generated identity fraud, in partnership with Wakefield Research, this week released a study showing alarming growth in synthetic fraud. The study, which polled 500 financial executives in the United States, found a 17% increase in synthetic fraud cases over the past two years, with more than a third of respondents reporting increases of 20% to 50%.

Synthetic identities combine legitimate personally identifiable information, such as Social Security numbers, with fabricated details designed to pass as a real person. The technique has been used for decades but has recently grown in complexity. Deduce said that, thanks to generative AI, fraudsters are accelerating their use of synthetic identities to such a degree that they can circumvent existing fraud prevention solutions and escalation policies, including document verification and manual reviews. The study indicates that while the industry continues to invest in fraud prevention, 52% of experts believe fraudsters are adapting faster than defenses can keep up.

“Synthetic identity fraud has long been a significant challenge for the financial industry, but the advent of AI technology has accelerated the problem,” said Ari Jacoby, Deduce CEO. “Fraudsters are now able to create identities at an unprecedented pace, allowing them to play the long game with these personas. They can open accounts, make deposits, and engage in seemingly human-like interactions that pose an immense challenge for technology-based detection methods. Without better fraud prevention solutions, we can anticipate a spike in the financial impact associated with these identities.”

Seven out of every eight experts believe this problem will worsen before an effective solution is found. The increasing sophistication of fraudsters and their ability to adapt to evolving security measures create a formidable challenge for institutions trying to safeguard their assets and customer data.

This increase in synthetic fraud and the rising cost of each incident underscore a systemic problem in the way identities are analyzed today. As AI makes it harder to separate real people from fakes, financial institutions are not only at risk of fraudsters applying for loans or credit, but also of proactively extending credit to criminals. In fact, 53% of respondents said they had proactively offered credit to synthetic customers.
