APP Scams in the UK Are on the Rise and Financial Institutions Are Under Pressure, Regtech Professional Claims

Kathy Gormley, the AML Product Manager at Resistant AI, a start-up using machine-learning techniques to protect organizations from fraud, says that APP scams are on the rise, and financial institutions are under pressure to stem the flow of ill-gotten gains from these crimes.

The Payment Systems Regulator notes that APP scams accounted “for more than 40% of fraud losses in the UK in 2022.”

This is a worrying trend, “particularly in light of the upcoming APP fraud reimbursement requirements being introduced in 2024.”

Kathy from Resistant AI explains that as the perpetrators of APP fraud need multiple accounts to launder their proceeds, banks – particularly the digital-only challenger firms – “should be more vigilant at the customer onboarding stage, rather than waiting and relying on transaction monitoring to spot mules.”

Kathy, who has 10 years’ experience in RegTech, explains that authorized push payment (APP) fraud is a form of scam “where an individual or business is socially engineered or conned into sending – and authorizing – a payment into a criminal’s bank account, whilst thinking it is a legitimate transaction.”

For example, the scammer may “pose as a representative of the victim’s bank informing them of a breach, and recommending they move all their money into another ‘safe’ account, which is actually their own. For such scams, money mule accounts are typically used to receive the fraudulent funds. The funds are then moved through the financial system to distance them from the initial receiving account before they move to the trail breaker stage where the money is taken out of the financial system.”
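
For illustration, the receive-and-forward behavior described here is the kind of pattern transaction monitoring looks for. The minimal sketch below is an assumption-laden simplification rather than Resistant AI's or any bank's actual logic: the Transfer model, the 24-hour window and the 90% forwarding threshold are all invented for the example.

```python
# Illustrative only: flag accounts that receive a credit and quickly forward
# most of it on, a pattern consistent with the mule-account layering described
# above. The data model and thresholds are assumptions, not real monitoring rules.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Transfer:
    account: str        # account identifier on our books
    direction: str      # "in" (credit) or "out" (debit)
    amount: float
    timestamp: datetime

def flag_pass_through(transfers: list[Transfer],
                      window: timedelta = timedelta(hours=24),
                      forward_ratio: float = 0.9) -> set[str]:
    """Flag accounts that forward at least `forward_ratio` of an inbound
    credit within `window` of receiving it."""
    flagged: set[str] = set()
    credits = [t for t in transfers if t.direction == "in"]
    debits = [t for t in transfers if t.direction == "out"]
    for credit in credits:
        forwarded = sum(
            d.amount for d in debits
            if d.account == credit.account
            and credit.timestamp <= d.timestamp <= credit.timestamp + window
        )
        if forwarded >= forward_ratio * credit.amount:
            flagged.add(credit.account)
    return flagged
```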

According to Kathy, all banks are “faced with an onslaught from organized criminal gangs attempting to open accounts so they can launder money. However, the problem is more noticeable at new digital banks because they typically have a higher proportion of new accounts to their overall customer base.”

Also, digital banks have “innovated convenient, low-friction digital onboardings. This slick and seamless customer experience that challengers offer makes it easy to open accounts – and this convenience is attractive to fraudsters.”

She also noted that fraud (like many things) is “made easier with generative artificial intelligence (AI), the AI that generates speech, text, images, videos and other content. Tools like ChatGPT can be used to create more efficient attacks and give criminals turbo fuel, which has the potential to overwhelm financial institutions.”

She further noted that generative AI can be “used by criminals to create the personas that reach out to potential victims. It can be used to create content to write seemingly authentic emails from financial institutions. Deepfake technology has been used by fraudsters to replicate individuals in order to open accounts, but now generative AI can create a ‘person’ out of nothing – an AI-generated individual who can talk and move in response to ‘liveness’ checks.”

She added that automation “offers criminals opportunities to scale. Mass serial fraud is a serious threat and criminals are testing banks’ onboarding processes to identify even the smallest vulnerabilities and opportunities to exploit. At Resistant AI, we see growing evidence that these attacks are being automated. In the millions of documents analyzed by our engine we have seen attempts that use the same document templates repeatedly (both fraudulently created, as well as real ones).”

Resistant AI has identified, for example, “a single passport being used over 2,500 times in a 20-day period.”
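
One simple way to picture that kind of mass reuse is to fingerprint every uploaded document and count repeats across applications. The sketch below is purely illustrative and is not how Resistant AI's engine works: it only catches byte-identical files via a SHA-256 hash, and the reuse threshold of five is arbitrary.

```python
# Purely illustrative: count exact-duplicate document fingerprints across
# onboarding applications. A byte-identical passport image submitted many
# times collapses to a single SHA-256 digest; real document-fraud engines
# rely on richer forensic and visual features.
import hashlib
from collections import Counter

def fingerprint(document_bytes: bytes) -> str:
    return hashlib.sha256(document_bytes).hexdigest()

def reused_documents(applications: list[tuple[str, bytes]],
                     threshold: int = 5) -> dict[str, int]:
    """applications: (application_id, uploaded_document_bytes) pairs.
    Returns the fingerprints seen at least `threshold` times."""
    counts = Counter(fingerprint(doc) for _, doc in applications)
    return {digest: n for digest, n in counts.items() if n >= threshold}
```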

She also shared that in its review of best practice to detect and prevent money mules, the FCA identified machine learning as “one way for firms to reduce the inherent risks of static rules-based systems.”

At Resistant AI they believe that “the traditional rules-based approach is not fit for purpose, especially in an age where AI tools are freely available to the ever-evolving fraudsters. Machine learning can respond and adapt as threats change, rather than waiting for new behavior to emerge and writing new rules after the fact. In tackling rising APP fraud, we need to fight fire with fire and use AI to prevent criminals using money mule accounts.”
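
The contrast she draws can be sketched in a few lines. In the hypothetical example below, the onboarding features, thresholds and training data are invented for illustration and do not come from Resistant AI: a hand-written rule fires only on the exact pattern its author anticipated, while an unsupervised model such as scikit-learn's IsolationForest flags anything that looks unlike the bulk of ordinary applications.

```python
# Hypothetical illustration of static rules vs. machine learning on onboarding data.
# Invented features: applications from the same IP in 24h, accounts already seen
# on the device, minutes taken to complete the form.
import numpy as np
from sklearn.ensemble import IsolationForest

X_train = np.array([          # "normal" applications (made-up data)
    [1, 1, 12.0], [1, 1, 9.5], [2, 1, 15.0], [1, 2, 8.0], [1, 1, 11.0],
    [2, 1, 10.5], [1, 1, 14.0], [1, 1, 7.5], [2, 2, 13.0], [1, 1, 10.0],
])

def static_rule(row) -> bool:
    # A hand-written rule: fires only on the exact pattern someone anticipated
    # (more than 5 applications from one IP and a form completed in under 2 minutes).
    return row[0] > 5 and row[2] < 2.0

model = IsolationForest(contamination=0.1, random_state=0).fit(X_train)

candidate = np.array([[4, 9, 1.0]])   # many accounts on one device, very fast form
print(static_rule(candidate[0]))      # False: slips past the rule
print(model.predict(candidate))       # typically [-1], i.e. flagged as unusual
```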


