Prove’s Financial Crime Report: Majority of Consumers Concerned About AI-Based Fraud

Holiday shopping season is a favorite time of year for many, but as consumers everywhere start preparing to open their wallets ahead of Black Friday, “a new type of threat has begun rearing its ugly head: AI-based fraud.”

The recent boom in AI capabilities and technologies has “had widespread benefits for both companies and individuals but has also enabled fraudsters to launch more sophisticated scams, posing significant risks to consumers.”

But are consumers even aware of what AI-based fraud is? And how do they feel “about identity verification and authentication, identity fraud in general, and the customer experience while shopping online?”

To answer these questions, Prove has put together “a 2023 Online Shopping and AI-Based Fraud Report summarizing the results of two surveys commissioned by Prove and conducted by market research company Dynata in October 2023.”

The report sheds light on consumer attitudes “towards online shopping and fraud, but interestingly also reveals a significant opportunity for retailers to potentially capture more account signups through advanced identity verification and frictionless onboarding.”

Here are key insights into what the data means and recommendations on “how retailers and ecommerce brands can take action.”

As noted in the update from Prove, consumers are concerned “about shopping online this holiday season, and are particularly worried about AI-based fraud attacks.”

The findings of the 2023 Online Shopping and AI-Based Fraud Report indicate “that there is significant concern about fraud during the holiday season,” with:

  • 81% of consumers saying they are worried about fraud while shopping online this holiday season, and
  • 84% of consumers saying they’re concerned about AI-based fraud attacks this holiday season.

As explained in a blog post by Prove, AI-based fraud, also “known as artificial intelligence-based fraud, refers to fraudulent activities or scams that are facilitated or enhanced by the use of artificial intelligence (AI) technologies.”

Common types of AI-based fraud include:

1. Social Engineering Schemes:

The use of AI enables fraudsters “to turbo-charge social engineering scams such as phishing, vishing, and business email compromise (BEC) scams. AI helps criminals automate processes to create personalized, convincing, and highly effective messages to targets.”

As a result, cybercriminals can launch “a higher volume of attacks in less time, with a heightened success rate.”

2. Password Hacking:

Cybercriminals have harnessed AI to “enhance their password-cracking algorithms.”

These smarter algorithms “enable quicker and more accurate guessing, making hackers more efficient and profitable. With AI, hackers can guess passwords more effectively, posing a significant threat to the security of online accounts and systems.”

This calls for a proactive approach “to fortifying passwords with advanced multi-factor authentication.”
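Multi-factor authentication pairs a password with a second, time-limited proof that the user holds an enrolled device. As a rough illustration only (not Prove’s product or any implementation named in the report), here is a minimal sketch of an RFC 6238 time-based one-time password (TOTP) check in Python using just the standard library; the helper names `totp` and `verify_code` are hypothetical.

```python
import base64
import hashlib
import hmac
import os
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval               # number of elapsed time steps
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


def verify_code(secret_b32: str, submitted: str) -> bool:
    """Constant-time comparison of a submitted code against the current OTP."""
    return hmac.compare_digest(totp(secret_b32), submitted)


if __name__ == "__main__":
    # Demo: enroll a user with a random shared secret, then check a code.
    secret = base64.b32encode(os.urandom(10)).decode()   # shared once at enrollment (e.g., via QR code)
    code = totp(secret)                                   # what the user's authenticator app would display
    print(verify_code(secret, code))                      # True while the code is within the current step
```

In practice, a real deployment would also accept a small window of adjacent time steps to absorb clock drift and would rate-limit failed attempts; the report itself does not prescribe a specific implementation.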

3. Deepfakes & Voice Cloning:

AI’s ability to manipulate visual and audio content “has given rise to deepfakes and voice cloning. Cybercriminals can use these techniques to impersonate individuals, spreading fabricated content across influential social media platforms.”

These malicious tactics can be combined “with social engineering, extortion, or other types of fraud schemes, making them even more insidious.”


