If financial services companies want to derive value from artificial intelligence (AI), they must define clear use cases and prepare for what’s coming.
Asher Lohman helps them do that. Lohman is the vice president of data analytics for Trace3, an integrator and reseller of enterprise technology. He helps businesses in multiple sectors maximize the value of their technology investments.
Lohman said it’s important for companies like his to have an innovative mindset; spotting trends early lets them prepare their clients for what’s ahead. Trace3 stays on top by partnering with more than 60 global venture capitalists and Israeli startups that are leading innovation.
“We have insight into what’s coming next and where investments are happening,” Lohman explained. “That is a differentiator to our clients, letting them see technology innovations before they are the next mainstream thing out there.”
How to deliver true value with AI
Companies considering AI investments must cut through the buzz to see where the technology can deliver true business value. Sympathizing with those who must implement an AI strategy with little direction, Lohman said there can be a huge gulf between the hype and genuine value.
“We don’t see AI as a check-box experience,” he said. “It needs to be business transformational. For us, it’s understanding what are those business use cases, what is available to support those use cases, taking that data and putting it in a place that can be accessed by these different models and helping them build new models to extract that value.”
In financial services, a chief priority is improving fraud detection accuracy. Lohman said one way to do that is through adaptive learning: newer machine-learning models continually ingest data to refine their fraud-detection patterns, spotting more anomalies over time.
That journey begins with understanding normal user behavior: where, how and when a person typically accesses a system. The AI detects patterns and builds a profile for each user. Should a deviation occur, the user can be prompted for additional verification, such as a password or facial recognition.
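To make the approach concrete, here is a minimal sketch of this kind of behavioral anomaly detection, assuming scikit-learn’s IsolationForest; the features, numbers and threshold are illustrative assumptions, not a description of any vendor’s actual system.

```python
# A minimal sketch of behavioral anomaly detection on login sessions.
# Feature choices and values are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: hour of day, geographic distance from
# the user's usual location (km), and a device familiarity score (0-1).
normal_sessions = np.column_stack([
    rng.normal(14, 2, 500),      # logins cluster around 2 p.m.
    rng.exponential(5, 500),     # usually close to home
    rng.uniform(0.8, 1.0, 500),  # known devices
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# A 3 a.m. login from 4,000 km away on an unfamiliar device.
suspicious = np.array([[3, 4000, 0.1]])
if model.predict(suspicious)[0] == -1:  # -1 means outlier
    print("Deviation detected: prompt for step-up authentication")
```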
“That’s where the most effective use of AI is happening in financial services today,” Lohman said.
But the fraudsters are also using it…
Fraudsters are using AI, too, which is why phishing pitches are now better written. Gone are the days of poorly written pleas from Nigerian princes.
That forces defenders to up their game: How is the user accessing the system? What is their entry point?
Lohman said consumers can protect themselves from being victimized by limiting their exposure to AI-driven scams: if you don’t recognize a phone number or the sender of a text message, don’t answer. The caller’s purpose may be to record your voice or to confirm your number is active.
“A lot of the time, AI is driving what’s happening there,” Lohman said. “Building the voice print, that’s all AI-generated. Using text messaging and doing it at scale is all AI managed by targeting people who’ve responded in the past.”
Lohman cited an extreme case in which the CFO of a Hong Kong-based company was directed to wire millions of dollars to scammers. The executive logged on to a Zoom call with staff members he recognized and was told by a superior to complete the transfer.
“There was not a single real person on that call,” Lohman said. “The interactions with the CFO were AI-generated.”
If AI-based security systems rely on identifying deviations from normal behavior, can they adapt when criminals use AI to subtly shift those norms for their own purposes?
Lohman said they can. Such security systems track hard-to-replicate actions like typing speed and even mouse-clicking patterns, and deviations from those patterns can be caught. The systems augment that by mining unstructured data, such as a user’s calls to a help center, for additional profile indicators.
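As a rough illustration of behavioral-biometric checks like these, the sketch below scores typing cadence against a stored per-user profile; the feature and the cut-off are hypothetical.

```python
# A minimal sketch of behavioral-biometric scoring against a stored profile.
# The single feature (inter-keystroke timing) and threshold are assumptions.
from dataclasses import dataclass

@dataclass
class TypingProfile:
    mean_ms_between_keys: float  # average inter-keystroke interval
    std_ms_between_keys: float   # typical variation for this user

def keystroke_z_score(profile: TypingProfile, observed_ms: float) -> float:
    """How many standard deviations the observed cadence is from the norm."""
    return abs(observed_ms - profile.mean_ms_between_keys) / profile.std_ms_between_keys

profile = TypingProfile(mean_ms_between_keys=180.0, std_ms_between_keys=25.0)

# A session typing far faster than this user ever does (e.g., scripted input).
score = keystroke_z_score(profile, observed_ms=60.0)
if score > 3.0:  # hypothetical cut-off
    print(f"z = {score:.1f}: cadence inconsistent with profile, flag session")
```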
Continuous learning helps further: systems incorporate new transactions, external fraud cases, third-party data and ongoing testing.
“Now we’re letting AI do the model training for itself,” Lohman said.
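A minimal sketch of what such self-updating training can look like, assuming scikit-learn’s SGDClassifier and its incremental partial_fit method; the data feed and update schedule are stand-ins for the real internal and third-party sources Lohman describes.

```python
# A minimal sketch of a continuous-learning loop for fraud detection.
# The data source and cadence are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()

def fetch_new_labeled_cases():
    """Stand-in for pulling recent transactions plus confirmed fraud labels
    from internal systems and third-party feeds."""
    X = np.random.rand(256, 10)                   # 10 illustrative features
    y = (np.random.rand(256) < 0.02).astype(int)  # ~2% confirmed fraud
    return X, y

# Each batch of fresh, labeled data updates the model incrementally,
# so detection patterns drift along with real-world fraud behavior.
for _ in range(5):  # e.g., one update per day
    X, y = fetch_new_labeled_cases()
    model.partial_fit(X, y, classes=[0, 1])
```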
Where data, explainability, and the human element fit in
Compliance and data privacy are issues system designers must contend with. Lohman said these systems must access data to stay current on fraud patterns, and the privacy implications of that access are greater than many realize.
That makes explainability critical: system designers must be prepared to explain how the system works, and why it made a given decision, while addressing legal and regulatory concerns.
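One common way to ground that kind of explanation is to rank which inputs drive a model’s decisions. The sketch below does this with scikit-learn’s built-in feature importances for a tree-based model; the feature names and toy data are assumptions.

```python
# A minimal sketch of explainability via feature importances.
# Features and the toy labeling rule are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

features = ["amount", "hour_of_day", "geo_distance_km", "device_age_days"]

X = np.random.rand(1000, len(features))
y = (X[:, 2] > 0.95).astype(int)  # toy rule: very distant logins are fraud

model = RandomForestClassifier(random_state=0).fit(X, y)

# Rank which inputs drive the model's decisions, so designers can answer
# a regulator's "why was this flagged?" question in plain terms.
for name, weight in sorted(zip(features, model.feature_importances_),
                           key=lambda p: -p[1]):
    print(f"{name}: {weight:.2f}")
```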
Lohman said humans and AI systems must work in tandem. The AI models can identify and pull out important information, but humans must screen those decisions for accuracy.
Fraud detection systems can also be improved by increasing the amount of quality data available to train them. While decades of data exist in some cases, documentation on why certain data suits specific use cases lags behind. That gap shows up as AI systems giving different answers to the same questions.
“Bringing in fresh and clean data blended for specific use cases…Most organizations are going to have to go through some level of this data cleanup process,” Lohman concluded.
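For readers wondering what that cleanup process looks like in practice, here is a minimal pandas sketch of the usual steps (deduplication, dropping incomplete records, normalizing values); the column names and rules are illustrative only.

```python
# A minimal sketch of routine data cleanup before model training.
# Column names and cleaning rules are hypothetical examples.
import pandas as pd

raw = pd.DataFrame({
    "txn_id":  [1, 2, 2, 3, 4],
    "amount":  [120.0, -5.0, -5.0, None, 89.5],
    "country": ["US", "us", "us", "CA", "GB"],
})

clean = (
    raw.drop_duplicates(subset="txn_id")                      # remove duplicates
       .dropna(subset=["amount"])                             # drop incomplete rows
       .assign(country=lambda d: d["country"].str.upper())    # normalize codes
       .query("amount > 0")                                   # filter invalid amounts
)

print(clean)
```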