Majority of Financial Services Organizations with Over 5000 Employees have Implemented AI Systems, Report Claims

Artificial Intelligence has considerably impacted the financial services industry in recent years, especially since the release of OpenAI's ChatGPT in late 2022.

According to the Economist Intelligence Unit, 54% of financial services organizations with over 5000 employees “have implemented AI systems.” Most recently, the now-familiar customer-facing chatbot has been “joined by more advanced implementations of generative AI in behind-the-scenes processes.”

But with all the benefits AI can bring to Fintech, “including increased operational efficiency, enhanced decision-making, and improved customer engagement, it carries certain risks.”

To ensure the continued health of financial service ecosystems, Fintechs must understand and seek to mitigate such risks, “which touch on a wide range of matters, including data privacy and security, the opacity of AI decision-making processes (‘black box’ problem), and the potential for model degradation.”

The Limits of AI Systems

Darius Padvelskis, a data scientist at Baltic Amadeus — a software development and IT consulting company, explains:

“It is crucial for Fintechs to stay mindful of the fact that AI systems are not without their limitations and that those limitations can lead to unintended consequences when they are not properly evaluated.”

At the most basic level, AI systems, like all computers, “are vulnerable to cybersecurity threats. When such systems are entrusted with sensitive financial information, any security breach can be potentially disastrous for customers, banks, and associated Fintechs.”

In addition to traditional cyber threats, AI systems are vulnerable to novel threats like those focused “on manipulating data at some stage of the AI lifecycle to exploit inherent limitations of AI algorithms.”

Padvelskis says:

“To reduce the risk and potential impact of a data security breach, Fintechs who use AI would be well-advised to implement robust AI security compliance programs.”

At a higher level, AI systems also “face the risk of model performance degradation, which occurs when an AI model is unprepared to handle new data, leading to suboptimal or incorrect forecasts.”
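The degradation described above is often caught by monitoring for data drift: comparing the distribution of live inputs against the data the model was trained on. As a hypothetical illustration (not drawn from the report), one common drift metric is the Population Stability Index, sketched here with synthetic data; the 0.1/0.2 thresholds are conventional rules of thumb, not fixed standards:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index: compares the distribution of a
    feature at training time ('expected') with live production data
    ('actual'). Values above ~0.2 are commonly read as major drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) for empty bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 10_000)  # distribution the model was trained on
live = rng.normal(1, 1, 10_000)   # shifted production data

stable = psi(train, train[:5000])  # same distribution: small PSI
drifted = psi(train, live)         # shifted distribution: large PSI
```

In practice, a sustained PSI above the alert threshold would trigger model retraining or a fallback to more conservative decision rules.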

When AI facilitates financial processes like trading, for example, minor errors can quickly have outsized consequences, “especially in the case of high-frequency trading.”

One solution to this issue has been “monitored machine learning,” wherein only certain trade decisions are left to AI while others are made by humans.
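A minimal sketch of that routing logic, assuming hypothetical confidence and order-size thresholds (the names and limits here are illustrative, not from any real trading system): the model executes routine decisions automatically and defers uncertain or high-stakes ones to a human reviewer.

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.90  # assumption: below this, defer to a human
MAX_ORDER_USD = 50_000   # assumption: large orders always need sign-off

@dataclass
class TradeDecision:
    action: str        # "buy", "sell", or "hold"
    confidence: float  # model's self-reported confidence, 0..1
    notional_usd: float

def route(decision: TradeDecision) -> str:
    """Send a model decision either to automatic execution or to a
    human reviewer, keeping people in the loop for risky cases."""
    if decision.confidence < CONFIDENCE_FLOOR:
        return "human_review"  # model unsure: defer
    if decision.notional_usd > MAX_ORDER_USD:
        return "human_review"  # stakes too high to automate
    return "auto_execute"

routine = route(TradeDecision("buy", 0.97, 10_000))    # auto_execute
large = route(TradeDecision("sell", 0.97, 250_000))    # human_review
unsure = route(TradeDecision("hold", 0.60, 1_000))     # human_review
```

The design choice is deliberate asymmetry: the system can only err toward asking a human, never toward silently executing a decision it should have escalated.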

Padvelskis added:

“For all its power, or perhaps because of it, AI is still most effective when its tasks and data environment are precisely defined, and human monitors are near at hand to keep it in check.”

A Case of AI Bias

Certain aspects of AI systems “are more challenging to control, given the scope of time and data involved in their training.” This is a function of the so-called “black box” problem, which refers to the potential lack of transparency and explainability “in AI decision-making processes.”

As noted in the update:

“Because AI algorithms are extremely complex, explaining how and why certain decisions are reached can be difficult. This lack of transparency can have legal, ethical, or even just reputational implications for banks and Fintechs who use AI.”

The case of the Apple credit card in 2019 “offers a clear example of the issue’s complexity.”

When users of Apple’s card alleged “that the provider’s algorithms seemed to issue smaller lines of credit to women than to men, Apple and Goldman Sachs defended their AI system against claims of gender bias by citing a third-party audit of its algorithms.”

But some critics remained unsatisfied, “claiming that algorithmic audits themselves are challenging to conduct in a way that produces straightforward and meaningful answers.”
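The critics' point can be seen even in the simplest audit arithmetic. As a hypothetical sketch (unrelated to the Apple/Goldman case), comparing approval rates between two groups and applying the informal "four-fifths rule" used in some US fair-lending analyses produces a single number, but that number alone cannot say whether the disparity reflects bias or legitimate credit factors:

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs.
    Returns the per-group approval rate, a first cut at an audit."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates, protected, reference):
    """Ratio of approval rates; values below 0.8 trip the informal
    'four-fifths rule' often cited in disparate-impact screening."""
    return rates[protected] / rates[reference]

# Synthetic example: group A approved 80/100 times, group B 50/100
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 50 + [("B", False)] * 50)
rates = approval_rates(sample)
ratio = disparate_impact(rates, "B", "A")  # 0.5 / 0.8 = 0.625
```

A ratio of 0.625 would flag a disparity, yet it says nothing about *why* the gap exists, which is exactly the interpretive difficulty the critics raise.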

Because financial institutions must comply with various regulations and provide justifications for their decisions, using opaque AI systems “can make it extremely difficult to demonstrate compliance and satisfy regulatory requirements.”

Padvelskis explains:

“Undoubtedly, AI can and will help Fintechs achieve their aims. But doing so will require keeping a close eye on things; implementation must be done carefully and deliberately.”

As mentioned in the update, Baltic Amadeus is “a technology partner that simplifies complex business digitalization processes.”

With over 250 IT professionals, they “deliver IT solutions and strategic advice, leveraging the latest technologies and innovative project management models.”

With 30+ years of experience, they are “a trusted partner for clients in various industries, creating ROI-based solutions for businesses of all sizes.”
