SteelEye’s Communications Surveillance solution, powered by our Compliance CoPilot, is said to enable compliance teams to “master” eComms and vComms compliance. The platform claims to support any communication channel “including voice, chat, email, meetings, and social messaging.”
SteelEye explains that it “provides language detection, transcription, translation, and surveillance in 40+ languages, including risk detection for language switching across written and spoken communications.”
SteelEye says its AI assistant reduces the communication surveillance alert review process “by 75% through intelligent alert scoring, triaging, and workflow automation.”
SteelEye also gives compliance teams the ability “to easily tailor alerting criteria to meet specific needs.”
SteelEye’s data controls and governance framework “is designed to ensure the integrity and completeness of data.”
The platform provides a comprehensive audit “of data receipt and processing, with proactive data governance alerts.”
As AI continues to transform industries, its impact on areas “like communications surveillance is highly promising.”
At Regs & Eggs 2024, industry professionals from Baird, PwC, and TD Securities discussed the application of AI in financial services and “how firms can effectively navigate AI in crucial areas like communications surveillance.”
Overall, there is a positive outlook “for the role of AI in financial services.”
In particular, there is a view that AI and large language models (LLMs) can drive efficiencies, increase productivity, and “enhance risk detection in communications surveillance.”
However, to fully realize these benefits, several key considerations must be carefully addressed.
According to a blog post by SteelEye, one of the most interesting “aspects of AI and LLMs is their ability to optimize processes in communications surveillance.”
For example, many financial firms today simply “cannot review all call activity and work through the entire alert queue.”
With AI-powered surveillance, firms can move from “monitoring a proportion of communications activity to monitoring 100% of it.”
Consequently, many people believe the technology will “not only help firms streamline the review process and reduce false positives but also identify more bad actors.”
This can allow firms to “reduce administrative tasks and free up resources for higher-level work, resulting in improved job satisfaction and reduced compliance fatigue.”
The use of AI and LLMs can also “enhance risk detection accuracy in communications surveillance and help detect not only financial risks such as market abuse but also non-financial risks like culture and conduct issues.”
Voice surveillance stands out as “a particularly promising application of AI.”
AI’s ability to understand and alert “on multiple languages, even during language switching, is a game-changer. AI models equipped with LLMs can seamlessly handle these transitions, ensuring that no details of the conversations are missed due to language changes.”
Moreover, LLMs excel at processing different contexts of language use.
This is particularly powerful when considering voice.
People typically communicate differently in spoken language “compared to written text, often using colloquialisms, slang, and informal phrases.”
Then there are the complexities of accents.
Effective AI systems can distinguish and “understand these nuances, providing a more accurate and comprehensive analysis of communications.”
This contextual intelligence is vital for identifying risks “that might be overlooked in traditional surveillance methods but is, of course, dependent on accurate and precise voice transcription.”
Voice transcription technology has also benefitted “from the evolution of AI, where systems are now better able to understand voice and convert it to text.”
This is foundational to effective voice surveillance.
However, while most vendors tout high precision, recall, and accuracy rates, “it is essential that firms verify these claims through testing.”
A recommended approach is to “conduct manual reviews of vendor transcriptions.”
This involves taking a sample of calls, running them “through the vendor’s software, and then comparing the AI-supported transcripts with those produced manually by analysts.”
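The comparison step described above is commonly quantified with word error rate (WER), a standard transcription-quality metric: the number of word-level insertions, deletions, and substitutions needed to turn the vendor transcript into the analyst's transcript, divided by the length of the analyst's transcript. The sketch below is illustrative only — the function name and metric choice are ours, not SteelEye's methodology.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER of a machine transcript (hypothesis) against an
    analyst-produced transcript (reference), via word-level
    Levenshtein edit distance."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = minimum edits to turn the first i reference
    # words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# Example: one substituted word out of four → WER of 0.25
print(word_error_rate("sell the bond today", "sell the bund today"))
```

In practice a firm would run this over a representative sample of calls — across accents, languages, and line quality — and compare the aggregate WER against the vendor's advertised figures.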