SoSafe’s 2024 research report, based on insights from 500 global security professionals and 100 customers across 10 countries, reveals a fast-evolving cybersecurity landscape that poses major challenges for individuals and organizations alike.
A considerable 87% of respondents encountered AI-driven cyberattacks last year, underscoring the growing threat of artificial intelligence in the hands of malicious actors.
Notably, 91% predict a significant rise in such threats over the next three years, yet only 26% feel highly confident in detecting them.
This gap reveals a critical vulnerability as organizations worldwide grapple with the challenges of AI adoption.
The report highlights the evolution of cyberattacks into more sophisticated, multichannel operations.
A significant 95% of respondents noted an uptick in attacks spanning email, text, social media, and other platforms over the past two years.
An example involves WPP’s CEO, who was targeted by a scam starting with trust-building via WhatsApp, progressing through Microsoft Teams, and culminating in an AI-generated deepfake voice call aimed at extracting sensitive data and funds.
Such incidents illustrate how AI expands attack surfaces, exposing firms to risks like data poisoning, where manipulated training inputs corrupt a model’s outputs, and AI hallucinations, where models confidently generate false information.
SoSafe’s research findings reveal a lack of preparedness.
Over half (55%) of organizations have not fully implemented controls to mitigate risks from their own AI tools, leaving them open to exploitation.
Attackers are leveraging AI for obfuscation—masking their intent and origins—cited as the top concern by 51%.
Meanwhile, 45% worry about entirely new attack methods, and 38% fear the sheer speed and scale of automated assaults.
Andrew Rose, SoSafe’s Chief Security Officer, warns that AI is amplifying the sophistication and personalization of attacks.
“Businesses are aware of the threat, but our data shows they’re not confident in their ability to respond.”
The shift from rudimentary email phishing to what Rose calls “3D phishing” marks a new era of deception.
By blending voice, video, and text across platforms, attackers mimic legitimate communication, making detection harder.
Even internal AI tools, like chatbots designed to assist employees, can become unwitting accomplices.
Rose points out that few firms consider how these systems could be manipulated to reveal sensitive data or identify key targets, urging a rigorous security overhaul that addresses both tech and human vulnerabilities.
Despite these challenges, SoSafe CEO Niklas Hellemann sees AI as a potential ally.
He said:
“It’s one of our greatest tools against evolving threats, but its strength depends on the people using it.”
The research report emphasized that technology alone isn’t enough—cybersecurity awareness is vital.
Without employees trained to spot AI-driven scams, even advanced defenses falter.
Hellemann advocates a blended approach: human expertise, heightened awareness, and careful AI integration.
As AI adoption accelerates globally, it is becoming clear that organizations must strengthen their defenses against increasingly advanced cyberattacks while ensuring their own AI remains an asset rather than a liability.