Senior cybersecurity executives find themselves navigating a pivotal moment where artificial intelligence is simultaneously amplifying risks and strengthening protective measures. A new EY Cybersecurity Roadmap Study of 500 top corporate security officials highlights how organizations are urgently channeling resources into advanced AI tools and self-governing systems to counter the surge in intelligent cyber threats.
The research shows that an overwhelming 96 percent of these leaders view AI-enhanced cyberattacks as a major risk to their operations.
Nearly half—48 percent—estimate that AI contributed to at least one-quarter of the security breaches their companies faced over the previous 12 months.
Despite this reality, fewer than half express strong confidence in their ability to repel a large-scale AI-assisted intrusion.
This gap underscores a pressing need to overhaul traditional security setups and place intelligent systems at their foundation rather than merely layering them on existing frameworks.
“Industry participants have been integrating AI capabilities to keep pace with evolving dangers, yet their limited confidence points to the necessity of building security architectures where AI serves as a central element,” notes Ganesh Devarajan, Americas Consulting Cyber Risk Practice Leader at EY.
He emphasizes that simply automating outdated approaches falls short; instead, companies must develop fully AI-integrated strategies that weave protection into every layer of enterprise technology and foster trust across operations.
In response, nearly all respondents—99 percent—anticipate that strategic AI deployment will fundamentally reshape both proactive threat prevention and reactive security protocols.
However, 85 percent of those already leveraging AI for cybersecurity report that existing budgets cannot adequately address the new challenges.
This shortfall is expected to change dramatically, with the proportion of organizations allocating at least one-quarter of their total cybersecurity spending to AI-powered solutions projected to jump from just 9 percent today to 48 percent within two years, a more than fivefold increase.
These expanded resources will enable a shift toward more sophisticated “agentic” AI platforms.
These autonomous systems go beyond basic automation, handling intricate, multi-stage responses that mimic human decision-making across interconnected environments.
Devarajan explains that such investments allow teams to evolve from routine task handling to deploying advanced agents capable of orchestrating complex countermeasures in real time.
A striking 97 percent of leaders believe their organization’s market position over the coming two years will hinge directly on how mature their agentic AI defenses become.
Usage of these systems is forecast to roughly double across critical functions by 2028:

- Advanced persistent threat detection: rising from 30 percent to 62 percent
- Real-time fraud monitoring: 32 percent to 58 percent
- Identity and access controls: 23 percent to 51 percent
- Vendor risk oversight: 25 percent to 50 percent
- Data privacy compliance: 27 percent to 48 percent
- Safeguards against deepfakes and impersonation: 23 percent to 42 percent
While almost every organization maintains an AI cybersecurity governance structure—and 98 percent of those with frameworks deem them vital for ethical deployment—implementation lags significantly.
Only 20 percent have fully refined and culturally embedded these policies. Another 51 percent have integrated them into core processes, and 26 percent report complete rollout across business units.
Industry professionals stress that bridging this divide requires moving past isolated protections toward comprehensive trust architectures that embed governance, ethics, and compliance into AI initiatives, transforming potential vulnerabilities into strategic strengths.
Conducted between December 2025 and January 2026 among U.S.-based directors and C-suite executives at firms generating at least $500 million annually across 12 sectors, the study carries a ±4 percent margin of error.
Its findings signal a clear industry pivot: as AI threats intensify, forward-thinking organizations are betting heavily on intelligent, autonomous defenses paired with robust oversight to stay resilient.