OPAQUE, a startup building infrastructure to secure sensitive data used in artificial intelligence systems, has raised $24 million in a Series B funding round led by Walden Catalyst, the company said.
Returning investors Intel Capital, Race Capital, Storm Ventures and Thomvest participated in the round, alongside new investor and strategic partner Advanced Technology Research Council (ATRC).
The funding brings OPAQUE’s total capital raised to $55.5 million and values the company at about $300 million post-money.
The company is positioning itself at the center of a growing challenge facing large organisations: how to deploy generative AI and autonomous agents on proprietary data without exposing sensitive information or breaching compliance requirements.
Enterprises across industries are eager to use internal data to improve productivity and decision-making. Yet many AI initiatives remain stuck in pilot stages because security, legal and compliance teams worry about privacy risks, policy enforcement and auditability.
“Enterprises will continue to struggle to bring AI into production until they have verifiable guarantees that their most sensitive data and models are protected,” said Young Sohn, founding managing partner at Walden Catalyst, in a statement.
OPAQUE’s platform uses confidential computing and cryptographic verification to provide what it describes as runtime proof that data remains private, that model weights are not exposed and that governance policies are enforced before, during and after AI execution.
Chief executive Aaron Fulkerson said the company is building a “confidential-first” infrastructure so organisations can safely deploy AI on sensitive datasets.
The latest funding will support product development and expansion into areas including confidential AI training, post-quantum security and sovereign cloud environments.
The investment follows the launch of OPAQUE Studio, a development environment that allows enterprises to build and deploy AI agents with verifiable privacy and audit controls.
Spun out of research at UC Berkeley’s RISELab, the company said its customers and partners include ServiceNow, Anthropic, Accenture and organisations in financial services, healthcare and insurance.
The funding reflects a shift in enterprise AI spending toward governance and security as companies move from experimentation to large-scale deployment.
As regulatory scrutiny and data sovereignty concerns rise, tools that provide verifiable auditability and policy enforcement are becoming critical purchase criteria.