Verda is powering ExpressAI, a privacy-first AI platform where user prompts are cryptographically isolated at the hardware level, not just protected by policy.
Most AI platforms make a promise: we won't look at your data. It's a policy statement backed by organizational controls including access lists, audit logs, and contractual language. For many use cases, that's sufficient.
But for applications where privacy isn't just a feature but the entire product proposition, policy alone isn't enough. What happens when the question shifts from "do you promise not to look?" to "can you prove it's technically impossible to look?"
That’s the question ExpressVPN set out to answer when they built ExpressAI. And it’s the question that brought them to Verda.
What is ExpressAI?
ExpressAI is a privacy-first AI platform that’s part of the ExpressVPN suite. It is designed around a foundational principle: user prompts should remain private by design, not by policy.
To achieve this, ExpressAI runs workloads inside secure enclaves, where data is cryptographically isolated during processing. This means:
- Prompts are inaccessible while in use. Not even infrastructure operators can read the data during processing.
- No persistence of user data beyond the session. Once the interaction ends, the data is gone.
- No use of prompts for model training. The architecture enforces this.
This architecture enables a zero-access AI environment, where privacy is enforced at the hardware and system level rather than through organizational controls alone.
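The session-scoped lifecycle described above can be illustrated conceptually. The sketch below is not ExpressAI's implementation, which relies on hardware enclaves rather than application code; it only shows the "no persistence beyond the session" idea in plain Python, and every name in it is hypothetical:

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_prompt(prompt: str):
    """Hold a prompt in a mutable buffer for the duration of one
    session, then zero it so nothing survives the interaction."""
    buf = bytearray(prompt.encode("utf-8"))
    try:
        yield buf
    finally:
        # Overwrite the buffer in place: no copy is written to disk,
        # logged, or retained for training.
        for i in range(len(buf)):
            buf[i] = 0

# The prompt is only readable inside the with-block.
with ephemeral_prompt("what is confidential computing?") as buf:
    # Stand-in for model inference on the prompt.
    answer = buf.decode("utf-8").upper()
# After the session ends, the buffer holds only zeros.
```

In a real confidential-computing deployment, this guarantee comes from the enclave boundary itself rather than from cooperative code like this, which is what makes it enforceable against the operator.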
Why ExpressVPN Chose Verda
Building confidential AI at production scale required more than access to GPUs. It required an infrastructure partner willing to co-engineer the solution, and one that shared a structural commitment to data protection.
Three factors shaped this collaboration:
European data sovereignty by default. For a product built on the promise that user data stays private, where that data is processed matters as much as how it is encrypted. Verda’s infrastructure limits exposure to extraterritorial regimes such as the US CLOUD Act and operates under GDPR from the ground up.
Joint R&D, not just rack space. The project required going beyond standard cloud procurement. As part of this partnership, we conducted joint R&D focused on enabling confidential computing on the NVIDIA Blackwell architecture.
Fast access to cutting-edge compute. ExpressAI's product proposition depends on serving the latest open-source models at competitive performance and cost, which means running on the newest GPU architecture available, not waiting in line for it.
During this process, the two teams:
- Enabled confidential computing for multi-node configurations on NVIDIA HGX™ B200
- Worked on early-stage support for distributed secure GPU clusters
- Validated performance and isolation for large-scale inference workloads
This work was carried out early in the Blackwell lifecycle, helping define how confidential computing can extend beyond single-node environments into production-grade AI systems.
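Extending confidential computing across nodes hinges on attestation: each node must prove it is running the expected software stack before it joins the secure cluster. The sketch below is a conceptual illustration only, not the NVIDIA attestation flow used in this work; the "measurement" here is a plain hash standing in for a hardware-signed attestation report, and all names are hypothetical:

```python
import hashlib

# Hypothetical golden measurement: the hash of the firmware/driver
# stack every GPU node is expected to be running.
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-gpu-stack-v1").hexdigest()

def attest(node_software: bytes) -> str:
    """Stand-in for a hardware attestation report: a measurement
    (hash) of the node's software stack."""
    return hashlib.sha256(node_software).hexdigest()

def admit_nodes(reports: dict) -> list:
    """Admit only nodes whose reported measurement matches the
    golden value; anything else stays out of the secure cluster."""
    return [node for node, m in reports.items() if m == EXPECTED_MEASUREMENT]

reports = {
    "node-a": attest(b"trusted-gpu-stack-v1"),
    "node-b": attest(b"tampered-stack"),
}
cluster = admit_nodes(reports)  # only node-a is admitted
```

In production systems the measurement is signed by the hardware root of trust and verified against vendor certificates, so a tampered node cannot forge a passing report; the admission logic, however, follows the same match-or-reject shape.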
What This Means for the Industry
Confidential computing in AI is still early. Most implementations are limited to single-GPU or single-node environments. Extending cryptographic isolation to multi-node, multi-GPU configurations requires solving problems that don't have off-the-shelf answers yet.
The work Verda and ExpressVPN have done together helps define how confidential computing can move beyond proof-of-concept (PoC) into real, production AI systems. It's a proof point that the architecture works, that the performance trade-offs are manageable, and that European infrastructure is where this kind of work naturally belongs: a jurisdiction where data protection isn't an afterthought but a legal and cultural baseline.
Confidential Computing on Verda
For teams building AI applications where data privacy is non-negotiable, whether due to regulatory requirements, customer expectations, or the nature of the data itself, Verda provides infrastructure purpose-built for confidential computing workloads.
Today, we support:
- Single-GPU confidential computing upon request for NVIDIA RTX PRO 6000 Blackwell Server Edition
- Managed proof-of-concepts for multi-GPU and multi-node with the Blackwell Ultra, Blackwell, and Hopper architectures
In the near future, we will be adding out-of-the-box support for GPU Instances and multi-node GPU clusters for NVIDIA B200.
Get Started
Low-friction PoC: Deploy confidential computing on a single RTX PRO 6000 upon request. → Read the docs
White-glove PoC: Request and validate multi-GPU or multi-node confidential computing architectures with our engineering team. → Contact us