GB300
Sovereign European cloud

Flexible access to NVIDIA® GB300 NVL72

From a single tray to multiple racks

Turnkey infrastructure  ·  Expert support  ·  Powered by NVIDIA®

Configurations

Suitable for ad-hoc experiments and production workloads

Specifications

Per 1x tray
GB300 4x
Each tray contains:
  • 1152 GB GPU VRAM (4x 288 GB HBM3e)
  • 900 GB CPU RAM
  • 128 cores across 2x Grace CPUs
  • 1.8 TB/s NVLink v5 bandwidth per GPU

The GB300 Grace-Blackwell superchip features two Blackwell Ultra B300 GPUs and one Grace CPU, an enhancement from the previous generation's configuration of a single Hopper GPU and one Grace CPU. These components are interconnected via NVLink v5, facilitating a unified memory domain. Each GPU is equipped with 288GB of HBM3e memory.

The NVL72 rack integrates 18 trays, each containing two GB300 Grace-Blackwell superchips, for four B300 GPUs and two Grace CPUs per tray (72 GPUs and 36 Grace CPUs per rack). Unlike an InfiniBand cluster, the GB300 uses NVLink v5 for GPU interconnectivity, providing significantly higher throughput and lower latency for GPU-to-GPU communication and optimizing performance for demanding computational tasks.
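The tray and rack figures above follow directly from the per-GPU numbers. As a sanity check, here is a short sketch of that arithmetic; the 288 GB HBM3e per-GPU figure comes from the text, and everything else is derived, not a vendor specification:

```python
# Back-of-the-envelope GB300 NVL72 topology and memory math.
GPUS_PER_SUPERCHIP = 2    # two Blackwell Ultra B300 GPUs per GB300 superchip
SUPERCHIPS_PER_TRAY = 2   # two GB300 superchips per tray
TRAYS_PER_RACK = 18       # trays in one NVL72 rack
HBM_PER_GPU_GB = 288      # HBM3e per B300 GPU (from the text)

gpus_per_tray = GPUS_PER_SUPERCHIP * SUPERCHIPS_PER_TRAY   # 4 GPUs per tray
gpus_per_rack = gpus_per_tray * TRAYS_PER_RACK             # 72 GPUs: the "72" in NVL72
vram_per_tray_gb = gpus_per_tray * HBM_PER_GPU_GB          # 1152 GB, matching the tray spec
vram_per_rack_tb = gpus_per_rack * HBM_PER_GPU_GB / 1024   # ~20 TB rack-aggregate GPU memory

print(gpus_per_rack, vram_per_tray_gb, round(vram_per_rack_tb, 2))
```

The rack-aggregate result lines up with the roughly 20 TB of GPU memory quoted in the performance section.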


Pricing

Long-term discounts available
$7.990/h per 1x GB300 GPU

Performance

Built for the age of AI reasoning
Metric                  Per NVL72 rack
FP4                     1080 PFLOPS (1.08 EFLOPS)
FP8                     360 PFLOPS
FP16/BF16               180 PFLOPS
FP32                    3 PFLOPS
GPU memory              20 TB
GPU memory bandwidth    576 TB/s
NVLink bandwidth        130 TB/s
Max GPU TDP             100.8 kW
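Dividing the rack-aggregate figures above by the 72 GPUs in an NVL72 recovers plausible per-GPU numbers. This is illustrative arithmetic only, not an official per-GPU specification:

```python
# Derive approximate per-GPU figures from the NVL72 rack-aggregate table.
GPUS_PER_RACK = 72

rack = {
    "fp4_pflops": 1080,   # dense FP4, rack aggregate
    "mem_bw_tb_s": 576,   # HBM3e bandwidth, rack aggregate
    "nvlink_tb_s": 130,   # rounded; 72 x 1.8 TB/s = 129.6 TB/s
    "tdp_kw": 100.8,      # max GPU power, rack aggregate
}

per_gpu = {metric: value / GPUS_PER_RACK for metric, value in rack.items()}

print(per_gpu["fp4_pflops"])            # dense FP4 PFLOPS per GPU
print(per_gpu["mem_bw_tb_s"])           # HBM3e TB/s per GPU
print(round(per_gpu["tdp_kw"] * 1000))  # watts per GPU
```

The per-GPU NVLink result (about 1.8 TB/s) also matches the NVLink v5 bandwidth listed in the tray specification.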

GB300 delivers cutting-edge performance, with substantial gains over previous generations.

Get the architecture overview and performance implications from our technical blog, which covers:

  • Per-GPU specifications for Blackwell and Blackwell Ultra GPUs
  • Key differences between GB300 and B200
  • NVL72 rack-aggregate performance
  • NVLink and NCCL test results
Read the analysis

Success stories

Battle-tested with open-source projects and select customers
  • "Verda's GB300, combined with their infrastructure support, has provided me with an extremely stable experience. We used their cluster while developing DeepSeek v32 RL, and it was consistently reliable. It allowed me to fully focus on development without having to worry about machine setup or infrastructure issues. The experience was truly exceptional."

    Yueming Yuan, RL Core Developer
  • "vLLM on GB300 provided by Verda is amazing! Trillion-param-level open-source monsters like Kimi K2.5 and GLM-5 are within reach easily. Models that used to need a whole rack now scream on one node. The future isn't coming, it's already shipping tokens."

    Kaichao You, Core Maintainer

Expert support

Offering deep expertise with the Blackwell Ultra Architecture
Verda is among the first NVIDIA GB300 NVL72 providers, offering deep expertise with the Blackwell Ultra architecture. Building upon early adoption of HGX™ B300, we deployed one of the very first GB300 NVL72 systems in the European Union.

We host and operate GB300 NVL72 systems at our data center locations in the Nordics, powered by 100% renewable energy sources. Our in-house engineering teams handle the full lifecycle, including:

  • Hardware installation
  • Infrastructure provisioning
  • Systems engineering

Verda's first GB300 NVL72 system was battle-tested with production workloads from open-source projects, such as SGLang and vLLM, and select customers. All early users highlighted the system's stability and the quality of support.

With Verda, you gain reliable, flexible access to GB300 NVL72 systems, backed by expert support and sensible SLAs.

Read the announcement

Verda Cloud Platform

Full-stack AI cloud, rethought from scratch
  • Full-stack AI

    Flexible architecture for efficient experimentation, training, and inference at any scale.
  • Efficient

    Cutting-edge hardware with compute, storage, and networking optimized for peak efficiency.
  • Developer-first

    Web console, developer docs, API, native SDK, Terraform, and more.
  • Reliable

    Historical uptime of over 99.9% with fair compensation for service disruptions.
  • Expert support

    Proactive support from our experienced team of ML craftsmen and infrastructure experts.
  • AI R&D

    In-house expertise from contributing to frontier research and open-source projects.
  • Cost-effective

    Streamlined GPU access at up to 90% lower costs than hyperscalers. Long-term discounts available.
  • Secure and sovereign

    European service that complies with GDPR and adheres to ISO 27001, 27017, 27018, and 27701.
  • Sustainable

    Hosted in efficient Nordic data centers that utilize 100% renewable energy sources.

Verda Stack

Peak efficiency across software, compute, storage, and networking

AI developer ecosystem

Interfaces for provisioning and managing platform resources.

Web console · API · SDK · Terraform · Partner integrations

Managed services

Platform layer for running and scaling AI workloads.

Auto-scaling containers · Batch jobs · Container registry · Inference API

Cloud compute

Virtualized resources for user-managed environments.

Instant Clusters · Virtual machines

Core infrastructure

Compute, storage, and networking powering platform operations.

NVIDIA GPUs · CPU nodes · Block storage · Shared Filesystem · Object Storage · NVLink · InfiniBand · RoCE

Data center foundation

End-to-end control for predictable cost, performance, and reliability.

EU locations · EU ownership · Security certifications · Renewable energy


In-house AI R&D

How Verda's AI engineers and GPU infrastructure power frontier AI research
  • 1X World Model Verda collaborates with 1X on building multi-GPU inference for the 1XWM generative video model. Because video quality drives task success, it enables solving more complex tasks in household autonomy.
  • The SGLang Project Verda sponsors SGLang and its collaborators with access to compute resources and infrastructure support. SGLang's recent explorations into RL training with FP8 and INT4 utilize NVIDIA Hopper, Blackwell, and Blackwell Ultra platforms.
  • 1X World Model Challenge Consisting of Verda's in-house engineers, the Revontuli team won both tracks: sampling and compression. For this challenge, the team utilized Verda's Instant Clusters with Blackwell GPUs and InfiniBand interconnect.