Instant Clusters

Instant GPU Clusters

Immediate, self-service access to multi-node clusters for large-scale AI training

B200 SXM6

H200 SXM5

Deploy now
Customers and partners who trust Verda
  • Freepik
  • Black Forest
  • 1X
  • SGLang
  • Prime Intellect
  • WaveSpeed
  • Sony
  • NEC
  • Harvard University
  • MIT
  • Korea University
  • Maritaca
  • Siili
  • Nex
  • Findable
  • Shadeform
  • TensorPool
  • Dstack
  • Simli
  • Happy Whale
Scale up for large AI workloads with unmatched speed and flexibility

Instant Clusters

  • Rapid provisioning

    Access multi-node GPU clusters in minutes instead of days or weeks
  • Short-term contracts

    Scale your capacity for as little as 1 day without long-term commitments
  • Self-serve access

    Deploy clusters via the Cloud Dashboard without talking to sales
  • Peak performance

    Negligible virtualization overhead across compute, networking, and storage
Leverage cutting-edge compute, networking, and storage solutions

Specifications

B200 SXM6

Available now

B200 SXM6

Each 8x GPU node contains:

1440 GB GPU VRAM
240 cores AMD Turin CPU
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
5 Gbit/s Uplink
Deploy now
H200 SXM5

H200 SXM5

Each 8x GPU node contains:

1128 GB GPU VRAM
176 cores AMD Genoa CPU
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
1 Gbit/s Uplink
Deploy now
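Once a cluster is provisioned, a quick way to confirm that every GPU across the nodes can communicate over the InfiniBand fabric is a small NCCL all-reduce. The sketch below is generic PyTorch (torch.distributed), not a Verda-specific API; the node count, rendezvous endpoint, and file name are placeholders for illustration.

    # allreduce_check.py -- minimal multi-node NCCL sanity check (generic PyTorch sketch).
    # Launch the same command on every node, e.g. for two 8x GPU nodes:
    #   torchrun --nnodes=2 --nproc_per_node=8 \
    #            --rdzv_backend=c10d --rdzv_endpoint=<head-node-ip>:29500 allreduce_check.py
    import os

    import torch
    import torch.distributed as dist


    def main():
        # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Each rank contributes a 1 GiB float32 tensor; NCCL routes the reduction
        # over the fastest fabric it detects (RDMA over InfiniBand when available).
        x = torch.ones(256 * 1024 * 1024, device="cuda")
        dist.all_reduce(x)

        if dist.get_rank() == 0:
            print(f"all_reduce OK across {dist.get_world_size()} GPUs")
        dist.destroy_process_group()


    if __name__ == "__main__":
        main()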
Fast and flexible access to multi-node clusters

Pricing

Contract type: Pay as you go
B200 SXM6 $4.89 per GPU/hr
H200 SXM5 $3.39 per GPU/hr
Deploy now Or check out our docs
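As a back-of-the-envelope check using the pay-as-you-go rates listed above, the short Python sketch below (helper and variable names are made up for illustration) estimates the cost of running a small cluster for one day: a 2-node (16x GPU) B200 cluster works out to 16 × 24 × $4.89 ≈ $1,878, and the H200 equivalent to roughly $1,302.

    # Rough cost estimate from the listed pay-as-you-go rates (illustrative sketch only).
    B200_RATE = 4.89   # $ per GPU-hour
    H200_RATE = 3.39   # $ per GPU-hour
    GPUS_PER_NODE = 8


    def cluster_cost(rate_per_gpu_hr, nodes, hours):
        """Total cost of running `nodes` 8x GPU nodes for `hours` hours."""
        return rate_per_gpu_hr * GPUS_PER_NODE * nodes * hours


    # Example: a 2-node (16x GPU) cluster for one day.
    print(f"B200: ${cluster_cost(B200_RATE, nodes=2, hours=24):,.2f}")  # B200: $1,877.76
    print(f"H200: ${cluster_cost(H200_RATE, nodes=2, hours=24):,.2f}")  # H200: $1,301.76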

Secure and sustainable

Designed for ML engineers

Our clusters offer high uptime and rapid recovery, minimizing disruptions from downtime. They are hosted in carbon-neutral data centers at locations with excellent renewable energy practices, drawing on sources such as nuclear, hydro, wind, and geothermal.

Dependable performance and affordable high-throughput storage, adhering to the highest security standards.

  • High-speed network

    High-performance servers with up to 3200 Gbit/s RDMA interconnects, such as InfiniBand
  • Seamless scaling

    Expand your compute capacity for AI training at short notice and for short periods of time
  • Expert support

    Our engineers specialize in hardware configured for ML and are always available to assist
  • Secure and reliable

    Hosted in GDPR-regulated European countries, ISO 27001 certified. Historical uptime of over 99.9%
  • Cost-effective

    Secure GPU access at up to 90% lower costs than major cloud providers. Long-term plans available
  • 100% renewable energy

    Hosted in efficient Nordic data centers that utilize 100% renewable energy sources

Powering AI innovators

Customer spotlights
  • Quote

    Having direct contact between our engineering teams enables us to move incredibly fast. Being able to deploy any model at scale is exactly what we need in this fast moving industry. Verda enables us to deploy custom models quickly and effortlessly.

    Iván de Prado, Head of AI
    Logo
  • Quote

    Our entire language model journey is powered by Verda's clusters, from deployment to training. Their servers and storage deliver smooth operations and maximum uptime, so we can focus on achieving exceptional results without worrying about hardware issues.

    José Pombal, AI Research Scientist
    Logo
  • Quote

    Verda powers our entire monitoring and security infrastructure with exceptional reliability. We also enforce firewall restrictions to protect against unauthorized access to our training clusters. Thanks to Verda, our infrastructure runs smoothly and securely.

    Nicola Sosio, ML Engineer
    Logo
  • Quote

    Verda is the perfect mix of nimbleness and production-grade reliability for a low-latency service like ours. Our startup times and compute costs both dropped significantly. With Verda, we can promise our customers high uptime and competitive SLAs.

    Lars Vågnes, Founder & CEO
    Logo

Meet our team

Quote

Our infrastructure team is hands-on with everything from provisioning GPUs to writing the software behind features like instant clusters, which, fun fact, got its first customer after overtime teamwork during a sauna session.

Artem Ikonnikov, Infrastructure Team Lead
Logo