NVIDIA® B300 SXM6
GPU Instances and Clusters
Early and instant access to Blackwell GPUs starting at $1.24/h*
B300 SXM6 with Verda
Where flexibility meets performance and simplicity
- GPU Instances
- Instant Clusters
- Bare-metal Clusters
B300 SXM6 Pricing
The fastest access to B300 SXM6 GPUs with reliable service and expert support
- $4.95/h On-demand
- $1.24/h Spot instance
B300 SXM6 Specs
Designed for the most demanding AI and HPC workloads
- +55.6% faster dense FP4 performance (14 vs 9 PFLOPS)
- +55.6% more GPU memory for larger models and batches
NVIDIA B300 virtual machines
Built on NVIDIA Blackwell Ultra with 5th Gen AMD EPYC Turin processors and NVLink v5.
NVIDIA’s latest hardware, designed to further accelerate LLM and MoE inference compared to its predecessor.
| GPU model | Instance name | vCPUs | RAM (GB) | VRAM (GB) | Pay As You Go price |
|---|---|---|---|---|---|
| 8x B300 SXM6 | 8B300.240V | 240 | 2200 | 2100 | $39.60/h |
| 4x B300 SXM6 | 4B300.120V | 120 | 1100 | 1050 | $19.80/h |
| 2x B300 SXM6 | 2B300.60V | 60 | 550 | 525 | $9.90/h |
| 1x B300 SXM6 | 1B300.30V | 30 | 275 | 262 | $4.95/h |
Pricing per GPU
- $4.95/h Pay As You Go
- $1.24/h Spot
- $4.85/h (-2%) with a 1-month commitment
- $4.55/h (-8%) with a 1-year commitment
- $3.71/h (-25%) with a 2-year commitment
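For reference, here is a minimal sketch of how the per-GPU rates above translate into instance pricing. The rates are taken from the table and tiers on this page; the tier names and helper function are illustrative only, not a Verda API.

```python
# Illustrative B300 SXM6 cost calculator using the per-GPU rates listed above.
# The tier names and helper function are examples, not an official Verda API.

PER_GPU_HOURLY = {
    "pay_as_you_go": 4.95,  # on-demand
    "spot": 1.24,
    "1_month": 4.85,        # -2% commitment
    "1_year": 4.55,         # -8% commitment
    "2_years": 3.71,        # -25% commitment
}

def hourly_cost(gpu_count: int, tier: str = "pay_as_you_go") -> float:
    """Instance price scales linearly with GPU count (1, 2, 4, or 8 GPUs)."""
    return gpu_count * PER_GPU_HOURLY[tier]

# An 8x B300 SXM6 instance (8B300.240V) on demand: 8 * 4.95 = 39.60 $/h,
# matching the Pay As You Go price in the table above.
print(f"${hourly_cost(8):.2f}/h")              # $39.60/h
print(f"${hourly_cost(8, '2_years'):.2f}/h")   # $29.68/h with a 2-year commitment
```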
Verda instances
Where speed meets simplicity in GPU solutions
Customer feedback
What they say about us...
- "Having direct contact between our engineering teams enables us to move incredibly fast. Being able to deploy any model at scale is exactly what we need in this fast-moving industry. Verda enables us to deploy custom models quickly and effortlessly." Iván de Prado, Head of AI at Freepik
- "From deployment to training, our entire language model journey was powered by Verda's clusters. Their high-performance servers and storage solutions allowed us to run smooth operations with maximum uptime and to focus on achieving exceptional results without worrying about hardware issues." José Pombal, AI Research Scientist at Unbabel
- "Verda powers our entire monitoring and security infrastructure with exceptional reliability. We also enforce firewall restrictions to protect against unauthorized access. Thanks to Verda, our training clusters run smoothly and securely." Nicola Sosio, ML Engineer at Prem AI
- "We needed production-grade reliability with pricing that made sense for a startup. Verda hit that sweet spot." Lars Vagnes, Founder & CEO