
NVIDIA B300 vs. B200: Complete GPU comparison to date

DataCrunch Content Team

Updates:

  • 2025-11-10: B300 SXM6 262GB GPUs are now available for self-service deployment via our Cloud Platform.
  • 2026-02-15: Updated the values of GPU Memory and GPU Memory Bandwidth for B300.

In this blog, we give a complete comparison of the NVIDIA® B300 and B200 based on the information available to date. We will keep updating this post as new information and benchmarks become available.

The NVIDIA B200 and B300 are both part of the Blackwell architecture family. The B200 is the base model, while the B300 (also known as Blackwell Ultra) is a higher-performance variant.

Performance numbers without sparsity

| Technical Specifications | B200 | B300 |
| --- | --- | --- |
| FP4 | 9 PFLOPS | 14 PFLOPS |
| FP8/FP6 | 4.5 PFLOPS | 4.5 PFLOPS |
| INT8 | 4.5 POPS | 0.15 POPS |
| FP16/BF16 | 2.25 PFLOPS | 2.25 PFLOPS |
| TF32 | 1.1 PFLOPS | 1.1 PFLOPS |
| FP32 | 0.037 PFLOPS | 0.037 PFLOPS |
| GPU Memory | 180 GB HBM3E | 270 GB HBM3E |
| GPU Memory Bandwidth | 7.7 TB/s | 7.7 TB/s |
| NVLink bandwidth per GPU | 1.8 TB/s | 1.8 TB/s |
| Max Thermal Design Power (TDP) | Up to 1,000 W | Up to 1,100 W |
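Taking the table's figures at face value, the relative uplifts can be derived with a quick sketch. This is illustrative arithmetic only, not a benchmark:

```python
# Illustrative arithmetic from the spec table above (dense, no sparsity).
b200 = {"fp4_pflops": 9.0, "memory_gb": 180, "tdp_w": 1000}
b300 = {"fp4_pflops": 14.0, "memory_gb": 270, "tdp_w": 1100}

# Percentage uplift of B300 over B200 for each metric.
fp4_uplift = (b300["fp4_pflops"] / b200["fp4_pflops"] - 1) * 100
mem_uplift = (b300["memory_gb"] / b200["memory_gb"] - 1) * 100
tdp_uplift = (b300["tdp_w"] / b200["tdp_w"] - 1) * 100

print(f"FP4:    +{fp4_uplift:.1f}%")  # +55.6%
print(f"Memory: +{mem_uplift:.1f}%")  # +50.0%
print(f"TDP:    +{tdp_uplift:.1f}%")  # +10.0%
```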

Key differences

  • Compute performance: B300 dense FP4 throughput is about 55.6% higher than B200 (14 vs. 9 PFLOPS), due to higher clock speeds, optimized tensor cores, and the additional TDP headroom. FP64 throughput, by contrast, is essentially removed (about 1.25 TFLOPS on B300 vs. 37 TFLOPS on B200).
  • Memory and Bandwidth: B300 has 50% more GPU memory (270 GB HBM3E vs. 180 GB) for larger models and batches, with equal memory bandwidth.
  • Interconnect and Power: NVLink bandwidth is the same, but B300 supports higher TDP (up to 1,100W vs. 1,000W), enabling the performance uplift but requiring better cooling (liquid-cooled systems are generally recommended for DGX/HGX B300).
  • Compute capability: B300 reports compute capability SM103, versus SM100 for the B200.
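To make the memory difference concrete, here is a back-of-envelope sketch of the largest parameter count whose weights alone would fit on a single GPU at a given precision. The bytes-per-parameter values are standard for these formats, but the sketch ignores KV cache, activations, and framework overhead, so real limits are lower:

```python
# Back-of-envelope: how many parameters fit in GPU memory, weights only.
# Ignores KV cache, activations, and runtime overhead (illustrative only).
BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

def max_params_billions(memory_gb: float, precision: str) -> float:
    """Largest parameter count (in billions) whose weights fit in memory_gb."""
    return memory_gb * 1e9 / BYTES_PER_PARAM[precision] / 1e9

for mem_gb, gpu in [(180, "B200"), (270, "B300")]:
    fits = {p: f"{max_params_billions(mem_gb, p):.0f}B" for p in BYTES_PER_PARAM}
    print(gpu, fits)
# B200 {'fp16': '90B', 'fp8': '180B', 'fp4': '360B'}
# B300 {'fp16': '135B', 'fp8': '270B', 'fp4': '540B'}
```

At FP4, the B300's 270 GB would hold roughly 540B parameters of weights versus 360B on the B200, which is where the extra capacity matters for large models and bigger batches.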

Deploy B300 & B200

Verda is among the first to offer HGX B300 and B200 servers. Both are available for deployment without quotas or approvals on our Cloud Platform.

In addition, B200s are available as Instant Clusters with InfiniBand™ interconnect, enabling rapid, flexible provisioning of 16x-128x GPUs with self-service, pay-as-you-go access. Instant B200 Clusters received a Bronze rating in the recent ClusterMAX v2 evaluation from SemiAnalysis.

As mentioned, we will update this blog with additional performance benchmarks following the deployment of B300s. To get notified about the updates, please subscribe to our newsletter.

