NVIDIA® DGX GB200

Enterprise Infrastructure for Mission-Critical AI

NVIDIA DGX™ GB200 is purpose-built for training and inference on trillion-parameter generative AI models. Designed as a rack-scale solution, each liquid-cooled rack features 36 NVIDIA GB200 Grace Blackwell Superchips (36 NVIDIA Grace CPUs and 72 Blackwell GPUs) connected as one with NVIDIA NVLink™. Multiple racks can be connected with NVIDIA Quantum InfiniBand to scale up to hundreds of thousands of GB200 Superchips.

Download the NVIDIA DGX GB200 Data sheet »

XENON NVIDIA DGX GB200 Specifications

GPU: 72x NVIDIA Blackwell GPUs, 36x NVIDIA Grace CPUs

CPU Cores: 2,592 Arm® Neoverse V2 cores

GPU Memory | Bandwidth: Up to 13.4 TB HBM3e | 576 TB/s

Total Fast Memory: 30.2 TB

Performance:
  • FP4 Tensor Core: 1,440 PFLOPS | 720 PFLOPS*
  • FP8/FP6 Tensor Core: 720 PFLOPS | 360 PFLOPS*

Interconnect:
  • 72x OSFP single-port NVIDIA ConnectX®-7 VPI with 400 Gb/s NVIDIA InfiniBand
  • 36x dual-port NVIDIA BlueField®-3 VPI with 200 Gb/s NVIDIA InfiniBand and Ethernet

NVIDIA NVLink Switch System: 9x L1 NVIDIA NVLink Switches

Management Network: Host baseboard management controller (BMC) with RJ45

Software:
  • NVIDIA Mission Control
  • NVIDIA AI Enterprise
  • NVIDIA DGX OS / Ubuntu Enterprise

Support: Three-year business-standard hardware and software support
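The rack-level totals above follow from the per-Superchip layout (one Grace CPU and two Blackwell GPUs per GB200 Superchip, 72 Neoverse V2 cores per Grace CPU) and can be cross-checked with simple arithmetic. Note the per-GPU figures at the end are estimates derived by division, not published specifications:

```python
# Sanity-check the DGX GB200 rack totals from the specification table.
SUPERCHIPS = 36            # GB200 Grace Blackwell Superchips per rack
CPUS_PER_SUPERCHIP = 1     # one Grace CPU per Superchip
GPUS_PER_SUPERCHIP = 2     # two Blackwell GPUs per Superchip
CORES_PER_GRACE = 72       # Arm Neoverse V2 cores per Grace CPU

gpus = SUPERCHIPS * GPUS_PER_SUPERCHIP
cpus = SUPERCHIPS * CPUS_PER_SUPERCHIP
cores = cpus * CORES_PER_GRACE

print(gpus)    # 72 Blackwell GPUs
print(cpus)    # 36 Grace CPUs
print(cores)   # 2592 Neoverse V2 cores

# Derived per-GPU figures (estimates by division, not published specs):
hbm_total_tb = 13.4        # rack-level HBM3e capacity
fp4_total_pflops = 1440    # rack-level FP4 Tensor Core performance
print(round(hbm_total_tb / gpus * 1024))   # ~191 GB HBM3e per GPU
print(fp4_total_pflops / gpus)             # 20.0 PFLOPS FP4 per GPU
```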

Quick Quote Request


Get a Quote | Talk to a Solutions Architect