NVIDIA® DGX GB200
Enterprise Infrastructure for Mission-Critical AI
NVIDIA DGX™ GB200 is purpose-built for training and inference on trillion-parameter generative AI models. Designed as a rack-scale solution, each liquid-cooled rack features 36 NVIDIA GB200 Grace Blackwell Superchips (36 NVIDIA Grace CPUs and 72 Blackwell GPUs) connected as one with NVIDIA NVLink™. Multiple racks can be connected with NVIDIA Quantum InfiniBand to scale up to hundreds of thousands of GB200 Superchips.
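The rack-level totals in the specifications below compose directly from the per-superchip configuration. A minimal sanity-check sketch (constants taken from this datasheet; the per-CPU core count is derived from the 2,592-core total, not stated explicitly):

```python
# Per-superchip configuration from the datasheet.
SUPERCHIPS_PER_RACK = 36
GPUS_PER_SUPERCHIP = 2   # Blackwell GPUs per GB200 Superchip
CPUS_PER_SUPERCHIP = 1   # Grace CPU per GB200 Superchip

# Derived: 2,592 total cores / 36 Grace CPUs = 72 Neoverse V2 cores per CPU.
CORES_PER_GRACE_CPU = 72

gpus_per_rack = SUPERCHIPS_PER_RACK * GPUS_PER_SUPERCHIP   # 72 GPUs
cpus_per_rack = SUPERCHIPS_PER_RACK * CPUS_PER_SUPERCHIP   # 36 CPUs
cores_per_rack = cpus_per_rack * CORES_PER_GRACE_CPU       # 2,592 cores

print(gpus_per_rack, cpus_per_rack, cores_per_rack)  # 72 36 2592
```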
GPU
72x NVIDIA Blackwell GPUs, 36x NVIDIA Grace CPUs
CPU Cores
2,592 Arm® Neoverse V2 cores
GPU Memory | Bandwidth
Up to 13.4 TB HBM3e | 576 TB/s
Total Fast Memory
30.2 TB
Performance
- FP4 Tensor Core: 1,440 PFLOPS | 720 PFLOPS*
- FP8/FP6 Tensor Core: 720 PFLOPS | 360 PFLOPS*
* Without sparsity
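The paired figures above are rack-level peaks; dividing by the 72 GPUs gives per-GPU throughput. A hedged sketch, assuming the larger figure in each pair includes 2:1 structured sparsity:

```python
GPUS_PER_RACK = 72

# Rack-level peak Tensor Core throughput from the datasheet (PFLOPS).
FP4_SPARSE, FP4_DENSE = 1440, 720   # assumed: larger figure includes 2:1 sparsity
FP8_SPARSE, FP8_DENSE = 720, 360

fp4_dense_per_gpu = FP4_DENSE / GPUS_PER_RACK   # 10 PFLOPS per GPU, dense FP4
fp8_dense_per_gpu = FP8_DENSE / GPUS_PER_RACK   # 5 PFLOPS per GPU, dense FP8
print(fp4_dense_per_gpu, fp8_dense_per_gpu)  # 10.0 5.0
```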
Interconnect
- 72x OSFP single-port NVIDIA ConnectX®-7 VPI with 400 Gb/s NVIDIA InfiniBand
- 36x dual-port NVIDIA BlueField®-3 VPI with 200 Gb/s NVIDIA InfiniBand and Ethernet
NVIDIA NVLink Switch System
9x L1 NVIDIA NVLink Switches
Management Network
Host baseboard management controller (BMC) with RJ45
Software
- NVIDIA Mission Control
- NVIDIA AI Enterprise
- NVIDIA DGX OS / Ubuntu
Support
Three-year business-standard hardware and software support
