NVIDIA® DGX H200
The gold standard for AI infrastructure
The NVIDIA DGX™ H200 is the proven choice for building enterprise AI factories and centers of excellence. As the foundation of the NVIDIA DGX SuperPOD and DGX BasePOD architectures, DGX H200 delivers 32 petaFLOPS of FP8 AI performance from eight NVIDIA H200 Tensor Core GPUs with 1,128GB of total GPU memory. With 18x NVLink connections per GPU, four NVSwitches providing 7.2TB/s of bidirectional GPU-to-GPU bandwidth, and 10x ConnectX-7 400Gb/s network interfaces, DGX H200 is engineered for the most demanding generative AI, natural language processing, and deep learning workloads. The fully integrated solution includes NVIDIA AI Enterprise software, Base Command orchestration, and expert support from NVIDIA DGXperts, with flexible deployment options spanning on-premises, colocation, and managed-service configurations.
Key Features:
- 8x NVIDIA H200 Tensor Core GPUs with 1,128GB total memory
- 32 petaFLOPS FP8 AI performance
- 7.2TB/s GPU-to-GPU bandwidth via 4x NVSwitches
- 10x ConnectX-7 400Gb/s network interfaces
- Complete NVIDIA AI Enterprise software suite and Base Command
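The headline aggregates above follow directly from the per-GPU H200 figures. A minimal arithmetic sanity check, assuming per-GPU values of 141GB HBM3e, roughly 4 petaFLOPS FP8, and 900GB/s bidirectional NVLink bandwidth (taken from the H200 SXM spec, not stated in this datasheet):

```python
# Sanity-check the DGX H200 headline aggregates from per-GPU figures.
# Per-GPU numbers below are assumptions based on the H200 SXM spec.
NUM_GPUS = 8
HBM_PER_GPU_GB = 141         # H200 HBM3e capacity per GPU
FP8_PER_GPU_PFLOPS = 4       # approximate FP8 throughput per GPU
NVLINK_PER_GPU_GBPS = 900    # bidirectional NVLink bandwidth per GPU

total_memory_gb = NUM_GPUS * HBM_PER_GPU_GB              # 1,128 GB
total_fp8_pflops = NUM_GPUS * FP8_PER_GPU_PFLOPS         # 32 petaFLOPS
total_nvlink_tbps = NUM_GPUS * NVLINK_PER_GPU_GBPS / 1000  # 7.2 TB/s

print(total_memory_gb, total_fp8_pflops, total_nvlink_tbps)
```

These reproduce the 1,128GB, 32 petaFLOPS, and 7.2TB/s figures quoted in the feature list.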
System Specifications:
- GPU: 8x NVIDIA H200 Tensor Core GPUs
- GPU Memory: 1,128GB total
- Performance: 32 petaFLOPS FP8
- NVIDIA NVSwitch: 4x
- System Power Usage: ~10.2kW max
- CPU: Dual Intel® Xeon® Platinum 8480C Processors; 112 cores total, 2.00GHz (base), 3.80GHz (max boost)
- System Memory: 2TB
- Networking: 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 VPI; up to 400Gb/s NVIDIA InfiniBand/Ethernet
- Management Network: 10Gb/s onboard NIC with RJ45; optional 100Gb/s Ethernet NIC; host baseboard management controller (BMC) with RJ45
- Storage: 2x 1.9TB NVMe M.2 (OS); 8x 3.84TB NVMe U.2 (internal storage)
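For capacity planning, the storage and compute-fabric totals can be derived from the spec table. A small sketch of that arithmetic (raw decimal capacities, before any RAID or filesystem overhead, which is an assumption about how these drives are typically provisioned):

```python
# Aggregate raw capacities and fabric bandwidth from the spec table.
DATA_DRIVES, DATA_DRIVE_TB = 8, 3.84   # 8x 3.84TB NVMe U.2 (internal storage)
OS_DRIVES, OS_DRIVE_TB = 2, 1.9        # 2x 1.9TB NVMe M.2 (OS)
COMPUTE_NICS, NIC_GBPS = 8, 400        # 8x ConnectX-7 at 400Gb/s each

raw_data_tb = DATA_DRIVES * DATA_DRIVE_TB      # 30.72 TB internal storage
raw_os_tb = OS_DRIVES * OS_DRIVE_TB            # 3.8 TB for the OS volume
fabric_tbps = COMPUTE_NICS * NIC_GBPS / 1000   # 3.2 Tb/s compute fabric

print(raw_data_tb, raw_os_tb, fabric_tbps)
```

The 3.2Tb/s figure covers only the eight compute-fabric ConnectX-7 interfaces; the remaining interfaces serve storage and in-band management traffic.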
