NVIDIA DGX H100
The World’s Proven Choice for Enterprise AI
NVIDIA DGX H100 powers business innovation and optimisation. The latest iteration of NVIDIA’s legendary DGX systems and the foundation of NVIDIA DGX SuperPOD™, DGX H100 is an AI powerhouse featuring the groundbreaking NVIDIA H100 Tensor Core GPU. The system is designed to maximise AI throughput, providing enterprises with a highly refined, systemised, and scalable platform to help them achieve breakthroughs in natural language processing, recommender systems, data analytics, and much more.
Download the PDF datasheet for the DGX H100.
Looking for more details? Check out our blog post for a deep dive into the revolutionary DGX H100.
GPUs
8x NVIDIA H100 Tensor Core GPUs
GPU Memory
640GB total
Performance
32 petaFLOPS FP8
NVIDIA NVSwitch™
4x
System Power Usage
~10.2kW max
CPU
Dual x86
System Memory
2TB
Networking
- 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7
  - 400Gb/s InfiniBand/Ethernet
- 2x dual-port NVIDIA BlueField-3 DPUs VPI
  - 1x 400Gb/s InfiniBand/Ethernet
  - 1x 200Gb/s InfiniBand/Ethernet
Management Network
- 10Gb/s onboard NIC with RJ45
- Optional 50Gb/s Ethernet NIC
- Host baseboard management controller (BMC) with RJ45
- 2x NVIDIA BlueField-3 DPU BMC (with RJ45 each)
Storage
- OS: 2x 1.9TB NVMe M.2
- Internal Storage: 8x 3.84TB NVMe U.2
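
The system-level figures above decompose cleanly per GPU. The back-of-envelope sketch below assumes the published per-GPU H100 SXM numbers (80GB of HBM3 and roughly 4 petaFLOPS of FP8 Tensor Core performance with sparsity), which are not stated in this datasheet:

```python
# Back-of-envelope check of the system-level figures quoted above.
# Per-GPU values are assumptions based on published H100 SXM specs,
# not taken from this datasheet.
num_gpus = 8
hbm_per_gpu_gb = 80            # assumed H100 SXM HBM3 capacity
fp8_pflops_per_gpu = 4         # assumed FP8 Tensor Core peak (with sparsity), approx.

print(num_gpus * hbm_per_gpu_gb)       # 640 -> matches "640GB total"
print(num_gpus * fp8_pflops_per_gpu)   # 32  -> matches "32 petaFLOPS FP8"
```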
System Software
- DGX H100 systems come preinstalled with DGX OS, which is based on Ubuntu Linux and includes the DGX software stack (all necessary packages and drivers optimised for DGX).
- Optionally, customers can install Ubuntu Linux or Red Hat Enterprise Linux and the required DGX software stack separately.
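
As a quick post-install sanity check, the snippet below is a minimal sketch (not part of the official documentation) that uses NVIDIA’s NVML Python bindings to confirm the installed driver exposes all eight H100 GPUs and the expected total GPU memory. It assumes the nvidia-ml-py package is available in the environment.

```python
# Minimal sketch: enumerate GPUs via NVML to verify the installed driver
# sees the full DGX H100 configuration. Assumes `pip install nvidia-ml-py`.
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    print(f"GPUs visible to the driver: {count}")       # expect 8 on DGX H100

    total_bytes = 0
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)         # e.g. an H100 SXM variant
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        total_bytes += mem.total
        print(f"GPU {i}: {name}, {mem.total / 1e9:.0f} GB")

    print(f"Total GPU memory: {total_bytes / 1e9:.0f} GB")  # roughly 640 GB
finally:
    pynvml.nvmlShutdown()
```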
Operating Temperature Range
5–30°C (41–86°F)