NVIDIA DGX B200
A unified AI platform for training, fine-tuning, and inference.
NVIDIA DGX™ B200 is a unified AI platform for develop-to-deploy pipelines, serving businesses of any size at any stage of their AI journey. Equipped with eight NVIDIA Blackwell GPUs interconnected with fifth-generation NVIDIA® NVLink®, DGX B200 delivers leading-edge performance, offering 3X the training performance and 15X the inference performance of previous generations. Leveraging the NVIDIA Blackwell GPU architecture, DGX B200 can handle diverse workloads, including large language models, recommender systems, and chatbots, making it ideal for businesses looking to accelerate their AI transformation.
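The sketch below is not part of this datasheet; it is a minimal way to confirm the eight-GPU, NVLink-connected topology described above on a running DGX system, assuming DGX OS with the NVIDIA driver installed and nvidia-smi on PATH.

```python
# Minimal sketch (illustrative assumption, not from this datasheet):
# enumerate the GPUs and show the GPU-to-GPU interconnect on a DGX system.
import subprocess

# List the GPUs the driver can see (expected: 8 entries on DGX B200).
print(subprocess.run(["nvidia-smi", "-L"],
                     capture_output=True, text=True).stdout)

# Print the interconnect matrix; NVLink/NVSwitch links show up as NV* entries.
print(subprocess.run(["nvidia-smi", "topo", "-m"],
                     capture_output=True, text=True).stdout)
```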
GPU
8x NVIDIA B200 Tensor Core GPUs
GPU Memory
1,440GB total
Performance
72 petaFLOPS training and 144 petaFLOPS inference
NVIDIA NVSwitch™
2x
System Power Usage
~14.3kW max
CPU
2x Intel® Xeon® Platinum 8570 Processors
- 112 cores total, 2.1 GHz (base), 4 GHz (max boost)
System Memory
Up to 4TB
Networking
- 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 VPI
  - Up to 400Gb/s InfiniBand/Ethernet
- 2x dual-port QSFP112 NVIDIA BlueField-3 DPU
  - Up to 400Gb/s InfiniBand/Ethernet
Management network
- 10Gb/s onboard NIC with RJ45
- 100Gb/s dual-port Ethernet NIC
- Host baseboard management controller (BMC) with RJ45
Storage
- OS: 2x 1.9TB NVMe M.2
- Internal storage: 8x 3.84TB NVMe U.2
Software
- NVIDIA AI Enterprise – Optimized AI Software
- NVIDIA Base Command – Orchestration, Scheduling, and Cluster Management
- DGX OS / Ubuntu – Operating System
Rack Units (RU)
10 RU
System Dimensions
- Height: 17.5in (444mm)
- Width: 19.0in (482.2mm)
- Length: 35.3in (897.1mm)
Operating Temperature Range
5–30°C (41–86°F)
Enterprise Support
- Three-year Enterprise Business-Standard Support for hardware and software
- 24/7 Enterprise Support portal access
- Live agent support during local business hours
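As a rough consistency check, the system totals in the specifications above decompose across the eight GPUs and two CPUs as in the sketch below. The per-GPU and per-CPU figures are derived by simple division and are illustrative assumptions, not values quoted in this datasheet.

```python
# Back-of-envelope decomposition of the DGX B200 system totals listed above.
# Per-GPU and per-CPU values are derived by division and are assumptions
# for illustration only, not datasheet figures.

NUM_GPUS = 8
NUM_CPUS = 2

system_totals = {
    "gpu_memory_gb": 1_440,            # total GPU memory
    "training_pflops": 72,             # system training performance
    "inference_pflops": 144,           # system inference performance
    "fabric_gbps": 8 * 400,            # 8x ConnectX-7 at up to 400Gb/s each
    "internal_storage_tb": 8 * 3.84,   # 8x 3.84TB NVMe U.2 drives
    "cpu_cores": 112,                  # 2x Xeon Platinum 8570
}

per_gpu_memory_gb = system_totals["gpu_memory_gb"] / NUM_GPUS        # 180.0
per_gpu_training_pflops = system_totals["training_pflops"] / NUM_GPUS    # 9.0
per_gpu_inference_pflops = system_totals["inference_pflops"] / NUM_GPUS  # 18.0
cores_per_cpu = system_totals["cpu_cores"] // NUM_CPUS               # 56

print(f"GPU memory per GPU:       {per_gpu_memory_gb:.0f} GB")
print(f"Training per GPU:         {per_gpu_training_pflops:.0f} petaFLOPS")
print(f"Inference per GPU:        {per_gpu_inference_pflops:.0f} petaFLOPS")
print(f"Compute fabric aggregate: {system_totals['fabric_gbps']} Gb/s")
print(f"Internal storage total:   {system_totals['internal_storage_tb']:.2f} TB")
print(f"Cores per CPU:            {cores_per_cpu}")
```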