NVIDIA DGX A100
The NVIDIA DGX™ A100 is the third-generation DGX™ system from NVIDIA® and represents a major leap forward: its new A100 Tensor Core GPUs make it capable of 5 petaFLOPS of AI performance.
The DGX™ A100 includes 8 x A100 Tensor Core GPU cards and 6 x NVLink switches interconnecting these 8 GPUs at 4.8TB/s.
The DGX™ A100 can be configured with two memory options. The base model has 40GB per GPU (320GB total), 15TB of internal NVMe storage, and 1TB of system memory. The larger memory option takes each GPU to 80GB (640GB total GPU memory) and doubles both the storage, to 30TB of NVMe, and the system memory, to 2TB.
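The arithmetic behind the two configurations can be sketched as follows. All figures come from this page; the dictionary layout and the "base"/"large" labels are illustrative, not NVIDIA's terminology.

```python
# Quick arithmetic check of the two DGX A100 configurations described above.
NUM_GPUS = 8  # every DGX A100 ships with 8 A100 GPUs

configs = {
    "base":  {"gpu_mem_gb": 40, "nvme_tb": 15, "system_mem_tb": 1},
    "large": {"gpu_mem_gb": 80, "nvme_tb": 30, "system_mem_tb": 2},
}

for name, c in configs.items():
    total_gpu_mem = NUM_GPUS * c["gpu_mem_gb"]  # per-GPU memory x GPU count
    print(f"{name}: {total_gpu_mem}GB GPU memory, "
          f"{c['nvme_tb']}TB NVMe, {c['system_mem_tb']}TB system memory")
# base: 320GB GPU memory, 15TB NVMe, 1TB system memory
# large: 640GB GPU memory, 30TB NVMe, 2TB system memory
```

As the output shows, the larger option exactly doubles every headline figure of the base model.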
The result of all this compute and GPU power is a unified infrastructure for all your AI workloads. The DGX™ A100 handles analytics, training, and inference, and can replace racks of CPU-based systems previously used for those workloads, combining all of their capabilities into one flexible, powerful system.
Looking to combine multiple DGX™ A100 systems into a POD or SuperPOD architecture? NVIDIA® has done the hard work for you. Leverage proven reference architectures to stand up multiple DGX™ A100 units and save yourself months of design time. Learn more about the POD / SuperPOD architecture.
GPUs
8 x NVIDIA A100 Tensor Core GPUs, either 40GB or 80GB memory each
GPU Memory
320GB or 640GB total
Performance
5 petaFLOPS AI, 10 petaOPS INT8
CPU
Dual AMD Rome 7742, 128 cores total, 2.25 GHz (base), 3.4 GHz (max boost)
System Memory
1TB or 2TB
Networking
- 8x Single-Port Mellanox ConnectX-6 VPI
- 200Gb/s HDR InfiniBand
- 1x or 2x Dual-Port Mellanox ConnectX-6 VPI
- 10/25/50/100/200Gb/s Ethernet
Storage
- OS: 2x 1.92TB M.2 NVMe drives
- Internal Storage: 15TB (4 x 3.84TB) U.2 NVMe, or 30TB (8 x 3.84TB) U.2 NVMe
Software
Ubuntu Linux OS
System Weight
123 kg (271 lbs)
Packaged System Weight
143 kg (315 lbs)
System Dimensions
- Height: 10.4 in (264.0 mm)
- Width: 19.0 in (482.3 mm) MAX
- Length: 35.3 in (897.1 mm) MAX
System Power Usage
6.5 kW max
Operating Temperature Range
5ºC to 30ºC (41ºF to 86ºF)