NVIDIA A100 Tensor Core GPU PCIe
The NVIDIA® A100 Tensor Core GPU in PCIe form delivers 80 GB of memory, third-generation Tensor Cores, PCIe Gen4 bandwidth, and the ability to partition the card into up to seven GPU instances with NVIDIA's Multi-Instance GPU (MIG) feature. The NVIDIA A100 is now shipping. Contact us for more details and to build your NVIDIA A100 based solution.
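MIG mode is normally toggled and partitioned with nvidia-smi, but its state can also be checked programmatically. Below is a minimal sketch using the NVML C API (shipped with the NVIDIA driver) to report whether MIG mode is enabled on device 0; it is illustrative only and assumes the NVML header and library are available on the system, not a complete MIG provisioning workflow.

```c
#include <stdio.h>
#include <nvml.h>

int main(void) {
    /* Initialise NVML, which ships with the NVIDIA driver. */
    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "Failed to initialise NVML\n");
        return 1;
    }

    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) {
        fprintf(stderr, "No GPU found at index 0\n");
        nvmlShutdown();
        return 1;
    }

    char name[NVML_DEVICE_NAME_BUFFER_SIZE];
    nvmlDeviceGetName(dev, name, NVML_DEVICE_NAME_BUFFER_SIZE);

    /* Query current and pending MIG mode (supported on A100-class GPUs). */
    unsigned int current = 0, pending = 0;
    if (nvmlDeviceGetMigMode(dev, &current, &pending) == NVML_SUCCESS) {
        printf("%s: MIG mode %s (pending: %s)\n", name,
               current == NVML_DEVICE_MIG_ENABLE ? "enabled" : "disabled",
               pending == NVML_DEVICE_MIG_ENABLE ? "enabled" : "disabled");
    } else {
        printf("%s: MIG not supported by this device or driver\n", name);
    }

    nvmlShutdown();
    return 0;
}
```

Build with something like `gcc mig_check.c -lnvidia-ml` (file name is hypothetical). Enabling MIG and carving out the GPU instances themselves is still done through nvidia-smi.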
XENON also offers a range of customisable server builds designed for the NVIDIA A100, supporting from 1 to 10 GPUs: the new XENON RADON Intel servers and the new XENON KRYPTON AMD servers.
Notes on the performance figures below:
* With structural sparsity enabled.
** SXM GPUs via NVIDIA HGX™ A100 server boards; PCIe GPUs via NVLink® Bridge for up to 2 GPUs.

GPU Architecture
NVIDIA Ampere
Double-Precision Performance
- FP64: 9.7 TFLOPS
- FP64 Tensor Core: 19.5 TFLOPS
Single-Precision Performance
- FP32: 19.5 TFLOPS
- Tensor Float 32 (TF32): 156 TFLOPS | 312 TFLOPS* (see the TF32 sketch after this table)
Half-Precision Performance
- FP16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
- BFLOAT16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
Integer Performance
- INT8: 624 TOPS | 1,248 TOPS*
- INT4: 1,248 TOPS | 2,496 TOPS*
GPU Memory
80 GB HBM2e
Memory Bandwidth
1,935 GB/sec
Error-Correcting Code
Yes
Interconnect Interface
- PCIe Gen4: 64 GB/sec
- NVIDIA NVLink®: 600 GB/sec**
Form Factor
PCIe
Multi-Instance GPU (MIG)
Up to 7 GPU instances
Max Power Consumption (Thermal Design Power, or TDP)
300 W
Delivered Performance for Top Apps (relative to A100 SXM)
90%
Thermal Solution
Passive
Compute APIs
CUDA®, DirectCompute, OpenCL™, OpenACC®
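As a quick sanity check after installation, the headline figures in the table above (Ampere compute capability 8.0, 80 GB of memory, and roughly 1.9 TB/sec of memory bandwidth) can be read back through the CUDA runtime. This is a minimal sketch, assuming a CUDA 11+ toolkit and that the A100 is device 0:

```c
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);  /* query device 0 */
    if (err != cudaSuccess) {
        fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err));
        return 1;
    }

    /* An A100 reports compute capability 8.0 (Ampere). */
    printf("Device:             %s\n", prop.name);
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    printf("Global memory:      %.1f GB\n", prop.totalGlobalMem / 1e9);

    /* Theoretical peak bandwidth = 2 (double data rate) x memory clock x bus width. */
    double bw = 2.0 * prop.memoryClockRate * 1e3   /* kHz -> Hz   */
                    * (prop.memoryBusWidth / 8.0)  /* bits -> bytes */
                    / 1e9;                         /* B/s -> GB/s  */
    printf("Peak bandwidth:     %.0f GB/s\n", bw);
    return 0;
}
```

Compile with `nvcc query_a100.cu -o query_a100` (file name is hypothetical); on an A100 80GB PCIe card the reported numbers should line up with the specifications above.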
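The TF32 figure in the table applies when FP32 matrix maths is routed through the Tensor Cores. On CUDA 11 and later, cuBLAS can be opted in per handle; the sketch below shows the idea for a single SGEMM. The matrix size is a placeholder and the device buffers are left uninitialised, so this is only an illustration of the math-mode setting, not a benchmark.

```c
#include <stdio.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void) {
    const int n = 4096;  /* placeholder problem size */
    float *dA, *dB, *dC;
    cudaMalloc(&dA, sizeof(float) * n * n);
    cudaMalloc(&dB, sizeof(float) * n * n);
    cudaMalloc(&dC, sizeof(float) * n * n);  /* contents uninitialised: illustration only */

    cublasHandle_t handle;
    cublasCreate(&handle);

    /* Route FP32 GEMMs through the TF32 Tensor Core path (CUDA 11+). */
    cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);

    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);
    cudaDeviceSynchronize();

    printf("TF32 SGEMM issued: %d x %d x %d\n", n, n, n);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

Link against cuBLAS, e.g. `nvcc tf32_gemm.cu -lcublas` (file name is hypothetical).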