NVIDIA Data Centre GPUs
NVIDIA® Data Centre GPUs bring the latest in parallel GPU processing to a wide range of applications – from data science and research to artificial intelligence, machine learning and more. XENON can design a server with the right power, cooling and memory to drive single or multiple GPUs. XENON also builds workstation solutions with these GPUs – unleashing the power of GPU computing in a desktop form factor, at home at ambient room temperatures and with standard power supplies. Contact the XENON solutions team to discover which NVIDIA GPU is right for your requirements.
For a limited time, a four-hour, self-paced course – AI in the Data Centre – is available for up to three team members with each NVIDIA Data Centre GPU purchased. Spaces are limited. Contact us to learn more.
The new NVIDIA A100 Tensor Core GPU in a PCIe form factor. Third-generation Tensor Cores deliver up to a 20X performance increase. Multi-Instance GPU (MIG) capable, the A100 can be partitioned into up to seven GPU instances, individually or in combination, to provide maximum flexibility for data analytics, training and inference.
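MIG partitioning can be reasoned about as a budget of seven compute slices that each GPU instance consumes a share of. The sketch below models this in Python; the profile names and slice counts follow NVIDIA's published MIG profile table for the 40GB A100 (an assumption not stated in this page), and validity is simplified to the slice budget, ignoring the placement rules real MIG also enforces.

```python
# Simplified model of A100 MIG partitioning: the GPU exposes seven
# compute slices, and each instance profile consumes a fixed number.
# Profile names follow NVIDIA's <slices>g.<memory>gb convention for
# the 40GB A100 (assumed here); real MIG adds placement constraints
# that this sketch does not model.
MIG_PROFILES = {
    "1g.5gb": 1,
    "2g.10gb": 2,
    "3g.20gb": 3,
    "4g.20gb": 4,
    "7g.40gb": 7,
}

def fits_on_a100(requested: list[str]) -> bool:
    """Check whether a requested mix of MIG profiles fits within
    the A100's seven compute slices."""
    return sum(MIG_PROFILES[p] for p in requested) <= 7
```

For example, seven `1g.5gb` instances fit exactly, while two `4g.20gb` instances would need eight slices and do not.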
Versatile compute acceleration for mainstream enterprise servers.
- FP64-capable NVIDIA Ampere architecture Tensor Cores
- Up to 3X higher throughput than the V100 and 6X higher than the T4
- 24 gigabytes (GB) of GPU memory
- GPU memory bandwidth of 933 gigabytes per second (GB/s)
The new NVIDIA A40 Tensor Core GPU in a PCIe form factor. Virtualisation ready, it delivers flexibility and agility along with the lightning-fast performance of the NVIDIA Ampere architecture. An all-new design optimises Tensor Cores, memory and PCIe Gen 4. The world's most powerful data centre GPU for visual computing, VR, AI and HPC workloads.
- GPU Memory: 48GB
- GPU Memory Bandwidth: 696 GB/s
- vGPU capable with multiple config options
- PCIe Generation 4 form factor
- 300W power draw, passive cooling
Accelerated graphics and video with AI for mainstream enterprise servers.
- Ultra-fast GDDR6 memory, delivering 600 GB/s of bandwidth
- 24GB GDDR6 GPU memory
- Compact, single-slot, 150W GPU
- Tensor Float 32 (TF32) precision provides up to 5X the training throughput of the previous generation
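TF32 gets its speed by keeping FP32's 8-bit exponent range while reducing the mantissa to 10 explicit bits. A minimal pure-Python sketch of that reduced precision is shown below; it simply zeroes the 13 lowest mantissa bits of a float32 value (round-toward-zero truncation is assumed here for illustration, whereas the hardware rounds to nearest).

```python
import struct

def tf32_truncate(x: float) -> float:
    """Reduce a value to TF32-like precision: keep the float32
    sign and 8-bit exponent, but only 10 explicit mantissa bits,
    by zeroing the 13 lowest mantissa bits. Truncation is used
    for simplicity; real Tensor Cores round to nearest."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= ~((1 << 13) - 1)  # drop the 13 lowest mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]
```

For example, `1 + 2**-10` survives the conversion (it fits in 10 mantissa bits) while `1 + 2**-11` collapses to `1.0`, which is why TF32 trades a little precision for much higher matrix-math throughput.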
Powered by NVIDIA Turing™ architecture and purpose-built to boost efficiency for scale-out servers running deep learning workloads.
Unlock an unprecedented VDI user experience.
- 4x 16GB GDDR6 with error-correcting code (ECC)
- GPU memory bandwidth of 4x 232 GB/s
- More than 2X the encoder throughput
- Supports multiple, high-resolution monitors (up to two 4K or a single 5K)
The world’s first GPU to break the 100 teraFLOPS (TFLOPS) barrier of deep learning performance.
- 640 NVIDIA Tensor Cores
- 5,120 NVIDIA CUDA® Cores
- Connects multiple V100 GPUs at up to 300 GB/s to create the world’s most powerful computing servers
- Delivers 47X higher inference performance than a CPU server