GPU Computing - NVIDIA

Data Centre to Edge Acceleration

NVIDIA® has developed a full range of GPU compute solutions that allow you to accelerate HPC and AI workloads from the data centre to the field, or edge.

Overview of the NVIDIA GPU solutions:

  • NVIDIA DGX™ Systems. Complete solutions for any artificial intelligence, machine learning, or visualisation workload. DGX Stations deliver data centre power in a workstation format, running off standard mains power at ambient air temperature in your office. The new DGX Station™ A100 provides enough processing power for a small team.
  • GPUs. The full range of NVIDIA GPUs are available from XENON, including:
    • “A” series with NVIDIA Ampere Architecture – the latest GPU architecture, released in 2020. The NVIDIA A100 Tensor Core GPU delivers a 6x advance in speed and power, along with revolutionary flexibility: a single A100 can be sliced into multiple GPU instances using Multi-Instance GPU (MIG) technology. This allows Ampere Architecture GPUs to be agile and elastic, configured for data analytics, training, and inference – the complete AI workload on a flexible, universal platform. The Ampere Architecture is available in the NVIDIA A100 Tensor Core GPU, and also in a rack-mount unit combining 8 GPUs, memory, and processing power into a complete system – the DGX™ A100.
    • “V” series with NVIDIA Volta Architecture – released in 2017, the Volta Architecture broke through the 100 teraFLOPS barrier, and NVLink® allows multiple GPUs to be combined into larger parallel processing powerhouses. These GPUs are available in a range of sizes, and also in the DGX Station. They represent great processing bang-for-buck, but they are power hungry, so we recommend a customised XENON GPU server or XENON GPU Personal Supercomputer with sufficient memory, CPU, power, and cooling to make the most of your GPU investment.
  • NVIDIA EGX – Edge Computing – NVIDIA provides a complete solution for processing AI workloads at the edge. Using containers from the NVIDIA NGC™ catalogue, AI models trained in the data centre can be run in the same containers on NVIDIA edge devices – the Jetson range. These come in kit form and are a great extension of your AI capabilities, or a great starting point for exploring AI.

GPU Accelerated Applications

GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, engineering, and enterprise applications. Pioneered in 2007 by NVIDIA, GPUs now power energy-efficient data centres in government labs, universities, enterprises, and small-and-medium businesses around the world. These GPU Computing systems are ideal for data analytics, artificial intelligence, and visualisation workloads. GPUs for computational workloads are specially designed, with features such as dedicated matrix-multiplication units, thousands of parallel cores, and internal circuitry that ensures applications can take advantage of the full capabilities of the GPU. NVIDIA maintains a catalogue of GPU accelerated applications – download the catalogue.
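As a rough illustration of why dedicated matrix hardware matters, the sketch below (a hypothetical example, not drawn from the NVIDIA catalogue) compares a naive element-by-element matrix multiply against NumPy's vectorised version, which dispatches to optimised multi-core routines – the same class of speedup that GPU matrix units take much further:

```python
import numpy as np

def matmul_naive(a, b):
    """Schoolbook matrix multiply: one scalar operation at a time."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = rng.random((64, 64))

# The vectorised multiply (a @ b) runs in optimised, parallel kernels;
# GPUs push the same idea further with thousands of cores and
# dedicated matrix (tensor) units.
assert np.allclose(matmul_naive(a, b), a @ b)
```

Both paths compute the same result; the difference is how much of the hardware's parallelism each one can use.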

How Applications Accelerate with GPUs

GPU-accelerated computing offers unprecedented application performance by offloading compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user’s perspective, applications simply run significantly faster. XENON designs GPU Computing systems for optimal performance across the whole system – power supply, cooling, memory and internal CPU performance.
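In code, this offload model means moving the data-parallel hot spot to the GPU while the rest of the program stays on the CPU. A minimal Python sketch, assuming the CuPy library as the GPU backend (CuPy and the fallback logic here are illustrative assumptions, not part of the NVIDIA or XENON material):

```python
import numpy as np

def solve_step(data):
    """Offload the compute-intensive matrix product to the GPU when a
    CUDA device and CuPy are available; everything else stays on the CPU."""
    try:
        import cupy as cp  # assumed GPU backend; requires an NVIDIA GPU + CUDA
        gpu_data = cp.asarray(data)                 # host -> device copy
        result = cp.asnumpy(gpu_data @ gpu_data.T)  # compute on GPU, copy back
    except ImportError:
        result = data @ data.T                      # CPU fallback path
    # The remaining (non-offloaded) logic runs on the CPU as usual.
    return result.sum()

x = np.ones((4, 4))
print(solve_step(x))  # prints 64.0 on either path
```

The application-level code is unchanged whichever path runs – which is exactly why, from a user's perspective, GPU-accelerated applications "simply run significantly faster".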

Browse this section to review the NVIDIA range of GPUs.

XENON also builds specific systems with NVIDIA GPUs – view these systems.

Talk to a Solutions Architect