
XENON GPU Computing


GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, engineering, and enterprise applications. Pioneered by NVIDIA®, which led the industry in applying GPUs to computing problems, GPU-based systems now power energy-efficient datacentres in government labs, universities, enterprises, and small-to-medium businesses around the world.

How Do Applications Accelerate with GPUs?

GPU-accelerated computing offers unprecedented application performance by offloading compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user’s perspective, applications simply run significantly faster. Many applications are already coded for GPU acceleration. For other custom-coded applications, the NVIDIA® CUDA programming toolkit allows relatively minor code changes to take advantage of the massive parallelism that GPUs offer. The benefit can be up to a 100x speed-up in run time for computationally intensive workloads.
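As a rough illustration of this offload model (a generic CUDA sketch, not code from any particular XENON or NVIDIA application), the compute-intensive loop becomes a GPU kernel while the surrounding setup and output stay on the CPU:

```cuda
// Minimal CUDA sketch: the heavy loop runs on the GPU, everything else on the CPU.
// The kernel name (saxpy) and problem size are illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element, so the loop runs massively in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int N = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, N * sizeof(float));   // unified memory, visible to CPU and GPU
    cudaMallocManaged(&y, N * sizeof(float));
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }   // CPU-side setup

    saxpy<<<(N + 255) / 256, 256>>>(N, 3.0f, x, y);             // offloaded to the GPU
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);    // the remainder of the program stays on the CPU
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Compiled with nvcc, only the saxpy loop executes on the GPU; porting an existing CPU loop in this way is the kind of relatively small change the CUDA toolkit makes possible.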

Range of GPUs Available

In the past few years, the range of GPUs available has increased while at the same time they are being designed for more specific applications and workloads.

Compute Focused GPUs

Compute-intensive GPUs are built for artificial intelligence, machine learning, data science, and engineering applications. These GPUs are packed with more computing cores for higher compute power and floating-point precision, and they often include specialised functions to maximise compute throughput and accuracy.

These compute-focused GPUs are also widely used in engineering and science applications such as Computational Fluid Dynamics, Finite Element Analysis, and scientific and mathematical modelling.

Examples of these compute-intensive GPUs include NVIDIA’s data centre GPUs such as the A100, H100, and A30 (discussed further below). Graphcore’s Intelligence Processing Unit (IPU) is another interesting option, as Graphcore designed a chip specifically for compute and AI/ML processing rather than adapting an existing GPU-based design.

Graphics Focused GPUs

Graphics is the legacy application for GPUs and what they do best. These GPUs have specialised cores designed for graphics work, ray tracing, and rendering thousands of pixels simultaneously. They are now standard issue in desktops and laptops for applications ranging from gaming through to desktop publishing and photographic editing.

The evolution of the graphics GPU has seen the release of new GPUs specifically designed for the professional demands of creative artists in animation, film visual effects, and film and television production and post-production. These GPUs pack above-average performance into a small form factor to meet the demands of these more intensive applications. Examples include the NVIDIA A40 and A2.

Shared GPU Solutions

GPUs can now be shared across all applications – compute, high performance graphics and even desktop graphics. With the increased amount of memory and compute cores in modern GPUs, manufacturers have developed ways to share these resources across a virtualisation layer.

NVIDIA has been leading the way in vGPU innovation and has developed three ways to share these resources:

  1. Multi-Instance GPU (MIG) – this is currently a feature of the higher spec GPUs including the H100, A100, and A30. MIG allows the GPU to be split into a fixed number of logically isolated resource pools – for example, the A100 can be split into up to 7 MIG instances, each with an equal share of memory and GPU cores. These can be delivered as individual instances or combined to create more powerful MIG instances. In a DGX A100 system with 8 x A100s, up to 56 MIG instances can be created and delivered to users as a single GPU, as 56 separate GPU instances, or any combination in between (see the sketch after this list).
  2. vPC – this software stack delivers GPU acceleration to multiple users in a virtual desktop infrastructure (VDI). Using vPC allows a single GPU to support multiple users and accelerate the graphics elements of their individual productivity applications (such as MS Office, PowerPoint, and video conferencing). The GPU resources can be shared on a fixed allocation or a variable share, and this granular control allows system administrators to deliver a high-quality end user experience. The NVIDIA A16 is ideal for these workloads, with a good balance of compute and memory to support a number of users.

  3. vWS – the virtual workstation solution allows high-end GPUs to be shared among graphics professionals. Using vWS, a single GPU can support multiple end users of demanding workstation applications such as animation, special effects, and film post-production. The specific demands of the applications will determine the recommended number of users on a single GPU, but on a 48GB GPU the memory can be shared between 1 and 48 users, creating significant cost savings compared with setting up an individual workstation for each user.
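Returning to MIG (item 1 above), the sketch below shows how a MIG instance looks to an application once an administrator has partitioned the GPU. It assumes the instance is exposed to the process via the CUDA_VISIBLE_DEVICES environment variable (the UUID shown is a placeholder; real values are listed by nvidia-smi -L), and the CUDA program simply sees its assigned slice as an ordinary device:

```cuda
// Minimal sketch: report the device (or MIG slice) this process has been given.
// Run as, for example:
//   CUDA_VISIBLE_DEVICES=MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ./device_report
// (placeholder UUID - list real MIG devices with `nvidia-smi -L`)
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);        // a single MIG slice appears as one device
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, %zu MiB, %d SMs\n",
               i, prop.name, prop.totalGlobalMem >> 20, prop.multiProcessorCount);
    }
    return 0;
}
```

The memory and SM counts reported correspond to the slice rather than the whole GPU, which is what gives MIG its hard isolation between users.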

In addition to the cost savings outlined above, it is much easier to update and manage a user’s software stack on a vGPU setup. And when the GPU system is not being used for these applications, it can be re-configured and deployed for other workloads such as rendering.

XENON is experienced in designing GPU based solutions for individual applications and for sharing across VDIs or workstations. Contact us today to discuss your requirements and design a solution for your workflow.

[Diagram: NVIDIA Virtual GPU (vGPU) monitoring and management in the XENON NVIDIA virtual GPU IT stack]

XENON GPU Systems

XENON has been building GPU computing solutions since 2008, when we introduced clustered GPU-based computing to Australia. We offer a full range of solutions, from managed services in the cloud to personal GPU-based supercomputers, through to XENON GPU Clusters.

Build Your Solutions Today

Contact the XENON team to scope your requirements and build your GPU solution today.

Talk to a Solutions Architect