XENON ARGON NVIDIA Grace Servers
The XENON ARGON™ Server platform utilises the new NVIDIA Grace™ CPU to deliver integrated CPU and GPU performance in compact rack mount servers.
The Grace CPU platform is built on ARM architecture with NVIDIA proprietary enhancements such as NVLink-C2C, which provides a high-bandwidth, coherent connection between the Grace CPU and NVIDIA GPUs. The Grace CPU is also the first server CPU to use LPDDR5X memory, which delivers the bandwidth large AI models need while reducing system power requirements.
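For software, that coherent link shows up as the GPU being able to address host memory directly. As a rough illustration only (not a XENON or NVIDIA sample), the CUDA sketch below queries the cudaDevAttrPageableMemoryAccess attribute, which reports whether a GPU can read and write ordinary pageable host memory – the behaviour NVLink-C2C enables on Grace-based systems:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    if (cudaGetDeviceCount(&deviceCount) != cudaSuccess || deviceCount == 0) {
        std::printf("No CUDA-capable GPU found.\n");
        return 1;
    }

    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, dev);

        // 1 when the GPU can coherently access pageable host memory
        // (e.g. over NVLink-C2C on Grace systems); typically 0 for a
        // GPU attached over plain PCIe.
        int pageableAccess = 0;
        cudaDeviceGetAttribute(&pageableAccess,
                               cudaDevAttrPageableMemoryAccess, dev);

        std::printf("GPU %d (%s): pageable host memory access = %s\n",
                    dev, prop.name, pageableAccess ? "yes" : "no");
    }
    return 0;
}
```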
Currently the Grace CPUs come in two “flavours” –
- Grace Superchip – CPU only, with 144 high-performance ARM Neoverse V2 cores, up to 960GB of LPDDR5X (240GB, 480GB or 960GB), and drawing 500W including memory. See the Grace CPU datasheet here.
- Grace Hopper – an integration of the Grace CPU and Hopper™ GPU architectures, combining a Grace CPU and an H100 GPU in a single superchip. The Grace Hopper chip comes with 72 ARM Neoverse V2 cores, an H100 GPU, and up to 480GB of LPDDR5X memory. See the Grace Hopper datasheet here.
The XENON ARGON range of servers is designed to optimise the power of the Grace processors, while providing for a range of configurations to meet your specific workload requirements.
The ARGON Solo R317 allows a variety of NVIDIA GPUs to be matched with the power of the Grace CPU, so you can tailor compute and GPU power to your specific workloads. The Solo R317 can be configured with up to 4 NVIDIA PCIe-based GPUs – creating a server with one of the highest core and GPU counts available in 2RU.
The ARGON Solo R517 and Solo H517 provide the integrated power of the Grace Hopper CPU+GPU superchip, delivering high performance per watt. The H517 is a two-node server with 2x Grace Hopper chips and more storage and networking options in each node, delivering double the processing power in 2U compared to the R517.
Review the specifications below and contact our team to configure a system to meet your exact workload requirements.
The NVIDIA Grace™ CPU delivers high performance, power efficiency, and high-bandwidth connectivity that can be used in diverse configurations for different data center needs.
Use Cases: Accelerate the largest AI, HPC, Cloud and Hyperscale workloads.
- CPU: NVIDIA Grace™ Superchip – up to 144 ARM Neoverse V2 cores
- Memory: Up to 960GB of LPDDR5X (240GB, 480GB or 960GB options)
- GPU: Support for up to 4x NVIDIA PCIe GPUs – H100, H100 NVL, L40S, L40, A100
- Drives: 8x E1.S hot-swap NVMe drive slots
- Network: 1x 10GbE Base-T, 1x dedicated management port. Optional: NVIDIA ConnectX®-7 or BlueField®-3 DPU
- 2U form factor
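To give a feel for how a multi-GPU configuration like this appears to software, here is a minimal CUDA sketch (illustrative only, not vendor-supplied code) that lists the visible GPUs and checks which pairs can access each other's memory directly – useful when splitting a model or dataset across up to four PCIe GPUs:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    std::printf("Visible GPUs: %d\n", count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, dev);
        std::printf("GPU %d: %s, %.1f GB memory\n",
                    dev, prop.name, prop.totalGlobalMem / 1e9);
    }

    // Which GPU pairs can read each other's memory directly (peer-to-peer)?
    for (int a = 0; a < count; ++a) {
        for (int b = 0; b < count; ++b) {
            if (a == b) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, a, b);
            std::printf("GPU %d -> GPU %d peer access: %s\n",
                        a, b, canAccess ? "yes" : "no");
        }
    }
    return 0;
}
```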
Grace Hopper™ Superchip is a breakthrough accelerated CPU designed from the ground up for giant-scale AI and high-performance computing (HPC) applications. The superchip delivers up to 10X higher performance for applications running terabytes of data, enabling scientists and researchers to reach unprecedented solutions for the world’s most complex problems.
Use Cases: AI, HPC, Data Analytics, Digital Twins, hyperscale cloud applications.
- CPU: NVIDIA Grace Hopper Superchip, includes 1x Grace CPU and 1x Hopper H100 GPU connected via NVLink-C2C
- Memory:
- Grace CPU: up to 480GB LPDDR5X memory with ECC, memory bandwidth up to 512GB/s
- Hopper H100: Up to 96GB HBM3, memory bandwidth up to 4TB/s
- Drives: 4x E1.S hot-swap NVMe drive slots
- Network: 1x 10GbE Base-T. Optional: NVIDIA ConnectX®-7 or BlueField®-3 DPU
- 2U form factor
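Because the H100 reaches the Grace CPU's LPDDR5X coherently over NVLink-C2C, GPU code can work on datasets larger than the 96GB of HBM3. As a rough sketch only (array size and behaviour will vary by system and CUDA version), CUDA managed memory lets an allocation spill from HBM into the CPU's LPDDR5X:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Grid-stride kernel so a modest launch can cover a very large array.
__global__ void scale(float* data, size_t n, float factor) {
    for (size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
         i < n; i += (size_t)gridDim.x * blockDim.x) {
        data[i] *= factor;
    }
}

int main() {
    // ~120GB of floats: more than the H100's 96GB of HBM3, so part of the
    // working set lives in the Grace CPU's LPDDR5X.
    const size_t n = 30ULL * 1000 * 1000 * 1000;

    float* data = nullptr;
    if (cudaMallocManaged(&data, n * sizeof(float)) != cudaSuccess) {
        std::printf("Allocation failed - reduce n on smaller systems.\n");
        return 1;
    }

    // First touch on the CPU (slow serial loop, illustrative only).
    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;

    scale<<<1024, 256>>>(data, n, 2.0f);
    cudaDeviceSynchronize();

    std::printf("data[0] = %.1f\n", data[0]);
    cudaFree(data);
    return 0;
}
```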
The H517 packs two Grace Hopper nodes into a single 2U chassis; each node includes:
- CPU: NVIDIA Grace Hopper Superchip, includes 1x Grace CPU and 1x Hopper H100 GPU connected via NVLink-C2C
- Memory:
- Grace CPU: up to 480GB LPDDR5X memory with ECC, memory bandwidth up to 512GB/s
- Hopper H100: Up to 96GB HBM3, memory bandwidth up to 4TB/s
- Storage per node: Up to 4 x 2.5″ Gen5 NVMe hot-swappable bays
- Network per node:
- 2x 10GbE LAN ports (supports NCSI)
- 1x Dedicated management port
- Optional: NVIDIA ConnectX®-7 or BlueField®-3 DPU
- 2U form factor, with 2 nodes.