Bow Pod64
Built for multi-tenancy and concurrency, Bow Pod64 is a powerful, flexible building block for the enterprise datacenter, private cloud or public cloud. With cloud-native capabilities to support multiple users and mixed workloads across several smaller VPods (Virtual Pods), or to run as a single system for large training workloads, Bow Pod64 gives you faster time to business value for today's models and unlocks a world of new AI applications. The Bow system features:
- Ease of use & flexibility
- Faster time to business value
- Support from AI experts so you’re up and running fast
- Range of host server models available from XENON, as well as switching and storage to maximise the performance of your Bow system.
Also available as a Bow Pod128 system.
Check out more about Graphcore in the blog posts on the XENON site.

Processors
- 64x Bow IPUs
- 16x Bow-2000 machines (1U blade units)
Memory
- 57.6GB In-Processor-Memory™
- Up to 4.1TB Streaming Memory™
Performance
- 22.4 petaFLOPS FP16.16
- 5.6 petaFLOPS FP32
IPU Cores
94,208
Threads
565,248
IPU-Fabric™
2.8Tbps
Host-Link
100 GE RoCEv2
Software
- Poplar
- TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO (see the PyTorch sketch below)
- OpenBMC, Redfish DMTF, IPMI over LAN, Prometheus, and Grafana
- Slurm, Kubernetes
- OpenStack, VMware ESXi
System Weight
450kg + Host servers and switches
System Dimensions
16U + Host servers and switches
Host Server
Selection of approved host servers from Graphcore partners
Storage
Selection of approved systems from Graphcore partners
Thermal
Air-Cooled
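
The Software section above lists PyTorch among the supported frameworks. As a rough illustration only, the sketch below shows how a small PyTorch model might be wrapped for IPU execution with the Poplar SDK's PopTorch package; the model, batch sizes and replication factor are placeholder assumptions, not part of the Bow Pod64 specification.

```python
# Minimal sketch: running a placeholder PyTorch model on IPUs via PopTorch
# (part of the Poplar SDK). All values below are illustrative assumptions.
import torch
import poptorch


class TinyClassifier(torch.nn.Module):
    """Placeholder model standing in for a real training workload."""

    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(128, 10)
        self.loss = torch.nn.CrossEntropyLoss()

    def forward(self, x, labels=None):
        out = self.fc(x)
        if labels is None:
            return out                         # inference path
        return out, self.loss(out, labels)     # training path returns the loss


opts = poptorch.Options()
opts.replicationFactor(4)          # replicate the model across 4 IPUs (assumed value)
opts.deviceIterations(16)          # 16 iterations per host-to-IPU round trip

model = TinyClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
training_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)

# Dummy data; a real job would feed a poptorch.DataLoader over the actual dataset.
micro_batch = 8
n = micro_batch * 16 * 4           # micro-batch x deviceIterations x replicas
x = torch.randn(n, 128)
y = torch.randint(0, 10, (n,))
out, loss = training_model(x, y)   # compiles on first call, then runs on the IPUs
```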