Bow Pod16
Ideal for exploration, the Bow Pod16 gives you all the power, performance and flexibility you need to fast-track your IPU prototypes and speed from pilot to production. Bow Pod16 is your easy-to-use starting point for building better, more innovative AI solutions with IPUs, whether you’re focused on language and vision, exploring GNNs and LSTMs, or creating something entirely new. The Bow system features:
- Compact 5U form factor
- Flexible & easy to use
- Expert support to get you up and running quickly
- Range of host server models available from XENON, as well as switching and storage to maximise the performance of your Bow system.
Also available as a Bow Pod32 system.
Read more about Graphcore in the blog posts on the XENON site.
Processors
16x Bow IPUs
1U blade units
4x Bow-2000 machines
Memory
- 14.4GB In-Processor-Memory™
- Up to 1TB Streaming Memory™
Performance
- 5.6 petaFLOPS FP16.16
- 1.4 petaFLOPS FP32
IPU Cores
23,552
Threads
141,312
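The pod-level figures above are simply 16× the per-IPU numbers. A quick sanity check of that scaling (a sketch, assuming the published per-Bow-IPU layout of 1,472 cores with 6 hardware threads each):

```python
# Derive per-IPU figures from the Bow Pod16 totals listed above.
NUM_IPUS = 16

pod = {
    "in_processor_memory_gb": 14.4,
    "fp16_pflops": 5.6,
    "fp32_pflops": 1.4,
    "cores": 23_552,
    "threads": 141_312,
}

# Divide every pod-level total by the IPU count.
per_ipu = {k: v / NUM_IPUS for k, v in pod.items()}

# Each Bow IPU: 0.9 GB In-Processor-Memory, 0.35 PFLOPS (350 TFLOPS) FP16,
# 1,472 cores, each running 6 threads (8,832 threads per IPU).
print(per_ipu["cores"])                       # 1472.0
print(per_ipu["threads"])                     # 8832.0
print(per_ipu["threads"] / per_ipu["cores"])  # 6.0
```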
IPU-Fabric™
2.8Tbps
Host-Link
100 GE RoCEv2
Software
- Poplar SDK
- TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO
- OpenBMC, Redfish DMTF, IPMI over LAN, Prometheus, and Grafana
- Slurm, Kubernetes
- OpenStack, VMware ESXi
System Weight
66kg + Host server
System Dimensions
4U + Host servers and switches
Host Server
Selection of approved host servers from Graphcore partners
Storage
Selection of approved systems from Graphcore partners
Thermal
Air-Cooled