Bow Pod256
When you’re ready to grow your AI compute capacity at supercomputing scale, choose Bow Pod256, a system designed for production deployment in your enterprise datacenter, private cloud, or public cloud. Experience massive efficiency and productivity gains when large language model training runs complete in hours or minutes instead of weeks or months. Bow Pod256 delivers AI at scale. The Bow systems feature:
- IPU at supercomputing scale
- World-leading language and vision performance for new and emerging models
- Support for multiple users and mixed workloads across many smaller vPods, or harness the full power of all 256 Bow IPUs
- A range of host server models available from XENON, as well as switching and storage to maximise the performance of your Bow system.
Also available as Bow Pod512 and Bow Pod1024 systems.
Processors
256x Bow IPUs
1U blade units
64x Bow-2000 machines
Memory
- 230.4GB In-Processor-Memory™
- Up to 16.3TB Streaming Memory™
Performance
- 89.6 petaFLOPS FP16.16
- 22.4 petaFLOPS FP32
IPU Cores
376,832
Threads
2,260,992
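The Pod-level figures above follow from the per-device Bow IPU specifications. A minimal sketch of that arithmetic, assuming the per-IPU numbers from Graphcore's published Bow IPU spec (1,472 cores and 6 threads per core per IPU, 350 teraFLOPS FP16, 0.9GB In-Processor Memory):

```python
# Derive Bow Pod256 totals from per-IPU Bow specs.
# Per-IPU figures below are assumptions taken from Graphcore's
# published Bow IPU spec, not from this datasheet.
IPUS = 256
CORES_PER_IPU = 1_472       # assumed per-IPU core count
THREADS_PER_CORE = 6        # assumed hardware threads per core
FP16_TFLOPS_PER_IPU = 350   # assumed FP16.16 peak per IPU
IN_PROC_MEM_GB_PER_IPU = 0.9  # assumed In-Processor-Memory per IPU

cores = IPUS * CORES_PER_IPU                     # 376,832
threads = cores * THREADS_PER_CORE               # 2,260,992
fp16_pflops = IPUS * FP16_TFLOPS_PER_IPU / 1000  # 89.6 petaFLOPS
in_proc_mem_gb = IPUS * IN_PROC_MEM_GB_PER_IPU   # 230.4 GB

print(cores, threads, fp16_pflops, in_proc_mem_gb)
```

The same scaling applies to the larger systems: Bow Pod512 and Bow Pod1024 double and quadruple these totals respectively.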
IPU-Fabric™
2.8Tbps
Host-Link
100 GE RoCEv2
Software
- Poplar
- TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO
- OpenBMC, Redfish DMTF, IPMI over LAN, Prometheus, and Grafana
- Slurm, Kubernetes
- OpenStack, VMware ESXi
System Weight
1,800kg + Host servers and switches
System Dimensions
64U + Host servers and switches
Host Server
Selection of approved host servers from Graphcore partners
Storage
Selection of approved systems from Graphcore partners
Thermal
Air-Cooled