Bow Pod16

NEW

Ideal for exploration, the Bow Pod16 gives you all the power, performance and flexibility you need to fast-track your IPU prototypes and speed from pilot to production. Bow Pod16 is your easy-to-use starting point for building better, more innovative AI solutions with IPUs, whether you're focused on language and vision, exploring GNNs and LSTMs, or creating something entirely new. The Bow system features:

  • Compact 5U form factor
  • Flexible & easy to use
  • Expert support to get you up and running quickly
  • Range of host server models available from XENON, as well as switching and storage to maximise the performance of your Bow system.

Also available as a Bow Pod32 system.

Check out more about Graphcore in these blog posts on the XENON site. 

XENON Graphcore Bow Pod16
Features

  • Processors: 16x Bow IPUs
  • 1U blade units: 4x Bow-2000 machines
  • Memory: 14.4GB In-Processor-Memory™, up to 1TB Streaming Memory™
  • Performance: 5.6 petaFLOPS FP16.16, 1.4 petaFLOPS FP32
  • IPU Cores: 23,552
  • Threads: 141,312
  • IPU-Fabric™: 2.8Tbps
  • Host-Link: 100 GE RoCEv2
  • Software: Poplar SDK; TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO; OpenBMC, Redfish DMTF, IPMI over LAN, Prometheus, and Grafana; Slurm, Kubernetes; OpenStack, VMware ESXi
  • System Weight: 66kg + host server
  • System Dimensions: 4U + host servers and switches
  • Host Server: Selection of approved host servers from Graphcore partners
  • Storage: Selection of approved systems from Graphcore partners
  • Thermal: Air-cooled
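As a quick sanity check, the Pod16 totals above divide evenly across its 16 Bow IPUs. The per-IPU figures below are derived arithmetic for illustration, not numbers quoted on this page:

```python
# Derive per-IPU figures from the Bow Pod16 totals listed above.
# The per-IPU values are computed here for illustration only.

NUM_IPUS = 16

pod16 = {
    "cores": 23_552,
    "threads": 141_312,
    "fp16_petaflops": 5.6,
    "in_processor_memory_gb": 14.4,
}

# Divide each Pod16 total evenly across the 16 Bow IPUs.
per_ipu = {key: value / NUM_IPUS for key, value in pod16.items()}

print(per_ipu["cores"])    # 1472.0 cores per Bow IPU
print(per_ipu["threads"])  # 8832.0 threads per Bow IPU (6 per core)
print(round(per_ipu["fp16_petaflops"] * 1000))  # 350 TFLOPS FP16 per IPU
print(round(per_ipu["in_processor_memory_gb"] * 1000))  # 900 MB per IPU
```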

Quick Quote Request


Get a Quote Talk to a Solutions Architect