IPU-POD256 «NEW»

When you’re ready to explore AI compute at supercomputing scale, choose IPU-POD256 for production deployment in your enterprise datacenter, private cloud, or public cloud. Experience massive efficiency and productivity gains when large language model training runs complete in hours or minutes instead of weeks or months. IPU-POD256 delivers AI at scale.

  • IPU at supercomputing scale
  • World-leading language and vision performance for new and emerging models
  • Fine-grained compute & sparsity opens up new innovation
Features — IPU-POD256

IPUs: 256x GC200 IPUs

IPU-M2000s: 64x IPU-M2000s

Exchange-Memory: 16,614.4GB (includes 230.4GB In-Processor Memory and 16,384GB Streaming Memory)

Performance:
  • 64 petaFLOPS FP16.16
  • 16 petaFLOPS FP32

IPU Cores: 376,832

Threads: 2,260,992

IPU-Fabric: 2.8Tbps

Host-Link: 100 GE RoCEv2

Software:
  • Poplar
  • TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO
  • OpenBMC, Redfish DMTF, IPMI over LAN, Prometheus, and Grafana
  • Slurm, Kubernetes
  • OpenStack, VMware ESXi

System Weight: 1,800kg + Host servers and switches

System Dimensions: 64U + Host servers and switches

Host Server: Selection of approved host servers from Graphcore partners

Thermal: Air-Cooled
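The aggregate figures above follow directly from the per-unit specifications. As a quick sanity check, here is a short sketch that derives the core, thread, and Exchange-Memory totals from assumed per-chip values (1,472 tiles and 6 hardware threads per GC200, 900MB In-Processor Memory per GC200, 256GB Streaming Memory per IPU-M2000 — figures taken from Graphcore's GC200/IPU-M2000 datasheets, not stated on this page):

```python
# Sanity-check IPU-POD256 aggregate specs from per-unit figures.
# Per-chip values below are assumptions from Graphcore datasheets,
# not from this page.
NUM_IPUS = 256            # 256x GC200 IPUs
NUM_M2000S = 64           # 64x IPU-M2000s (4 GC200s each)
TILES_PER_IPU = 1472      # GC200 tile (core) count
THREADS_PER_TILE = 6      # hardware worker threads per tile
IN_PROC_MEM_GB = 0.9      # 900MB In-Processor Memory per GC200
STREAMING_MEM_GB = 256    # Streaming Memory per IPU-M2000

cores = NUM_IPUS * TILES_PER_IPU              # 376,832 IPU cores
threads = cores * THREADS_PER_TILE            # 2,260,992 threads
in_proc = NUM_IPUS * IN_PROC_MEM_GB           # 230.4GB In-Processor
streaming = NUM_M2000S * STREAMING_MEM_GB     # 16,384GB Streaming
exchange = round(in_proc + streaming, 1)      # 16,614.4GB total

print(cores, threads, exchange)
```

Running this reproduces the table's totals (376,832 cores; 2,260,992 threads; 16,614.4GB Exchange-Memory), confirming the page's aggregate numbers are internally consistent.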


Get a Quote | Talk to a Solutions Architect