IPU-POD128 «NEW»

When you’re ready to scale, choose IPU-POD128 for production deployment in your enterprise datacenter, private or public cloud. Experience massive efficiency and productivity gains when large language model training runs complete in hours or minutes instead of weeks or months. IPU-POD128 delivers for AI at scale.

  • Superior scaling and blazing-fast performance
  • Full systems integration support for datacenter installation
  • AI expert support to develop and deploy models at scale
Get a Quote
XENON Graphcore IPU POD128
IPU-POD128 Features

IPUs: 128x GC200 IPUs

IPU-M2000s: 32x IPU-M2000s

Exchange-Memory: 8.3TB (includes 115.2GB In-Processor Memory and 8.2TB Streaming Memory)

Performance:
  • 32 petaFLOPS FP16.16
  • 8 petaFLOPS FP32

IPU Cores: 188,416

Threads: 1,130,496

IPU-Fabric: 2.8Tbps

Host-Link: 100 GE RoCEv2

Software:
  • Poplar
  • TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO
  • OpenBMC, DMTF Redfish, IPMI over LAN, Prometheus, and Grafana
  • Slurm, Kubernetes
  • OpenStack, VMware ESXi

System Weight: 900kg, plus host servers and switches

System Dimensions: 32U, plus host servers and switches

Host Server: Selection of approved host servers from Graphcore partners

Thermal: Air-Cooled
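The core and thread counts in the table follow directly from the 128 GC200 IPUs. A quick sanity check is sketched below, assuming 1,472 cores per GC200 and 6 hardware threads per core (per-IPU figures not stated in the table above):

```python
# Sanity-check the aggregate figures in the IPU-POD128 spec table.
# Assumptions (not stated in the table): each GC200 IPU has 1,472
# cores, each core running 6 hardware threads.

NUM_IPUS = 128
CORES_PER_IPU = 1472      # assumed GC200 core count
THREADS_PER_CORE = 6      # assumed threads per core

total_cores = NUM_IPUS * CORES_PER_IPU
total_threads = total_cores * THREADS_PER_CORE

print(total_cores)    # matches the table's 188,416 IPU cores
print(total_threads)  # matches the table's 1,130,496 threads
```

The same arithmetic gives the In-Processor Memory figure: 128 IPUs at 900MB each is 115.2GB, as listed under Exchange-Memory.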


Get a Quote | Talk to a Solutions Architect