The World’s Most Powerful Data Center GPU
Modern data centers are key to solving some of the world's most important scientific and big data challenges using high performance computing (HPC) and artificial intelligence (AI). The NVIDIA® Tesla® accelerated computing platform gives these modern data centers the power to accelerate HPC and AI workloads. NVIDIA Pascal GPU-accelerated servers deliver breakthrough performance with fewer servers, resulting in faster scientific discoveries and insights at dramatically lower cost.
With over 400 GPU-optimized HPC applications across a broad range of domains, including 10 of the top 10 HPC applications and every major deep learning framework, every modern data center can save money with the Tesla platform.
NVIDIA Tesla P100 Data Center GPU
HPC and hyperscale data centers need to support the ever-growing demands of data scientists and researchers while staying within a tight budget. The old approach of deploying lots of commodity compute nodes requires vast interconnect overhead that substantially increases costs without proportionally increasing data center performance.
The NVIDIA Tesla P100 accelerator is the most powerful data center GPU ever built, designed to boost throughput and save money for HPC and hyperscale data centers. Powered by the new NVIDIA Pascal™ architecture, Tesla P100 enables a single node to replace up to half a rack of commodity CPU nodes by delivering lightning-fast performance in a broad range of HPC applications.
NVIDIA Tesla P100 PCIe
HPC and Deep Learning Applications
Benefits: Replace 32 CPU servers with a single P100 server for HPC and deep learning
- 4.7 TeraFLOPS of double-precision performance
- 9.3 TeraFLOPS of single-precision performance
- 720 GB/s memory bandwidth (540 GB/s option available)
- 16 GB of HBM2 memory (12 GB option available)
Recommended Server Configurations: 2-4 GPUs per node
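The headline TFLOPS figures above follow directly from core count and clock speed. A minimal sketch of that arithmetic, assuming the P100 PCIe card's commonly cited 3584 FP32 CUDA cores and ~1.30 GHz boost clock (neither is stated on this page):

```python
# Hedged sketch: reproduce the datasheet's peak-FLOPS figures from core
# counts and clock speed. The core count and ~1.30 GHz boost clock below
# are assumptions about the P100 PCIe card, not values from this page.

def peak_tflops(cores: int, clock_ghz: float, flops_per_cycle: int = 2) -> float:
    """Peak throughput: cores x clock x 2 FLOPs per cycle (fused multiply-add)."""
    return cores * clock_ghz * flops_per_cycle / 1000.0

FP32_CORES = 3584              # assumed CUDA core count
FP64_CORES = FP32_CORES // 2   # Pascal GP100 runs FP64 at half the FP32 rate
BOOST_GHZ = 1.30               # assumed boost clock

print(f"FP32 peak: {peak_tflops(FP32_CORES, BOOST_GHZ):.1f} TFLOPS")  # ~9.3
print(f"FP64 peak: {peak_tflops(FP64_CORES, BOOST_GHZ):.1f} TFLOPS")  # ~4.7
```

Under these assumptions the arithmetic lands on the datasheet's 9.3 and 4.7 TeraFLOPS figures.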
NVIDIA Tesla P100 with NVLink™
Deep Learning Training
Benefits: 10X faster deep learning training vs. last-gen GPUs
- 21 TeraFLOPS of half-precision performance
- 11 TeraFLOPS of single-precision performance
- 160 GB/s NVIDIA NVLink™ interconnect
- 720 GB/s memory bandwidth
- 16 GB of HBM2 memory
Recommended Server Configurations: 4-8 GPUs per node
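Two of the NVLink-card figures above are simple products. A hedged sketch, assuming GP100 executes two packed FP16 operations per FP32 lane and that each P100 carries four NVLink links at 40 GB/s bidirectional bandwidth apiece (the per-link numbers are assumptions, not values from this page):

```python
# Hedged sketch of where the half-precision and NVLink figures come from.
# The 10.6 TFLOPS FP32 figure, link count, and per-link bandwidth are
# assumptions about the P100 NVLink card, not values stated on this page.

FP32_TFLOPS = 10.6               # assumed unrounded single-precision peak
FP16_TFLOPS = FP32_TFLOPS * 2    # GP100 packs two FP16 ops per FP32 lane

NVLINK_LINKS = 4                 # assumed NVLink links per P100
GBPS_PER_LINK = 40               # assumed bidirectional bandwidth per link

print(f"FP16 peak: {FP16_TFLOPS:.0f} TFLOPS")                 # ~21
print(f"NVLink:    {NVLINK_LINKS * GBPS_PER_LINK} GB/s")      # 160 GB/s
```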
NVIDIA Tesla P40
Deep Learning Training and Inference
Benefits: 40X faster deep learning inference than a CPU server
- 47 TeraOPS of INT8 inference performance
- 12 TeraFLOPS of single-precision performance
- 24 GB of GDDR5 Memory
- 1 decode and 2 encode video engines
Recommended Server Configurations: Up to 8 GPUs per node
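The 47 TeraOPS INT8 figure is consistent with Pascal's DP4A instruction, a four-way int8 dot product with accumulate that counts as 8 integer operations per CUDA core per cycle. A hedged sketch, assuming the P40's commonly cited 3840 CUDA cores and ~1.53 GHz boost clock (neither is stated on this page):

```python
# Hedged sketch: reproduce the P40's INT8 TOPS figure from the DP4A rate.
# The core count and ~1.53 GHz boost clock are assumptions about the P40,
# not values from this page.

def peak_int8_tops(cores: int, clock_ghz: float, ops_per_cycle: int = 8) -> float:
    """DP4A: 4 int8 multiplies + 4 accumulates = 8 ops per core per cycle."""
    return cores * clock_ghz * ops_per_cycle / 1000.0

P40_CORES = 3840     # assumed CUDA core count
P40_BOOST = 1.53     # assumed boost clock in GHz

print(f"INT8 peak: {peak_int8_tops(P40_CORES, P40_BOOST):.0f} TOPS")  # ~47
```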
NVIDIA Tesla P4
Deep Learning Inference and Video Transcoding
Benefits: 40X higher energy efficiency than a CPU for inference
- 22 TeraOPS of INT8 inference performance
- 5.5 TeraFLOPS of single-precision performance
- 1 decode and 2 encode video engines
- 50 W/75 W Power
- Low profile form factor
Recommended Server Configurations: 1-2 GPUs per node
Accelerating scientific discovery, visualizing big data for insights, and providing smart services to consumers are everyday challenges for researchers and engineers. Solving these challenges takes increasingly complex and precise simulations, the processing of tremendous amounts of data, and the training of sophisticated deep learning networks. These workloads require accelerated data centers that can meet the growing demand for computing power.
NVIDIA Tesla is the world’s leading platform for accelerated data centers, deployed by some of the world’s largest supercomputing centers and enterprises. It combines GPU accelerators, accelerated computing systems, interconnect technologies, development tools, and applications to enable faster scientific discoveries and big data insights.
At the heart of the NVIDIA Tesla platform are the massively parallel GPU accelerators that provide dramatically higher throughput for compute‑intensive workloads—without increasing the power budget and physical footprint of data centers.
NVIDIA® TESLA®. ONE PLATFORM. UNLIMITED DATA CENTER ACCELERATION.
Contact us today to design a solution based on the P100.