The World’s Most Powerful Data Center GPU
Modern data centers are key to solving some of the world’s most important scientific and big data challenges using high performance computing (HPC) and artificial intelligence (AI). The NVIDIA® Tesla® accelerated computing platform gives these modern data centers the power to accelerate HPC and AI workloads. NVIDIA Pascal GPU-accelerated servers deliver breakthrough performance with fewer servers, resulting in faster scientific discoveries and insights at dramatically lower cost.
With over 400 GPU-optimized HPC applications across a broad range of domains, including 10 of the top 10 HPC applications and all deep learning frameworks, every modern data center can save money with the Tesla platform.
NVIDIA Tesla P100 Data Center GPU
HPC and hyperscale data centers need to support the ever-growing demands of data scientists and researchers while staying within a tight budget. The old approach of deploying lots of commodity compute nodes requires vast interconnect overhead that substantially increases costs without proportionally increasing data center performance.
The NVIDIA Tesla P100 accelerator is the most powerful data center GPU ever built, designed to boost throughput and save money for HPC and hyperscale data centers. Powered by the new NVIDIA Pascal™ architecture, Tesla P100 enables a single node to replace up to a half rack of commodity CPU nodes by delivering lightning-fast performance across a broad range of HPC applications.
Tesla P100 PCIe
HPC and Deep Learning Applications
Key Features:
- 4.7 TeraFLOPS of double-precision performance
- 9.3 TeraFLOPS of single-precision performance
- 720 GB/s memory bandwidth (540 GB/s option available)
- 16 GB of HBM2 memory (12 GB option available)
2-4 GPUs per node
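The peak figures above follow directly from core count, clock speed, and the two floating-point operations performed by a fused multiply-add each cycle. A rough sanity check, assuming the publicly listed P100 PCIe configuration (1,792 FP64 and 3,584 FP32 CUDA cores at a ~1.3 GHz boost clock; these figures are not stated in this datasheet):

```python
# Back-of-envelope peak-FLOPS check for Tesla P100 (PCIe).
# Core counts and boost clock are assumed from public specs,
# not from this datasheet.
def peak_tflops(cores, boost_clock_ghz, flops_per_cycle=2):
    """Theoretical peak: cores x clock x 2, since one fused
    multiply-add counts as two floating-point operations."""
    return cores * boost_clock_ghz * flops_per_cycle / 1e3

fp64 = peak_tflops(cores=1792, boost_clock_ghz=1.3)   # ~4.7 TFLOPS
fp32 = peak_tflops(cores=3584, boost_clock_ghz=1.3)   # ~9.3 TFLOPS
print(f"FP64 peak: {fp64:.1f} TFLOPS, FP32 peak: {fp32:.1f} TFLOPS")
```

Theoretical peak is an upper bound; sustained application throughput also depends on memory bandwidth and occupancy.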
Tesla P100 with NVLink™
Deep Learning Training
Key Features:
- 21 TeraFLOPS of half-precision performance
- 11 TeraFLOPS of single-precision performance
- 160 GB/s NVIDIA NVLink™ Interconnect
- 720 GB/s memory bandwidth
- 16 GB of HBM2 memory
4-8 GPUs per node
Tesla P40
Deep Learning Training and Inference
Key Features:
- 47 TeraOPS of INT8 inference performance
- 12 TeraFLOPS of single-precision performance
- 24 GB of GDDR5 Memory
- 1 decode and 2 encode video engines
Up to 8 GPUs per node
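The INT8 figure above reflects Pascal's four-element dot-product-with-accumulate instruction (dp4a), which executes eight integer operations per CUDA core per cycle. A rough derivation, assuming the publicly listed P40 configuration (3,840 CUDA cores at a ~1.53 GHz boost clock; not stated in this datasheet):

```python
# Back-of-envelope INT8 peak for Tesla P40.
# Core count and boost clock are assumed from public specs.
def peak_int8_tops(cores, boost_clock_ghz, ops_per_cycle=8):
    """dp4a does 4 multiplies + 4 adds = 8 integer ops per core per cycle."""
    return cores * boost_clock_ghz * ops_per_cycle / 1e3

print(f"INT8 peak: {peak_int8_tops(3840, 1.53):.0f} TeraOPS")  # ~47
```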
Tesla P4
Deep Learning Inference and Video Transcoding
Key Features:
- 22 TeraOPS of INT8 inference performance
- 5.5 TeraFLOPS of single-precision performance
- 1 decode and 2 encode video engines
- 50 W/75 W Power
- Low profile form factor
1-2 GPUs per node
NVIDIA Pascal Architecture Delivers Accelerated Computing
Accelerating scientific discovery, visualizing big data for insights, and providing smart services to consumers are everyday challenges for researchers and engineers. Solving these challenges takes increasingly complex and precise simulations, the processing of tremendous amounts of data, or the training of sophisticated deep learning networks. These workloads require accelerated data centers that can keep pace with exponentially growing demand for computing.
NVIDIA Tesla is the world’s leading platform for accelerated data centers, deployed by some of the world’s largest supercomputing centers and enterprises. It combines GPU accelerators, accelerated computing systems, interconnect technologies, development tools, and applications to enable faster scientific discoveries and big data insights.
At the heart of the NVIDIA Tesla platform are the massively parallel GPU accelerators that provide dramatically higher throughput for compute‑intensive workloads—without increasing the power budget and physical footprint of data centers.
NVIDIA® TESLA®. ONE PLATFORM. UNLIMITED DATA CENTER ACCELERATION.