- Volta architecture
By pairing CUDA Cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU servers for traditional HPC and deep learning workloads.
- Tensor Cores
Equipped with 640 Tensor Cores, Tesla V100 delivers 125 teraFLOPS of deep learning performance. That's 12x more Tensor FLOPS for DL training and 6x more for DL inference than the previous generation.
- Advanced NVLink
NVIDIA NVLink in Tesla V100 delivers 2x higher throughput than the previous generation. Up to eight Tesla V100 accelerators can be interconnected at up to 300 GB/s to unleash the highest application performance possible on a single server.
- HBM2
With a combination of improved raw bandwidth (900 GB/s) and higher DRAM utilization efficiency, Tesla V100 delivers 1.5x higher memory bandwidth than the previous generation.
- Maximum efficiency mode
The maximum efficiency mode allows data centers to achieve higher compute capacity per rack within the existing power budget. In this mode, Tesla V100 runs at peak processing efficiency, providing up to 80% of the performance at half the power consumption.
- Improved programmability
Tesla V100 is architected from the ground up to simplify programmability. Its independent thread scheduling enables finer-grain synchronization and improves GPU utilization by sharing resources among small jobs.
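The Tensor Core figures above come from a mixed-precision design: each Volta Tensor Core performs a fused matrix multiply-accumulate, D = A x B + C, on small tiles, consuming FP16 inputs and accumulating in FP32. A minimal NumPy sketch of that numeric contract follows; it illustrates only the data types involved, not the hardware implementation, and the function name is hypothetical:

```python
import numpy as np

def tensor_core_mma(a, b, c):
    """Illustrative model of one Tensor Core op: D = A @ B + C.

    A and B are rounded to FP16 (the input precision Tensor Cores
    consume); the multiply-accumulate then runs in FP32, matching
    the full-precision accumulate Volta uses for deep learning.
    """
    a16 = np.asarray(a).astype(np.float16)   # inputs stored as FP16
    b16 = np.asarray(b).astype(np.float16)
    # Products and sums are carried out in FP32, as is the bias C.
    return a16.astype(np.float32) @ b16.astype(np.float32) \
        + np.asarray(c, dtype=np.float32)

# One 4x4 tile: each element of D is four products (1.0 * 2.0) plus 0.0.
a = np.ones((4, 4))
b = np.full((4, 4), 2.0)
c = np.zeros((4, 4))
d = tensor_core_mma(a, b, c)
print(d.dtype)   # float32
print(d[0, 0])   # 8.0
```

Keeping the accumulator in FP32 is what lets training converge at FP16 storage precision: the rounding happens once on the inputs, not on every partial sum.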
NVIDIA Tesla V100 is the most advanced data center GPU ever built to accelerate AI, HPC, and graphics. Powered by NVIDIA Volta, the latest GPU architecture, Tesla V100 offers the performance of many CPUs in a single GPU - enabling data scientists, researchers, and engineers to tackle challenges that were once thought impossible.