
NVIDIA A100

Experience a 20x Performance Leap with the NVIDIA A100 GPU

The NVIDIA A100 80GB Tensor Core GPU delivers unprecedented acceleration for the most demanding AI, data analytics, and HPC workloads. With up to 20x the performance of the previous generation and one of the world's fastest memory bandwidths, the NVIDIA A100 can handle even the largest models and datasets with ease. Available on-demand through Hyperstack!


Unrivalled Performance in…


AI Training & Inference

Up to 20x faster than the previous generation, crushing demanding workloads in AI, analytics, and HPC


Data Analytics

Doubled memory (80GB) and world-record 2TB/s bandwidth tackle massive datasets and complex models


Precision

Handle diverse workloads with a single accelerator, supporting a broad range of math formats


Scalability

Scale up or down, and adapt to dynamic workloads with ease

NVIDIA A100

Starts from $1.50/hour


Redefine GPU Performance with NVIDIA A100


AI and Supercomputing Performance

The NVIDIA A100 takes the Ampere architecture to the extreme, delivering unmatched processing power for AI, data science, and HPC workloads. It provides a massive 20x increase in Tensor FLOPS for deep learning over previous-gen GPUs, along with advanced support for sparse models and datasets. Combined with NVLink and structural sparsity acceleration, you get breakthrough performance for training and deploying immense neural networks.
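The structural sparsity mentioned above refers to the A100's 2:4 fine-grained pattern: in every aligned group of four weights, at most two may be non-zero, which the Sparse Tensor Cores can exploit for up to 2x throughput. As a hedged illustration (a pure-Python sketch, not an NVIDIA API; real pruning is done with tools such as NVIDIA's Automatic SParsity library), a checker for that pattern looks like this:

```python
# Toy check of the 2:4 structured sparsity pattern accelerated by the
# A100's Sparse Tensor Cores: every aligned group of 4 consecutive
# weights may contain at most 2 non-zeros.

def is_2_to_4_sparse(weights):
    """Return True if every aligned group of 4 values has <= 2 non-zeros."""
    assert len(weights) % 4 == 0, "length must be a multiple of 4"
    return all(
        sum(1 for w in weights[i:i + 4] if w != 0.0) <= 2
        for i in range(0, len(weights), 4)
    )

print(is_2_to_4_sparse([0.5, 0.0, -1.2, 0.0, 0.0, 0.3, 0.0, 0.9]))  # True
print(is_2_to_4_sparse([0.5, 0.1, -1.2, 0.0]))  # False: 3 non-zeros in one group
```

Weights pruned to this pattern keep a predictable layout, which is why the hardware can skip the zeros without irregular memory access.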


Optimised Versatility

This GPU also introduces powerful new technologies to optimise utilisation in data centres. With its Multi-Instance GPU capability, a single A100 can be partitioned into separate instances for right-sized acceleration. Whether tackling enormous distributed jobs or tiny tasks, the A100 allows every user to leverage its industry-leading capabilities efficiently.

Benefits of NVIDIA A100


Unmatched Versatility

Powered by NVIDIA Ampere architecture, the A100 adapts to your needs. Connect multiple GPUs with NVLink for massive tasks. Maximise the potential of every GPU in your data centre, 24/7.


Fast Deep Learning

Experience 20x the performance of previous generations with 3rd-gen Tensor Cores. The A100 delivers 312 teraflops for training and inference, accelerating your deep learning work like never before.
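The 312-teraflop figure can be sanity-checked from published A100 hardware numbers (SM count, per-core FMA rate, and boost clock); the arithmetic below is a back-of-envelope estimate of dense FP16 Tensor Core peak, not a benchmark:

```python
# Back-of-envelope derivation of the A100's ~312 TFLOPS dense FP16
# Tensor Core peak from published hardware parameters.

sms = 108                      # streaming multiprocessors in the A100 configuration
tensor_cores_per_sm = 4        # 3rd-gen Tensor Cores per SM
fma_per_core_per_clock = 256   # FP16 fused multiply-adds per Tensor Core per clock
flops_per_fma = 2              # one multiply + one add
boost_clock_hz = 1.41e9        # 1410 MHz boost clock

peak_tflops = (sms * tensor_cores_per_sm * fma_per_core_per_clock
               * flops_per_fma * boost_clock_hz) / 1e12
print(f"{peak_tflops:.0f} TFLOPS")  # ~312 TFLOPS
```

With 2:4 structural sparsity the effective rate doubles to roughly 624 TFLOPS.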


Breakthrough Interconnection

The A100's NVLink, paired with NVSwitch, connects up to 16 GPUs at 600GB/s, creating the ultimate single-server performance platform.
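The 600GB/s figure follows from the third-generation NVLink configuration: each A100 exposes 12 links at 25GB/s per direction. A minimal check:

```python
# Where the 600GB/s per-GPU NVLink figure comes from.

links = 12                   # 3rd-gen NVLink links per A100
gb_s_per_direction = 25      # GB/s one way, per link
total_bandwidth = links * gb_s_per_direction * 2   # bidirectional total
print(total_bandwidth)  # 600 GB/s
```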


World-Record Memory Bandwidth

The A100 boasts up to 80GB of HBM2e memory, reaching a groundbreaking 2TB/s of bandwidth – the first GPU to do so. Enjoy 1.7x higher bandwidth than the previous generation and 95% DRAM utilisation efficiency.
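The 2TB/s figure can be estimated from the memory interface: the 80GB A100 pairs five HBM2e stacks into a 5120-bit bus. The per-pin data rate below is rounded to 3.2 Gbps for illustration (the exact shipping rate is slightly lower, giving the quoted 2,039 GB/s):

```python
# Rough derivation of the ~2TB/s HBM2e bandwidth figure.

bus_width_bits = 5120     # five HBM2e stacks x 1024-bit interfaces
data_rate_gbps = 3.2      # per-pin data rate, rounded for illustration
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(bandwidth_gb_s)  # 2048.0 GB/s, i.e. ~2TB/s
```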


Multi-Instance Flexibility

Optimise resource allocation for every task, expand access for all users, and usher in a new era of acceleration.


AI-Driven Insights at Scale

The A100 enables real-time decision-making and complex data analysis across various industries, transforming the way businesses operate and innovate in an AI-accelerated world.

Technical Specifications

Specification: A100 80GB PCIe
CUDA Cores: 6,912
Tensor Cores: 432
FP64: 9.7 TFLOPS
FP64 Tensor Core: 19.5 TFLOPS
FP32: 19.5 TFLOPS
Tensor Float 32 (TF32): 156 TFLOPS
BFLOAT16 Tensor Core: 312 TFLOPS
FP16 Tensor Core: 312 TFLOPS
INT8 Tensor Core: 624 TOPS
GPU Memory: 80GB HBM2e
Max Thermal Design Power (TDP): 300W
Multi-Instance GPU: Up to 7 MIGs @ 10GB
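The Multi-Instance GPU row above corresponds to the finest standard partitioning of an 80GB A100. As a hedged sketch (profile names and sizes follow NVIDIA's MIG user guide; this models the compute-slice arithmetic only, not the full placement rules), the common profiles and how many instances each allows are:

```python
# Standard MIG profiles on an 80GB A100: a profile "Ng.XXgb" uses N of
# the 7 compute slices and XXGB of memory. 7 x 1g.10gb is the finest
# split, matching "Up to 7 MIGs @ 10GB".

MIG_PROFILES_A100_80GB = {
    "1g.10gb": (1, 10),   # (compute slices, GB of memory)
    "2g.20gb": (2, 20),
    "3g.40gb": (3, 40),
    "4g.40gb": (4, 40),
    "7g.80gb": (7, 80),
}

def max_instances(profile, total_slices=7):
    """Upper bound on instances of one profile, by compute slices alone."""
    slices, _gb = MIG_PROFILES_A100_80GB[profile]
    return total_slices // slices

print(max_instances("1g.10gb"))  # 7
print(max_instances("3g.40gb"))  # 2
```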

Available on Hyperstack


Frequently Asked Questions

Frequently asked questions about the NVIDIA A100.

What is the NVIDIA A100 GPU card used for?

The A100 is a data-centre GPU designed for AI workloads such as training large language models: it handles the massive datasets and complex calculations needed to train models for tasks like writing, translating, and generating code. Once models are trained, A100s also power inference for applications such as image recognition and speech-to-text.

What type of memory is NVIDIA A100?

The NVIDIA A100 GPU in the cloud uses high-bandwidth HBM2e memory. It allows fast data transfer between the GPU and its memory, which is crucial for AI workloads.

How much is NVIDIA 80GB A100 GPU?

The cost of the NVIDIA A100 depends on how you use it. On Hyperstack, you can rent an A100 by the hour with no extra add-on charges, starting at $2.75 per GPU per hour.

How easy is it to set up and use the NVIDIA A100?

With Hyperstack, spinning up an A100 is easily done in minutes - it’s just a few clicks away. Sign in and explore for free to see for yourself!