
NVIDIA A100 SXM
Revolutionary Performance at Every Scale with NVIDIA A100 SXM
Get state-of-the-art performance with 80GB HBM2e memory, cutting-edge Tensor Core technology, and Multi-Instance GPU (MIG) support using the NVIDIA A100 SXM. Perfect for AI, HPC and data analytics workloads. Available now on Hyperstack.

Unrivalled Performance in…
Deep Learning Training
Achieve up to 20x faster training speeds with third-generation Tensor Cores.
AI Inference
Experience up to 249x higher inference performance than CPUs for rapid AI deployment.
Data Analytics
Accelerate big data processing by up to 8x for scalable, complex workloads with 80GB memory.
High-Performance Computing (HPC)
Boost HPC workloads with up to 11x more performance, delivering higher throughput than the previous-generation NVIDIA V100.
NVIDIA A100 SXM
Starts from $1.36/hour

Accelerate Diverse Workloads with NVIDIA A100 SXM
Multi-Instance GPU (MIG)
Efficiently partition a single NVIDIA A100 SXM GPU into up to seven fully isolated instances, each with dedicated resources such as memory, cache and cores. MIG ensures optimal utilisation across diverse, concurrent workloads, giving you flexibility in multi-tenant and cloud environments.
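For readers who want to see what partitioning looks like in practice, below is a minimal sketch of the standard nvidia-smi MIG workflow driven from Python. It assumes an A100 with admin access on the host; the profile ID shown is illustrative, since IDs vary by driver and GPU and should be taken from the `nvidia-smi mig -lgip` listing.

```python
# Minimal sketch of the standard NVIDIA MIG workflow, driven from Python.
# Assumes admin rights on a MIG-capable A100; profile IDs vary by driver
# and GPU, so always list them first with `nvidia-smi mig -lgip`.
import subprocess

def run(cmd):
    """Run a shell command and return its stdout."""
    return subprocess.run(cmd, shell=True, check=True,
                          capture_output=True, text=True).stdout

# Enable MIG mode on GPU 0 (may require a GPU reset to take effect).
run("sudo nvidia-smi -i 0 -mig 1")

# List the GPU instance profiles supported by this GPU and driver.
print(run("sudo nvidia-smi mig -lgip"))

# Create two GPU instances with their default compute instances (-C).
# PROFILE_ID is a hypothetical example; substitute an ID from the listing above.
PROFILE_ID = "19"
run(f"sudo nvidia-smi mig -i 0 -cgi {PROFILE_ID},{PROFILE_ID} -C")

# Confirm the isolated instances, each with its own memory and compute slice.
print(run("nvidia-smi -L"))
```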
Next-Generation NVLink
Achieve seamless multi-GPU communication on the NVIDIA A100 SXM with NVLink’s 2x higher throughput over the previous generation. This technology enables massive scalability for complex workloads, ensuring maximum performance across interconnected GPUs in demanding applications.
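As an illustration of how applications exercise that interconnect, the sketch below uses PyTorch’s NCCL backend, which routes collectives such as all_reduce over NVLink when it is available. The script and its launch via torchrun are assumptions for illustration, not a Hyperstack-specific setup.

```python
# Minimal sketch of multi-GPU communication that benefits from NVLink:
# NCCL routes collectives like all_reduce over NVLink/NVSwitch when present.
# Assumes PyTorch with CUDA, launched via `torchrun --nproc_per_node=<gpus>`.
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")        # NCCL picks the fastest interconnect
    local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
    torch.cuda.set_device(local_rank)

    # Each GPU holds a tensor; all_reduce sums them across all GPUs.
    x = torch.ones(1024, 1024, device="cuda") * (local_rank + 1)
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    if local_rank == 0:
        print("Reduced value:", x[0, 0].item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```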
Benefits of NVIDIA A100 SXM 80GB
Exceptional AI Training
Train AI models faster with third-generation Tensor Cores, delivering up to 20x higher deep learning training performance so large models reach results sooner.
Versatile Precision Support
The NVIDIA A100 SXM supports multiple precisions, from FP64 for HPC to INT8 for AI inference, ensuring superior performance and adaptability for diverse computational needs, from scientific simulations to real-time machine learning.
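A rough sketch of how that precision range is typically exercised from PyTorch follows: FP64 tensors for numerically sensitive maths and FP16 autocast on the Tensor Cores for training. The model and data are placeholders, and INT8 inference would additionally rely on a toolkit such as TensorRT, which is not shown here.

```python
# Sketch of using the A100's precision range from PyTorch (placeholder model/data).
import torch

device = "cuda"

# FP64 for HPC-style numerics, e.g. a linear solve needing double precision.
a = torch.randn(512, 512, dtype=torch.float64, device=device)
b = torch.randn(512, 512, dtype=torch.float64, device=device)
x = torch.linalg.solve(a, b)

# TF32 on Tensor Cores can be enabled explicitly for matmuls on Ampere GPUs.
torch.backends.cuda.matmul.allow_tf32 = True

# Mixed-precision training step using FP16 autocast on Tensor Cores.
model = torch.nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

inputs = torch.randn(64, 1024, device=device)
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = model(inputs).square().mean()
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```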
Elastic Scalability with MIG
MIG technology enables dynamic partitioning of GPU resources, allowing multiple users or applications to run simultaneously. This elasticity is perfect for cloud environments and diverse workload demands, optimising resource usage.
World’s Fastest Memory Bandwidth
With the NVIDIA A100 SXM’s 2TB/s memory bandwidth, massive datasets can be processed efficiently, reducing time to solution for data-intensive workloads. Ideal for large-scale simulations and AI model training.
Energy and Cost Efficiency
The NVIDIA A100 SXM provides industry-leading performance per watt, ensuring energy-efficient operation. Its robust TDP of up to 400W balances power consumption with top-tier computational capability for demanding data centre needs.
Seamless Infrastructure Integration
Designed for compatibility with NVIDIA-certified systems, including DGX and HGX platforms, the NVIDIA A100 SXM integrates smoothly into your existing infrastructures, offering straightforward deployment and exceptional performance scalability across environments.
Technical Specifications
Frequently Asked Questions
Our product support and development teams work hand in hand to deliver the best solutions available.
What workload is the NVIDIA A100 SXM ideal for?
The NVIDIA A100 SXM GPU is ideal for AI training, inference, data analytics, and high-performance computing (HPC), delivering unmatched acceleration for diverse and demanding workloads.
What is the NVIDIA A100 SXM price?
The NVIDIA A100 SXM pricing on Hyperstack starts from $1.36/hour for reservation and $1.60/hour for on-demand access.
How does MIG technology improve GPU efficiency?
MIG allows a single GPU to be divided into isolated instances, optimising resource utilisation and enabling multiple users or applications to operate simultaneously without interference.
Does the NVIDIA A100 SXM support NVLink?
Yes, the NVIDIA A100 SXM supports NVLink, offering double the throughput of the previous generation for efficient multi-GPU communication and scalability.
What is the memory bandwidth of the NVIDIA A100 SXM?
With a memory bandwidth of 2,039 GB/s, the NVIDIA A100 SXM ensures fast and efficient processing of massive datasets for advanced workloads.
How can I access the NVIDIA A100 SXM on Hyperstack?
The NVIDIA A100 SXM GPUs are now available on Hyperstack. Reserve your access today to experience their exceptional performance.