If you're looking to train large language models, run scientific workloads or perform high-performance distributed training, the NVIDIA H100 SXM is an ideal choice. Built on the Hopper architecture, the NVIDIA H100 SXM delivers higher throughput and efficiency than the NVIDIA H100 PCIe. With 80 GB of HBM3 memory, 528 fourth-generation Tensor Cores, 900 GB/s of NVLink bandwidth and the SXM5 form factor, the NVIDIA H100 SXM gives you the best of everything.
Now, let’s look at how the NVIDIA H100 SXM is deployed on Hyperstack and what you get.
Hyperstack gives you a production-grade cloud environment to run your workloads with the NVIDIA H100 SXM.
| VM Flavour | GPUs | CPU Cores | RAM (GB) | Root Disk (GB) | Ephemeral Disk (GB) |
|---|---|---|---|---|---|
| n3-H100-SXM5x8 | 8 | 192 | 1800 | 100 | 32000 |
NVIDIA H100 SXM GPUs on Hyperstack come in an 8-GPU configuration. This high-performance setup combines 8 H100 SXM5 GPUs with 192 CPU cores, 1.8 TB of RAM, a 100 GB root disk and a massive 32 TB of ephemeral storage.
With everything deployed on a single node, you avoid cross-node latency, making it perfect for running large language models exceeding 65 billion parameters, multimodal AI systems and enterprise-scale inference workloads.
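As a rough illustration, you can sanity-check whether a model's weights fit in the combined GPU memory of a single 8-GPU node. The figures below are back-of-the-envelope assumptions (2 bytes per parameter for bf16 weights only), not benchmarks; optimizer state, activations and KV cache need extra headroom in practice:

```python
# Back-of-the-envelope check: do a model's weights fit in aggregate GPU memory?
# Assumptions (illustrative only): bf16 weights at 2 bytes per parameter;
# optimizer state, activations and KV cache are NOT counted here.

GPU_MEMORY_GB = 80   # HBM3 per H100 SXM GPU
GPUS_PER_NODE = 8    # n3-H100-SXM5x8 flavour
BYTES_PER_PARAM = 2  # bf16 / fp16

def weights_fit(params_billions: float) -> bool:
    weight_gb = params_billions * 1e9 * BYTES_PER_PARAM / 1e9
    return weight_gb <= GPU_MEMORY_GB * GPUS_PER_NODE

print(weights_fit(70))   # 70B params -> 140 GB of weights vs 640 GB total: True
```

By this rough measure, a 70B-parameter model's bf16 weights (about 140 GB) fit comfortably in the node's 640 GB of aggregate HBM3, which is why single-node deployment works for models in this class.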
NVSwitch on the NVIDIA H100 SXM extends NVLink by enabling full-mesh GPU-to-GPU connectivity across all 8 GPUs in your VM. This removes bottlenecks during parallel processing and enables near-linear scaling for large-scale AI training, inference and simulation.
The NVIDIA H100 SXM on Hyperstack supports high-speed networking with up to 350 Gbps of bandwidth. This allows faster dataset ingestion, weight synchronisation and multi-node orchestration.
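To put that bandwidth in perspective, here is a quick transfer-time estimate. It assumes the full 350 Gbps is achievable end to end, which real workloads rarely sustain, so treat the result as a lower bound:

```python
# Illustrative sketch: minimum time to move a dataset at the quoted link rate.
# Assumes the entire 350 Gbps is usable end to end (a best case).

LINK_GBPS = 350  # gigabits per second

def transfer_seconds(dataset_gb: float) -> float:
    dataset_gigabits = dataset_gb * 8  # gigabytes -> gigabits
    return dataset_gigabits / LINK_GBPS

print(round(transfer_seconds(1000), 1))  # ~1 TB in about 22.9 seconds
```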
Each H100 SXM VM on Hyperstack also includes up to 32 TB of ephemeral NVMe storage, ideal for high-speed access to training datasets, temporary checkpoints, and data pre-processing during AI training runs.
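Before staging a large dataset, it's worth checking free space on the ephemeral volume. A minimal sketch using the standard library; the mount point is an assumption, so confirm the actual path on your VM (e.g. with `lsblk` or `df -h`):

```python
# Minimal sketch: check free space before staging a dataset.
# The ephemeral-disk mount point is an assumption -- verify it on your VM.
import shutil

def free_gb(path: str) -> float:
    usage = shutil.disk_usage(path)
    return usage.free / 1e9  # bytes -> GB

# "/" is used here only so the snippet runs anywhere; substitute your
# ephemeral mount point (hypothetical example: "/ephemeral").
print(free_gb("/") > 0)
```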
For long-term data requirements, persistent NVMe storage ensures your critical datasets and models are retained securely, supporting recurring experiments and staged production deployments.
Take point-in-time captures of your VM with the snapshot feature for NVIDIA H100 SXM, including its OS state, libraries, configurations and files. This is ideal for A/B testing, rollback protection, and recovering from failed experiments.
Each H100 SXM VM includes a 100 GB bootable volume that stores your OS, environment, tools, and configuration files. That means you can restart your VM and pick up where you left off, instantly.
Hyperstack offers different pricing depending on your workload type, whether you need temporary power or long-term infrastructure.
You can access NVIDIA H100 SXM in minutes via on-demand access. This is ideal for workloads that require immediate compute power:
You can also reserve the same GPUs and performance in advance at a lower price for future deployments:
You also do not have to worry about paying for idle H100 SXM compute. How? Just enable the hibernation feature on Hyperstack to save costs when your VM is idle.
Follow these steps to hibernate your VM:
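If you prefer to script hibernation rather than use the console, a REST call can do the same job. This is a hypothetical sketch: the base URL, endpoint path and header name below are assumptions for illustration, so check the official Hyperstack API documentation for the real ones before use:

```python
# Hypothetical sketch of hibernating a VM via a REST API.
# The base URL, path and header name are ASSUMPTIONS -- consult the
# Hyperstack API docs for the actual endpoint.
import urllib.request

API_BASE = "https://infrahub-api.nexgencloud.com/v1"  # assumed base URL

def hibernate_request(vm_id: int, api_key: str) -> urllib.request.Request:
    url = f"{API_BASE}/core/virtual-machines/{vm_id}/hibernate"
    return urllib.request.Request(url, method="POST",
                                  headers={"api_key": api_key})

req = hibernate_request(123, "YOUR_API_KEY")  # 123 is a placeholder VM ID
print(req.full_url)
```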
Sometimes, on-demand GPU access can be unreliable during peak demand. If you're working on long-running projects, need predictable costs or have deadlines, reserving ensures you always have the power you need.
Reserving capacity in advance on Hyperstack is simple. You just need to:
NVIDIA H100 SXM delivers world-class performance for AI training, inference and HPC. But Hyperstack takes it further with a production-grade cloud environment. From fast networking to persistent storage and snapshots, you're not just getting access to hardware; you're getting a complete production-ready platform.
Whether you're launching your first project or scaling large production workloads, our NVIDIA H100 SXM VMs deliver the performance without the complexity.
Getting started with Hyperstack is fast and hassle-free. Launch your VM today!
Here are some resources to help you deploy your NVIDIA H100 SXM on Hyperstack:
NVIDIA H100 SXM is a high-performance GPU built on the Hopper architecture and designed for large-scale AI and HPC workloads.
Each H100 SXM GPU has 80 GB of HBM3 memory for ultra-high bandwidth.
Each NVIDIA H100 SXM VM includes 8 H100 SXM GPUs.
NVIDIA H100 SXM is ideal for LLM training, multi-modal AI, inference pipelines, simulation and data analytics.
The on-demand price of NVIDIA H100 SXM is $2.40/hour and reserved VMs cost $2.04/hour.
Yes, with the Hibernation Feature, you can pause your VM and resume later without setup.
Visit the reservation page, fill in your details, submit the form and the Hyperstack team will assist you further.