Updated: 31 Jul 2025
What is NVIDIA H100 SXM?
If you're looking to train large language models, run scientific workloads or perform high-performance distributed training, the NVIDIA H100 SXM is the ideal choice for you. The NVIDIA H100 SXM is built on the Hopper architecture and offers higher throughput and efficiency than NVIDIA H100 PCIe GPUs. With 80 GB of HBM3 memory, 528 fourth-generation Tensor Cores, 900 GB/s of NVLink bandwidth and the SXM5 form factor, the NVIDIA H100 SXM gives you the headroom these workloads demand.
Now, let’s look at how the NVIDIA H100 SXM is deployed on Hyperstack and what you get.
What are the Features of NVIDIA H100 SXM?
Hyperstack gives you a production-grade cloud environment to run your workloads with the NVIDIA H100 SXM.
Multi-GPU VMs for Demanding AI
| VM Flavour | GPUs | CPU Cores | RAM (GB) | Root Disk (GB) | Ephemeral Disk (GB) |
|---|---|---|---|---|---|
| n3-H100-SXM5x8 | 8 | 192 | 1800 | 100 | 32000 |
NVIDIA H100 SXM GPUs on Hyperstack come in an 8-GPU configuration. This high-performance setup combines 8 H100 SXM5 GPUs with 192 CPU cores, 1.8 TB of RAM, a 100 GB root disk and a massive 32 TB of ephemeral storage.
With everything deployed on a single node, you avoid cross-node latency, making it perfect for running large language models exceeding 65 billion parameters, multimodal AI systems and enterprise-scale inference workloads.
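If you want to sanity-check the VM before a big run, here is a minimal sketch (assuming a CUDA-enabled PyTorch install, with train.py standing in for your own training script, not something provided by Hyperstack) that confirms all 8 GPUs are visible and launches a single-node data-parallel job with torchrun.

```python
# Minimal sketch: confirm all 8 H100 SXM GPUs are visible inside the VM,
# then launch one training process per GPU with torchrun.
# Assumes PyTorch with CUDA is installed; "train.py" is a hypothetical
# training script of your own.
import subprocess
import torch

gpu_count = torch.cuda.device_count()
print(f"Visible GPUs: {gpu_count}")            # expect 8 on n3-H100-SXM5x8
for i in range(gpu_count):
    print(torch.cuda.get_device_name(i))       # expect an H100 80GB device name

# Single-node, multi-GPU launch: one worker per GPU, no cross-node latency.
subprocess.run(
    ["torchrun", "--standalone", f"--nproc_per_node={gpu_count}", "train.py"],
    check=True,
)
```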
NVSwitch to Scale
The NVSwitch on the NVIDIA H100 SXM scales NVLink further by enabling full-mesh GPU-to-GPU connectivity across all 8 GPUs in your VM. This removes bottlenecks during parallel processing and enables near-linear scaling for large-scale AI training, inference and simulation.
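To see the full-mesh fabric at work, a rough sketch like the one below times an NCCL all-reduce across all 8 GPUs. It assumes PyTorch built with NCCL support and is meant to be saved under a filename of your choice (allreduce_check.py here is just an example) and launched with torchrun, one process per GPU; the timing is indicative only.

```python
# Minimal all-reduce timing sketch for the 8-GPU H100 SXM VM. GPU-to-GPU
# traffic on this flavour travels over NVLink/NVSwitch via NCCL.
# Run with: torchrun --standalone --nproc_per_node=8 allreduce_check.py
import os
import time
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    tensor = torch.ones(256 * 1024 * 1024, device="cuda")  # ~1 GB of float32

    dist.all_reduce(tensor)            # warm-up pass
    torch.cuda.synchronize()

    start = time.time()
    dist.all_reduce(tensor)            # timed pass over the NVSwitch fabric
    torch.cuda.synchronize()
    if dist.get_rank() == 0:
        print(f"All-reduce of ~1 GB across 8 GPUs took {time.time() - start:.4f}s")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```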
High-Speed Networking and Storage
The NVIDIA H100 SXM on Hyperstack supports high-speed networking with up to 350 Gbps of bandwidth. This allows faster dataset ingestion, weight synchronisation and multi-node orchestration.
Each H100 SXM VM on Hyperstack also includes up to 32 TB of ephemeral NVMe storage, ideal for high-speed access to training datasets, temporary checkpoints, and data pre-processing during AI training runs.
For long-term data requirements, persistent NVMe storage ensures your critical datasets and models are retained securely, supporting recurring experiments and staged production deployments.
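One way to put the two tiers together during training, sketched below, is to write frequent checkpoints to the fast ephemeral NVMe disk and copy only the final artefacts to persistent storage. The mount points /ephemeral and /mnt/persistent are illustrative assumptions, so substitute the paths actually used on your VM.

```python
# Illustrative storage layout for an H100 SXM training run.
# /ephemeral and /mnt/persistent are assumed mount points -- check how the
# ephemeral and persistent NVMe volumes are actually mounted on your VM.
import shutil
from pathlib import Path

EPHEMERAL_CKPTS = Path("/ephemeral/checkpoints")   # fast scratch, lost with the VM
PERSISTENT_OUT = Path("/mnt/persistent/models")    # retained for the long term

EPHEMERAL_CKPTS.mkdir(parents=True, exist_ok=True)
PERSISTENT_OUT.mkdir(parents=True, exist_ok=True)

def save_checkpoint(step: int, payload: bytes) -> Path:
    """Write an intermediate checkpoint to ephemeral NVMe for speed."""
    path = EPHEMERAL_CKPTS / f"step_{step:07d}.ckpt"
    path.write_bytes(payload)
    return path

def promote_final(checkpoint: Path) -> Path:
    """Copy the final checkpoint to persistent storage for retention."""
    return Path(shutil.copy2(checkpoint, PERSISTENT_OUT / checkpoint.name))

# Example usage with placeholder bytes rather than a real model:
ckpt = save_checkpoint(1000, b"\x00" * 1024)
promote_final(ckpt)
```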
Snapshot Support
Take point-in-time captures of your VM with the snapshot feature for NVIDIA H100 SXM, including its OS state, libraries, configurations and files. This is ideal for A/B testing, rollback protection, and recovering from failed experiments.
Persistent Boot Volume
Each H100 SXM VM includes a 100 GB bootable volume that stores your OS, environment, tools, and configuration files. That means you can restart your VM and pick up where you left off, instantly.
NVIDIA H100 SXM GPU Pricing
Hyperstack offers different pricing depending on your workload type, whether you need temporary power or long-term infrastructure.
On-Demand H100 SXM
You can spin up an NVIDIA H100 SXM in minutes with on-demand access. This is ideal for workloads that require immediate compute power:
- Hourly Rate: $2.40/hour
- Billing: Pay-as-you-go, only for what you use
- Ideal for: Testing, short-term jobs, spike workloads, experimentation, scalable workloads
Reserved H100 SXM
You can reserve the same GPU and performance in advance at a lower price for future deployments (see the quick cost comparison below):
- Hourly Rate: $2.04/hour
- Billing: Reserved pricing for fixed durations
- Ideal for: Consistent AI workloads, product development
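As a quick sanity check on the two options, the short calculation below compares monthly costs for the full 8-GPU flavour, assuming the listed rates are per GPU and roughly 730 hours in a month.

```python
# Rough monthly cost comparison for one n3-H100-SXM5x8 VM (8 GPUs).
# Assumes the listed rates are per GPU and ~730 hours in a month.
ON_DEMAND_RATE = 2.40    # $/GPU-hour
RESERVED_RATE = 2.04     # $/GPU-hour
GPUS = 8
HOURS_PER_MONTH = 730

on_demand = ON_DEMAND_RATE * GPUS * HOURS_PER_MONTH
reserved = RESERVED_RATE * GPUS * HOURS_PER_MONTH

print(f"On-demand: ${on_demand:,.2f}/month")    # ~$14,016.00
print(f"Reserved:  ${reserved:,.2f}/month")     # ~$11,913.60
print(f"Savings:   ${on_demand - reserved:,.2f} "
      f"(~{(1 - RESERVED_RATE / ON_DEMAND_RATE):.0%})")
```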
Hibernate Your NVIDIA H100 SXM
You also do not have to worry about paying for idle GPU compute on your H100 SXM. How? Just enable the hibernation feature on Hyperstack to save costs when your VM is idle.
- Hibernate and Resume Anytime
- Keep Your Environment Intact
- No reconfiguration needed
Follow these steps to hibernate your VM:
- Go to the VM Details Page of the H100 SXM virtual machine you want to hibernate.
- In the top-right corner, hover over the "More Options" dropdown menu.
A list of available VM actions, such as Stop, Hard Reboot and Hibernate, will be displayed.
- Click on "Hibernate this VM" to put the H100 SXM into hibernation mode.
How to Reserve Your NVIDIA H100 SXM
Sometimes, on-demand GPU access can be unreliable during peak periods. If you're working on long-running projects, need predictable costs or have deadlines, reserving ensures you always have the power you need.
- You get the NVIDIA H100 SXM on-demand for $2.40/hour, but the same GPU can be reserved at a discounted hourly rate of $2.04/hour when you commit to a fixed term.
- Availability is guaranteed during peak hours and urgent deployment cycles, so you do not have to worry about the NVIDIA H100 SXM being unavailable when you need it, for example during time-sensitive model training.
- Stay in control with the Contract Usage tab. This is available only when you reserve, so you can track real-time GPU usage, estimate future consumption and prevent idle waste.
Reservation Process for NVIDIA H100 SXM
Reserving capacity in advance on Hyperstack is simple. You just need to:
- Visit the Reservation Page on Hyperstack here.
- Complete the Form, including:
- Company Name
- Use Case (e.g., LLM training, multimodal AI, inference)
- Number of GPUs Required (e.g., 8, 16, 32)
- Duration (e.g., 1 month, 3 months, 6 months)
- Submit Your Request and our team will contact you to finalise details and ensure optimal performance for your workload.
Conclusion
NVIDIA H100 SXM delivers world-class performance for AI training, inference and HPC. But Hyperstack takes it further with a complete cloud environment. From fast networking to persistent storage and snapshots, you're not just getting access to hardware; you're getting a production-ready platform.
Whether you're launching your first project or scaling large production workloads, our NVIDIA H100 SXM VMs deliver the performance without the complexity.
Getting started with Hyperstack is fast and hassle-free. Launch your VM today!
Ready to Get Started?
Here are some helpful resources that will help you deploy your NVIDIA H100 SXM on Hyperstack:
- New to Hyperstack? Sign up Today to Get Started
- Check out the Hyperstack API Documentation
- Explore the Quick Platform Tour
- Need help? Contact us anytime at support@hyperstack.cloud
FAQs
What is NVIDIA H100 SXM?
NVIDIA H100 SXM is a high-performance GPU designed for large-scale AI and HPC workloads, built on Hopper architecture.
How much memory does NVIDIA H100 SXM have?
Each H100 SXM GPU has 80 GB of HBM3 memory for ultra-high bandwidth.
How many GPUs are in one H100 SXM configuration?
Each NVIDIA H100 SXM VM includes 8 H100 SXM GPUs.
What workloads is the NVIDIA H100 SXM suited for?
NVIDIA H100 SXM is ideal for LLM training, multi-modal AI, inference pipelines, simulation and data analytics.
What is the cost of the NVIDIA H100 SXM?
The on-demand price of NVIDIA H100 SXM is $2.40/hour and reserved VMs cost $2.04/hour.
Can I hibernate my NVIDIA H100 SXM VM?
Yes, with the Hibernation Feature, you can pause your VM and resume later without setup.
How to reserve the NVIDIA H100 SXM?
Visit the reservation page, fill in your details, submit the form and the Hyperstack team will assist you further.