

A Supercloud Specialised for AI

Introducing the most advanced AI cluster of its kind: Hyperstack’s HGX SXM5 H100 is built on custom DGX reference architecture. Deploy from 8 to 5,120 cards in a single cluster - only available through Hyperstack's Supercloud reservation.


Next Availability of 4,128 Cards Live on March 24!

Benefits of HGX SXM5 H100


The Largest Single Cluster of H100s

We ensure a surplus of power to run even the most demanding workloads. With 5,120 H100 80GB cards operating in a single cluster, there are no break points in the fabric, enabling multi-tenant operation at full scale.
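To put the cluster size in perspective, here is a back-of-the-envelope sizing sketch. It assumes the standard HGX configuration of 8 SXM5 GPUs per node (consistent with the "from 8 to 5,120 cards" deployment range above); the node count and aggregate-memory figure follow from that assumption.

```python
# Back-of-the-envelope sizing for a 5,120-card cluster,
# assuming the standard HGX layout of 8 SXM5 GPUs per node.
GPUS_PER_NODE = 8        # assumption: standard HGX node
TOTAL_GPUS = 5_120       # figure from the text
HBM_PER_GPU_GB = 80      # H100 80GB cards

nodes = TOTAL_GPUS // GPUS_PER_NODE
total_hbm_tb = TOTAL_GPUS * HBM_PER_GPU_GB / 1_000

print(f"{nodes} nodes, {total_hbm_tb:.0f} TB of aggregate GPU memory")
# → 640 nodes, 410 TB of aggregate GPU memory
```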


Built with NVIDIA DGX Protocols

Built on NVIDIA DGX reference architecture, the HGX SXM5 H100s model integrates seamlessly into the DGX ecosystem, providing a comprehensive solution for enterprise-level AI development and applications.


Unmatched Network Connectivity

While most platforms boast "fast" connectivity, typically ranging from 200 Gbps to 800 Gbps, Hyperstack's Supercloud operates at a staggering 3.2 Tbps (3,200 Gbps), a significant performance improvement over traditional platforms.
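A quick calculation makes the comparison concrete. Using only the figures quoted above (800 Gbps as the upper end of "typical" platforms versus 3.2 Tbps), the sketch below works out the relative speedup and the equivalent throughput in bytes per second:

```python
# Comparing per-node fabric bandwidth, using the figures from the text.
typical_gbps = 800        # upper end of "typical" platforms
supercloud_gbps = 3_200   # 3.2 Tbps

speedup = supercloud_gbps / typical_gbps
gbytes_per_s = supercloud_gbps / 8  # convert bits/s to bytes/s

print(f"{speedup:.0f}x the bandwidth, ~{gbytes_per_s:.0f} GB/s per node")
# → 4x the bandwidth, ~400 GB/s per node
```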

Unrivalled Performance in…


AI Training & Inference

30x faster inference speed and 9x faster training speed.*


LLM Performance

30x faster processing*, enhancing language model performance.


Single Cluster Scalability

The AI Supercloud environment is the largest single cluster of H100s available outside the public cloud.



It is specifically designed to allow all nodes to fully utilise their CPU, GPU and network capabilities without limits.


Service Delivery


Bespoke Solutions for Diverse Needs

Every business is unique, and at Hyperstack, we tailor our service delivery to match your specific requirements. We personally onboard you to the Supercloud, understanding your unique challenges and objectives, ensuring a solution that aligns perfectly with your business goals.


Scalability at Your Fingertips

Flexibility and scalability are the cornerstones of our service delivery model. With clusters built from 5,120 cards, no service outside the public cloud can match the scale and performance we deliver for AI workloads.

Key Features of HGX SXM5 H100

Enhanced Connectivity

The AI Supercloud operates at 3.2 Tbps using NVIDIA's high-throughput, low-latency InfiniBand 'Compute network' for superior data processing and transfer. Every node can fully saturate this network without contention, and the design allows all nodes to fully utilise their CPU, GPU and network capabilities without limits.


Supercloud Storage

Supercloud users can build an environment in any configuration. Instead of traditional storage layers, we use a Data Management Platform data layout and virtual metadata servers to distribute and parallelise all metadata and data across the cluster, delivering incredibly low latency and high performance regardless of file size or count.

SXM5 Form Factor


High-density GPU configurations, efficient cooling, and energy optimisation with the superior SXM form factor.

DGX Reference Architecture


Designed with DGX reference architecture to meet the rigorous demands of enterprise-level AI and Machine Learning applications.

Scalable Design


Modular architecture for seamless scalability to meet evolving computational needs, built in single clusters of 5,120 H100 cards.


TDP of 700 W

Designed to operate at a higher TDP compared to the PCIe version, the SXM H100 is ideal for the most intensive AI and HPC applications that demand peak computational power.




NVLink & NVSwitch

The HGX SXM5 H100 utilises NVLink and NVSwitch technologies, providing significantly higher interconnect bandwidth compared to our PCIe version. 

GPUDirect Technology


Enhanced data movement and improved performance: read and write to/from GPU memory, eliminating unnecessary memory copies, decreasing CPU overheads and reducing latency.

H100 SXM5

Delivery Within 8 Weeks for Clusters of Up to 5,120 H100 SXM5 Cards!


Dedicated Onboarding & Support Team

We have a technical team dedicated to onboarding HGX SXM5 H100 users, ready to help you set up a Supercloud environment in any configuration you require.

24/7 Expert Assistance

Our commitment to excellence extends to our customer support. We offer 24/7 assistance, ensuring that expert help is always available whenever you need it. Our team is equipped to handle any query or challenge you might encounter with the HGX SXM5 H100 and your Supercloud environment.

Personalised Support & Onboarding

At Hyperstack, we believe in providing a customer support experience that goes beyond the conventional. We offer personalised, comprehensive solutions, dedicated to onboarding you into an environment specialised to your needs.

Technical Expertise and Deep Product Knowledge

Our support team possesses deep technical expertise and extensive product knowledge. This ensures that any support you receive for the HGX SXM5 H100 and Supercloud onboarding is not just general guidance but informed, specialised assistance tailored to the nuances of this advanced technology.

How much faster is H100 than A100?

The H100 offers 30x faster inference speed and 9x faster training speed than the A100.

What are the key features of the H100 SXM?

The key features of the H100 SXM include:

  • SXM Form Factor: It is designed with high-density GPU configurations, efficient cooling, and energy optimisation, making it suitable for demanding applications.
  • DGX Reference Architecture: Integrated with the DGX reference architecture, it meets the rigorous demands of enterprise-level AI and machine learning applications.
  • Scalable Design: With a modular architecture, it allows for seamless scalability to meet evolving computational needs. It can be built into single clusters of up to 5,120 H100 cards.
  • GPUDirect Technology: This technology enhances data movement and improves performance by enabling direct reading and writing to/from GPU memory, reducing CPU overheads and latency.
  • NVLink & NVSwitch: These technologies provide significantly higher interconnect bandwidth compared to PCIe versions, enhancing overall performance.
  • TDP of 700W: The H100 SXM is designed to operate at a higher Thermal Design Power (TDP) compared to PCIe versions, making it ideal for the most intensive AI and high-performance computing (HPC) applications that demand peak computational power.
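The NVLink advantage in the list above can be quantified with the per-GPU figures NVIDIA quotes for these interconnects (NVLink 900 GB/s, PCIe Gen5 128 GB/s):

```python
# Rough per-GPU interconnect comparison, using NVIDIA's quoted figures.
nvlink_gb_s = 900       # NVLink bandwidth per GPU
pcie_gen5_gb_s = 128    # PCIe Gen5 bandwidth per GPU

ratio = nvlink_gb_s / pcie_gen5_gb_s
print(f"NVLink offers roughly {ratio:.0f}x the bandwidth of PCIe Gen5")
# → NVLink offers roughly 7x the bandwidth of PCIe Gen5
```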

Technical Specifications

Form Factor: SXM5
FP64: 34 teraFLOPS
FP64 Tensor Core: 67 teraFLOPS
FP32: 67 teraFLOPS
TF32 Tensor Core: 989 teraFLOPS
BFLOAT16 Tensor Core: 1,979 teraFLOPS
FP16 Tensor Core: 1,979 teraFLOPS
FP8 Tensor Core: 3,958 teraFLOPS
INT8 Tensor Core: 3,958 TOPS
GPU Memory: 80GB
GPU Memory Bandwidth: 3.35TB/s
Multi-Instance GPUs: Up to 7 MIGs @ 10GB each
Max Thermal Design Power (TDP): Up to 700W (configurable)
Interconnect: NVLink 900GB/s; PCIe Gen5 128GB/s
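As a sanity check on the MIG figure in the specifications (up to 7 MIG instances at 10GB each), the slices fit comfortably within an 80GB H100, with the remainder reserved by the driver:

```python
# Sanity-checking the Multi-Instance GPU (MIG) figure from the spec table:
# seven 10GB slices fit inside an 80GB H100 (remaining memory is reserved).
TOTAL_MEM_GB = 80
MIG_SLICE_GB = 10
MAX_MIGS = 7

used = MAX_MIGS * MIG_SLICE_GB
assert used <= TOTAL_MEM_GB
print(f"{MAX_MIGS} slices use {used} GB of {TOTAL_MEM_GB} GB")
# → 7 slices use 70 GB of 80 GB
```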

Only available through Supercloud reservation