<img alt="" src="https://secure.insightful-enterprise-intelligence.com/783141.png" style="display:none;">


NVIDIA H100 PCIe

Power Cutting-Edge AI with NVIDIA H100 PCIe

Designed to power the world's most advanced workloads, the NVIDIA H100 PCIe excels in AI, data analytics, and HPC applications, and offers unmatched computational speed and data throughput. Available directly through Hyperstack.


Unrivalled Performance in...


AI Capabilities

Up to 30x faster AI inference and up to 9x faster model training than the NVIDIA A100.


Maths Precision

Supports a wide range of maths precisions, including FP64, FP32, FP16, and INT8; see the mixed-precision sketch after this list.


HPC Performance

Up to 7x higher performance for HPC applications.


Energy Efficiency

26x more energy efficient than CPUs.
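To show how these precision modes are typically exercised in practice, here is a minimal mixed-precision training step. It is an illustrative sketch only, assuming PyTorch with a CUDA build; the model, batch size, and learning rate are placeholders rather than anything Hyperstack-specific.

```python
import torch

# Minimal mixed-precision training step (illustrative only).
# Assumes a CUDA-capable GPU such as the H100 PCIe and a recent PyTorch build.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid FP16 underflow

inputs = torch.randn(64, 1024, device="cuda")
targets = torch.randn(64, 1024, device="cuda")

optimizer.zero_grad()
with torch.autocast(device_type="cuda", dtype=torch.float16):
    # Matrix multiplies run in FP16 on Tensor Cores;
    # numerically sensitive ops are kept in FP32 by autocast.
    loss = torch.nn.functional.mse_loss(model(inputs), targets)

scaler.scale(loss).backward()  # backward pass on the scaled loss
scaler.step(optimizer)         # unscales gradients, then steps the optimizer
scaler.update()
```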


NVIDIA H100 PCIe

Starts from $1.33/hour


Key Features


PCIe Form Factor

Built in a standard PCIe form factor, ensuring compatibility with a wide range of servers and systems.

High-Performance Architecture

Optimised for diverse workloads, the NVIDIA H100 PCIe is especially adept at handling AI, machine learning, and complex computational tasks.

Scalable Design

The modular nature of the NVIDIA H100 PCIe allows for easy scalability and integration into existing systems.

Enhanced Connectivity

The NVIDIA H100 PCIe handles swift data transfer with ease, essential for tasks involving large data sets and complex computational models.

Advanced Storage Capabilities

Local NVMe storage in GPU nodes and a parallel file system provide rapid data access for distributed training.

High Memory Bandwidth

Offers the highest memory bandwidth of any PCIe card, exceeding 2,000 GB/s, ideal for handling the largest models and most massive datasets.
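To get a rough feel for what a figure around 2 TB/s means in practice, you can time a large on-device copy and convert it to GB/s. This is an illustrative sketch assuming PyTorch on a CUDA device; measured numbers depend on transfer size, clocks, and driver version and will sit below the theoretical peak.

```python
import time
import torch

# Rough on-device memory bandwidth estimate (illustrative, not a rigorous benchmark).
n_bytes = 1 << 30  # 1 GiB per buffer
src = torch.empty(n_bytes, dtype=torch.uint8, device="cuda")
dst = torch.empty_like(src)

for _ in range(3):        # warm-up copies
    dst.copy_(src)
torch.cuda.synchronize()

iters = 20
t0 = time.perf_counter()
for _ in range(iters):
    dst.copy_(src)
torch.cuda.synchronize()
seconds = (time.perf_counter() - t0) / iters

# Each copy reads and writes n_bytes, so count the buffer twice.
print(f"~{2 * n_bytes / seconds / 1e9:.0f} GB/s effective bandwidth")
```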


Maximum TDP of 350 W

Operates at a thermal design power of up to 350 W, so the card can sustain the most demanding workloads without thermal throttling.


Multi-Instance GPU Capability

The NVIDIA H100 PCIe features Multi-Instance GPU (MIG) capability, allowing it to partition into up to 7 isolated instances. This flexibility is crucial for efficiently managing varying workload demands in dynamic computing environments.
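MIG state can also be inspected programmatically. The sketch below is illustrative and assumes the nvidia-ml-py (pynvml) bindings and a MIG-capable driver; it only reads the current configuration and does not create or destroy instances (partitioning itself is typically done with nvidia-smi or a scheduler and requires admin privileges).

```python
import pynvml  # pip install nvidia-ml-py

# Read-only look at MIG state on GPU 0 (illustrative sketch).
pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    current, pending = pynvml.nvmlDeviceGetMigMode(handle)
    print("MIG enabled:", current == pynvml.NVML_DEVICE_MIG_ENABLE)

    max_instances = pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)  # up to 7 on the H100 PCIe
    for i in range(max_instances):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(handle, i)
        except pynvml.NVMLError:
            continue  # this MIG slot is not populated
        mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
        print(f"MIG instance {i}: {mem.total / 2**30:.0f} GiB")
finally:
    pynvml.nvmlShutdown()
```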


NVIDIA NVLink

Three built-in NVLink bridges link pairs of cards with 600 GB/s of bidirectional bandwidth, providing the fast, reliable GPU-to-GPU transfers essential for high-performance computing tasks.
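To confirm that two bridged cards can reach each other directly and to get a rough feel for the link, you can run a peer-to-peer copy test. This is an illustrative sketch assuming PyTorch and at least two visible GPUs; measured throughput will land below the theoretical 600 GB/s figure.

```python
import time
import torch

# Peer-to-peer check plus a rough device-to-device copy rate between GPU 0 and GPU 1.
# Illustrative only; assumes two CUDA devices, ideally NVLink-bridged H100 PCIe cards.
assert torch.cuda.device_count() >= 2, "needs at least two visible GPUs"
print("P2P 0 -> 1:", torch.cuda.can_device_access_peer(0, 1))

n_bytes = 1 << 30  # 1 GiB payload
src = torch.empty(n_bytes, dtype=torch.uint8, device="cuda:0")
dst = torch.empty(n_bytes, dtype=torch.uint8, device="cuda:1")

for _ in range(3):        # warm-up copies
    dst.copy_(src)
for d in range(2):
    torch.cuda.synchronize(d)

iters = 20
t0 = time.perf_counter()
for _ in range(iters):
    dst.copy_(src)
for d in range(2):
    torch.cuda.synchronize(d)
seconds = (time.perf_counter() - t0) / iters
print(f"~{n_bytes / seconds / 1e9:.0f} GB/s device-to-device")
```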

Technical Specifications

GPU: NVIDIA H100 PCIe

Base Clock Frequency: 1.065 GHz

Graphics Memory: 80 GB HBM2e

Form Factor: 8x NVIDIA H100 PCIe
FP64: 26 teraFLOPS
FP64 Tensor Core: 51 teraFLOPS
FP32: 51 teraFLOPS
TF32 Tensor Core: 756 teraFLOPS
BFLOAT16 Tensor Core: 1,513 teraFLOPS
FP16 Tensor Core: 1,513 teraFLOPS
FP8 Tensor Core: 3,026 teraFLOPS
INT8 Tensor Core: 3,026 TOPS
GPU Memory: 80 GB
GPU Memory Bandwidth: 2 TB/s
Decoders: 7 NVDEC / 7 JPEG
Multi-Instance GPUs: Up to 7 MIGs @ 10 GB each
Max Thermal Design Power (TDP): Up to 350 W (configurable)
Interconnect: NVLink 600 GB/s; PCIe Gen5 128 GB/s


Also Available for Reservation


Frequently Asked Questions

We build our services around you. Our product support and product development go hand in hand to deliver you the best solutions available.

How fast is the NVIDIA H100 PCIe card?

The NVIDIA H100 PCIe delivers up to 51 teraFLOPS of peak FP32 performance, compared with 19.5 teraFLOPS on the NVIDIA A100, and considerably more at lower precisions via its Tensor Cores (see the specifications above).
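For a concrete sense of what peak FP32 numbers mean, you can time a large FP32 matrix multiplication and convert it to achieved teraFLOPS. This is an illustrative sketch assuming PyTorch on a single CUDA device; achieved throughput will sit below the datasheet peak, and results also depend on whether TF32 is allowed for FP32 matmuls.

```python
import time
import torch

# Time a large FP32 matmul and report achieved teraFLOPS (illustrative only).
torch.backends.cuda.matmul.allow_tf32 = False  # keep the matmul in true FP32
n = 8192
a = torch.randn(n, n, device="cuda", dtype=torch.float32)
b = torch.randn(n, n, device="cuda", dtype=torch.float32)

for _ in range(3):        # warm-up
    a @ b
torch.cuda.synchronize()

iters = 10
t0 = time.perf_counter()
for _ in range(iters):
    a @ b
torch.cuda.synchronize()
seconds = (time.perf_counter() - t0) / iters

flops = 2 * n ** 3        # multiply-add count for an n x n x n matmul
print(f"~{flops / seconds / 1e12:.1f} TFLOPS achieved in FP32")
```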

Is NVIDIA H100 better than NVIDIA A100?

The NVIDIA H100 offers significantly higher raw performance. Memory bandwidth is still worth considering, as the NVIDIA A100's HBM2e bandwidth can be slightly higher in some configurations, which may matter for memory-bound tasks. Power consumption is another factor: the H100 draws more power than the A100 (up to 700 W vs. 500 W for the SXM variants, and 350 W vs. 300 W for the PCIe cards).

How much is the NVIDIA H100 PCIe memory?

The NVIDIA H100 PCIe has 80 GB of GPU memory.

What kind of power requirements does the NVIDIA H100 PCIe have?

The NVIDIA H100 PCIe has a configurable TDP of 300-350 W.

What is the NVIDIA H100 GPU price?

You can rent an NVIDIA H100 GPU on Hyperstack from $2.12 per hour.