<img alt="" src="https://secure.insightful-enterprise-intelligence.com/783141.png" style="display:none;">
NVIDIA H100 PCIe

Power Cutting-Edge AI with NVIDIA H100 PCIe

Designed to power the world's most advanced workloads, the H100 PCIe excels in AI, data analytics, and HPC applications, and offers unmatched computational speed and data throughput. Available directly through Hyperstack.


Unrivalled Performance in...


AI Capabilities

Up to 30x faster AI inference and up to 9x faster model training than the NVIDIA A100.


Maths Precision

Supports a wide range of maths precisions, including FP64, FP32, TF32, BF16, FP16, FP8, and INT8 (see the mixed-precision sketch below).


HPC Performance

Up to 7x higher performance for HPC applications.


Energy Efficiency

26x more energy efficient than CPUs.
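As a rough illustration of the precision options highlighted above, the sketch below runs the same matrix multiply at several of the supported precisions using PyTorch. It is illustrative only and assumes a CUDA build of PyTorch with an H100 (or any recent NVIDIA GPU) visible as cuda:0; INT8 and FP8 inference paths typically go through additional tooling such as TensorRT or Transformer Engine and are not shown.

```python
import torch

# Illustrative only: the same matrix multiply at several of the
# precisions the H100 supports. Assumes a CUDA build of PyTorch and an
# NVIDIA GPU visible as cuda:0.
device = torch.device("cuda:0")

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

# FP32 (executed on TF32 Tensor Cores when TF32 is allowed).
torch.backends.cuda.matmul.allow_tf32 = True
c_fp32 = a @ b

# FP16 and BF16 via automatic mixed precision.
with torch.autocast(device_type="cuda", dtype=torch.float16):
    c_fp16 = a @ b
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    c_bf16 = a @ b

# FP64 for HPC-style workloads.
c_fp64 = a.double() @ b.double()

# INT8/FP8 inference paths usually go through TensorRT or
# Transformer Engine and are omitted here.
print(c_fp32.dtype, c_fp16.dtype, c_bf16.dtype, c_fp64.dtype)
```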


NVIDIA H100 PCIe

Starting from $2.50 per hour


Key Features


PCIe Form Factor

Designed in the standard PCIe form factor, ensuring compatibility with a wide range of systems.


High Performance Architecture

Optimised for diverse workloads, the H100 PCIe is especially adept at handling AI, machine learning, and complex computational tasks.


Scalable Design

The modular nature of the H100 PCIe allows for easy scalability and integration into existing systems.


Enhanced Connectivity

The H100 PCIe handles swift data transfer with ease, essential for tasks involving large datasets and complex computational models.


Advanced Storage Capabilities

Local NVMe storage in GPU nodes and a parallel file system provide rapid data access for distributed training.
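To show how this storage layout is typically consumed, the sketch below feeds a PyTorch DataLoader from a dataset staged on fast local storage. The /ephemeral/imagenet path is a hypothetical mount point and the example assumes torchvision is installed; substitute wherever your data is actually staged (local NVMe or the parallel file system).

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical local NVMe mount; point this at wherever your dataset is
# actually staged (local scratch or the parallel file system).
DATA_ROOT = "/ephemeral/imagenet"

dataset = datasets.ImageFolder(
    DATA_ROOT,
    transform=transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.ToTensor(),
    ]),
)

# Multiple workers plus pinned memory keep the GPU fed during training.
loader = DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    num_workers=8,
    pin_memory=True,
)

for images, labels in loader:
    images = images.to("cuda", non_blocking=True)
    labels = labels.to("cuda", non_blocking=True)
    # ... forward/backward pass of the model would go here ...
    break
```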


Better Memory Bandwidth

Offers class-leading PCIe card memory bandwidth of 2 TB/s (2,000 GB/s), ideal for handling the largest models and most massive datasets.
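As a rough way to sanity-check that bandwidth figure on a running instance, the sketch below times device-to-device copies with CUDA events in PyTorch. It is an illustrative microbenchmark under simple assumptions, not a calibrated measurement; real numbers depend on transfer size and clocks.

```python
import torch

# Rough device-to-device copy benchmark for the ~2 TB/s HBM figure.
device = torch.device("cuda:0")
n_bytes = 2 * 1024**3                      # 2 GiB payload
src = torch.empty(n_bytes, dtype=torch.uint8, device=device)
dst = torch.empty_like(src)

# Warm-up, then time a batch of copies with CUDA events.
for _ in range(3):
    dst.copy_(src)
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
iters = 20
start.record()
for _ in range(iters):
    dst.copy_(src)
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1000   # elapsed_time is in ms
# Each copy reads and writes the buffer once, so count 2x the payload.
gbps = (2 * n_bytes * iters) / seconds / 1e9
print(f"Approximate HBM bandwidth: {gbps:.0f} GB/s")
```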


Maximum TDP of 350 W

Operates at a thermal design power of up to 350 W, giving the card the thermal headroom to sustain the most demanding workloads.


Multi-Instance GPU Capability

The H100 PCIe features Multi-Instance GPU (MIG) capability, allowing it to be partitioned into up to seven isolated instances. This flexibility is crucial for efficiently managing varying workload demands in dynamic computing environments.
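Below is a minimal sketch of pinning a job to a single MIG slice, assuming MIG mode has already been enabled and instances created on the card. The UUID shown is a placeholder; the real identifiers can be listed with `nvidia-smi -L`.

```python
import os

# Placeholder MIG UUID; list the real identifiers with `nvidia-smi -L`
# after MIG mode is enabled and instances have been created.
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-00000000-0000-0000-0000-000000000000"

import torch  # import after setting the env var so CUDA sees only the slice

# Within this process the single MIG slice appears as cuda:0, with its
# own share of the card's memory and compute.
x = torch.randn(1024, 1024, device="cuda:0")
print(torch.cuda.get_device_name(0))
print(torch.cuda.mem_get_info(0))  # (free, total) bytes for this slice
```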


NVIDIA NVLink

Three NVLink bridges provide 600 GB/s of bidirectional bandwidth between a pair of H100 PCIe cards, delivering the robust, reliable data transfer that high-performance computing tasks depend on.
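The sketch below gives a rough feel for GPU-to-GPU transfer speed in PyTorch: on a system with two NVLink-bridged H100 PCIe cards and peer access enabled, the copy rides NVLink, otherwise it falls back to PCIe. It assumes at least two GPUs are visible and uses crude wall-clock timing, so treat the result as indicative only.

```python
import time
import torch

# Assumes at least two GPUs are visible as cuda:0 and cuda:1.
print("Peer access 0 -> 1:", torch.cuda.can_device_access_peer(0, 1))

a = torch.randn(128 * 1024 * 1024, device="cuda:0")  # ~512 MB of FP32

b = a.to("cuda:1")                     # warm-up copy
torch.cuda.synchronize("cuda:0")
torch.cuda.synchronize("cuda:1")

t0 = time.perf_counter()
b = a.to("cuda:1")
torch.cuda.synchronize("cuda:0")
torch.cuda.synchronize("cuda:1")
dt = time.perf_counter() - t0

gb = a.numel() * a.element_size() / 1e9
print(f"Observed GPU-to-GPU throughput: {gb / dt:.0f} GB/s")
```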

Technical Specifications

GPU: NVIDIA H100 PCIe

Base Clock Frequency: 1.065 GHz

Graphics Memory: 80 GB HBM2e

Form factor: 8x NVIDIA H100 PCIe
FP64: 26 teraFLOPS
FP64 Tensor Core: 51 teraFLOPS
FP32: 51 teraFLOPS
TF32 Tensor Core: 756 teraFLOPS
BFLOAT16 Tensor Core: 1,513 teraFLOPS
FP16 Tensor Core: 1,513 teraFLOPS
FP8 Tensor Core: 3,026 teraFLOPS
INT8 Tensor Core: 3,026 TOPS
GPU Memory: 80 GB
GPU Memory Bandwidth: 2 TB/s
Decoders: 7 NVDEC, 7 JPEG
Multi-Instance GPUs: Up to 7 MIGs @ 10 GB each
Max Thermal Design Power (TDP): Up to 350 W (configurable)
Interconnect: NVLink 600 GB/s; PCIe Gen5 128 GB/s
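Once a VM is running, the allocated card can be checked against the specifications above with a few PyTorch calls (assuming a CUDA build of PyTorch is installed):

```python
import torch

# Confirm the allocated card against the table above.
props = torch.cuda.get_device_properties(0)
print(props.name)                                   # e.g. "NVIDIA H100 PCIe"
print(f"{props.total_memory / 1024**3:.0f} GiB of GPU memory")
print(f"{props.multi_processor_count} streaming multiprocessors")
print(torch.cuda.get_device_capability(0))          # (9, 0) for Hopper
```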


Also Available for Reservation


Frequently Asked Questions

Frequently asked questions about the NVIDIA H100 PCIe

How fast is the NVIDIA H100 PCIe?

The NVIDIA H100 PCIe delivers a peak FP32 performance of 51 teraFLOPS and up to 3,026 teraFLOPS with FP8 Tensor Cores, compared with the NVIDIA A100's 19.5 teraFLOPS of peak FP32 performance.

Is NVIDIA H100 better than NVIDIA A100?

The NVIDIA H100 offers significantly higher raw performance than the NVIDIA A100 for AI and HPC workloads. Other factors to weigh include memory bandwidth (the H100 PCIe's 2 TB/s HBM2e is on par with the A100 80 GB, while the H100 SXM's 3.35 TB/s HBM3 is well ahead) and power consumption (350 W for the H100 PCIe versus 300 W for the A100 PCIe, and 700 W versus 400 W for the SXM variants).

How many GB is H100?

The NVIDIA H100 has a GPU memory of 80 GB.

What kind of power requirements does the NVIDIA H100 PCIe have?

The NVIDIA H100 PCIe has a configurable TDP of 300-350 W.