Unrivalled Performance in...
30x faster inference speed and 9x faster model training than the A100.
Supports a wide range of precision formats, including FP64, FP32, FP16, and INT8.
Up to 7x higher performance for HPC applications.
26x more energy efficient than CPUs.
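As a rough, unofficial illustration of the precision formats listed above, the NumPy sketch below prints the bit width and numeric range of each: dynamic range shrinks as precision drops, which is the trade-off that makes lower-precision inference faster.

```python
# Illustrative sketch (not NVIDIA tooling): the precision formats the H100
# supports, shown via their NumPy equivalents.
import numpy as np

# Floating-point formats: FP64, FP32, FP16
for dtype in (np.float64, np.float32, np.float16):
    info = np.finfo(dtype)
    print(f"{dtype.__name__}: {info.bits} bits, max ~ {info.max:.3g}")

# Integer format: INT8
i8 = np.iinfo(np.int8)
print(f"int8: {i8.bits} bits, range {i8.min}..{i8.max}")
```

Lower-precision formats trade range and accuracy for throughput, which is why INT8 and FP16 are favoured for inference while FP64 remains essential for HPC.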
SXM Form Factor
DGX Reference Architecture
PCIe Form Factor
Designed with a standard H100 PCIe form factor, ensuring compatibility with a wide range of systems.
High Performance Architecture
Optimised for diverse workloads, the H100 PCIe is especially adept at handling AI, machine learning, and complex computational tasks.
The modular nature of the PCIe H100 allows for easy scalability and integration into existing systems.
The PCIe H100 delivers swift data transfer, essential for tasks involving large datasets and complex computational models.
Advanced Storage Capabilities
Local NVMe in GPU nodes and parallel file system for rapid data access and distributed training.
Better Memory Bandwidth
Offers the highest memory bandwidth of any PCIe card, exceeding 2,000 GB/s (2 TB/s), ideal for handling the largest models and most massive datasets.
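As a back-of-envelope sketch (our own arithmetic, not a benchmark), the snippet below shows what 2 TB/s means in practice: the time to stream a given number of gigabytes through memory once.

```python
# Rough estimate: time to read `model_gb` gigabytes once through HBM
# at the H100 PCIe's ~2,000 GB/s memory bandwidth.
def stream_time_ms(model_gb: float, bandwidth_gbps: float = 2000.0) -> float:
    """Milliseconds to stream `model_gb` GB once at `bandwidth_gbps` GB/s."""
    return model_gb / bandwidth_gbps * 1000.0

# Streaming the card's full 80 GB of memory once takes about 40 ms.
print(stream_time_ms(80))  # 40.0
```

For memory-bound inference, this per-pass streaming time is a useful lower bound on step latency, which is why bandwidth matters as much as raw FLOPS for large models.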
Maximum TDP of 350 W
Operates unconstrained up to a thermal design power of 350 W, ensuring the card can sustain the most demanding workloads without thermal throttling.
Multi-Instance GPU Capability
The PCIe H100 features Multi-Instance GPU (MIG) capability, allowing it to partition into up to 7 isolated instances. This flexibility is crucial for efficiently managing varying workload demands in dynamic computing environments.
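As a rough, unofficial sketch of how MIG partitioning divides the card, the snippet below models the smallest profile (1g.10gb): MIG splits the 80 GB of memory into eight slices, and each of the up to seven instances receives one, roughly 10 GB apiece.

```python
# Illustrative arithmetic (not NVIDIA tooling): memory per MIG instance
# on an 80 GB H100 using the smallest (1g.10gb) profile.
TOTAL_MEM_GB = 80
MAX_INSTANCES = 7

def mig_memory_per_instance(total_gb: float = TOTAL_MEM_GB,
                            instances: int = MAX_INSTANCES) -> float:
    # Memory is carved into (instances + 1) = 8 slices of ~10 GB each;
    # each of the 7 compute instances gets one slice.
    return total_gb / (instances + 1)

print(mig_memory_per_instance())  # 10.0
```

In practice, instances are created with `nvidia-smi mig` on the host; larger profiles combine multiple slices when a workload needs more memory per instance.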
Three built-in NVLink bridges deliver 600 GB/s bidirectional bandwidth, providing robust and reliable data transfer capabilities essential for high-performance computing tasks.
GPU: NVIDIA H100 PCIe
Base Clock Frequency: 1.065 GHz
Graphics Memory: 80 GB HBM2e
Frequently Asked Questions
We build our services around you. Our product support and product development go hand in hand to deliver the best solutions available.
How fast is the H100 PCIe?
The H100 PCIe delivers roughly 51 TFLOPS of peak FP32 performance, well over double the A100's 19.5 TFLOPS.
Is H100 better than A100?
The H100 offers higher raw performance, but consider other factors. The A100 80GB's HBM2e bandwidth (around 2 TB/s) is comparable to the H100 PCIe's, which may narrow the gap for memory-bound tasks. Power consumption is another factor: the H100 draws more power than the A100 (up to 350 W vs 300 W for the PCIe cards, and up to 700 W vs 400 W in SXM form).
How many GB is H100?
The H100 has a GPU memory of 80 GB.
What kind of power requirements does the H100 PCIe have?
The H100 PCIe has a configurable TDP in the range of 300-350 W.