Updated: 15 Aug 2025
We’re back with your weekly dose of Hyperstack updates!
Grab your coffee and stretch out that scroll finger: here’s what’s new this week. From H100 reservations to a gpt-oss-120b tutorial and exciting new blogs, there’s plenty to explore.
Let’s jump in!
Plan Ahead for the Next Wave of AI
With the gpt-oss-120b and gpt-oss-20b models making their way into production, securing the right GPU for your workload is critical. H100 GPUs run both models efficiently; check out our tutorials below to see exactly how we’ve deployed them.
If you’re not ready to deploy yet, you can still reserve NVIDIA H100 PCIe GPUs today for future workloads. Reserving capacity now will ensure you:
- Have guaranteed access when others are scrambling.
- Avoid frustrating deployment delays and bottlenecks.
- Rely on enterprise-grade performance every single time.
Connect with our team to reserve GPU capacity today.
New on our Blog
Check out our latest blog on Hyperstack:
How to Deploy OpenAI's gpt-oss-120b on Hyperstack: A Step-by-Step Guide
gpt-oss-120b is a powerful open-source large language model with approximately 116 billion parameters. It matches the performance of OpenAI’s o3-mini and o4-mini on many benchmarks, making it an ideal choice for advanced AI applications. The best part is its deployment flexibility: you can run it on as few as two GPUs, though four or eight deliver optimal performance.
Check out the full tutorial below!
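If you want a quick feel for what a multi-GPU deployment looks like before opening the tutorial, here is a minimal sketch. It assumes a vLLM-based setup on a VM with four H100 GPUs; the serving stack, model ID and GPU count here are illustrative assumptions, and the full tutorial walks through the actual Hyperstack steps.

```python
# Minimal sketch: serving gpt-oss-120b with vLLM across multiple GPUs.
# Assumes a VM with 4 x H100 and vLLM installed (pip install vllm).
from vllm import LLM, SamplingParams

# tensor_parallel_size splits the model across GPUs; two is the minimum
# mentioned above, while four or eight give the best throughput.
llm = LLM(model="openai/gpt-oss-120b", tensor_parallel_size=4)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain what GPU tensor parallelism is."], params)
print(outputs[0].outputs[0].text)
```

For an HTTP endpoint instead of offline generation, vLLM’s `vllm serve` command exposes the same model behind an OpenAI-compatible API, e.g. `vllm serve openai/gpt-oss-120b --tensor-parallel-size 4`.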
NVIDIA H100 PCIe GPU: Specs, Pricing and How to Reserve Your VM
In this blog, we explore the NVIDIA H100 PCIe GPU, covering its specs, features, pricing and reservation options on Hyperstack. From accelerating AI model training and inference to powering complex simulations, the H100 PCIe delivers reliable performance for data-intensive workloads. Discover how on-demand, reserved and NVLink-enabled configurations can scale your projects efficiently while optimising cloud costs.
Check out the full blog below!
NVIDIA L40 GPU: Specs, Pricing and How to Reserve Your VM
In this blog, we explore the NVIDIA L40 GPU, detailing its cutting-edge specs, unique features and cloud deployment options on Hyperstack. From neural graphics acceleration and AI inferencing to real-time rendering and virtualisation, the L40 is built for next-gen workloads. Learn about pricing, storage and reservation options to optimise your GPU-powered projects.
Check out the full blog below!
Share Your Ideas
We’re always looking to make Hyperstack more useful for you.
If there’s a feature, improvement or change that would help your workflow, let us know. Your feedback helps guide future updates and ensures we’re addressing your needs.
That's it for this week's Hyperstack Rundown! Stay tuned for more updates next week and subscribe to our newsletter below for exclusive AI and GPU insights delivered to your inbox!
Missed the Previous Editions?
Catch up on everything you need to know from Hyperstack Weekly below:
Subscribe to Hyperstack!
Enter your email to get updates delivered to your inbox every week.
Get Started
Ready to build the next big thing in AI?