Updated: 6 Jun 2025
We’re back with your weekly dose of Hyperstack updates!
Grab your coffee, stretch out that scroll finger and explore what's new this week. We've got new feature enablement in the US-1 region, stability updates and our favourite: a hands-on tutorial for running the latest DeepSeek-R1-0528 on Hyperstack. It's already stirring up buzz across social media and now you can try it too.
Let’s jump in!
What’s New on Hyperstack
Check out what's new on Hyperstack this week:
Feature Enablement in US-1 Region
Deploying the NVIDIA H100 SXM5 GPU in the US-1 region no longer has restrictions on booting from volume, hibernation or snapshotting. These features are now fully supported, giving you more flexibility and control over your VMs. Even better, the same will apply to future GPU cards in the US-1 region.
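If you provision VMs through the Hyperstack API, the sketch below shows the general shape of a VM-creation request for an H100 SXM5 instance in US-1. The endpoint, environment, flavor and image names here are assumptions for illustration; check the Hyperstack API documentation for the exact fields and for the boot-from-volume, hibernation and snapshot options.

```python
import os
import requests

# Minimal sketch, assuming the Hyperstack API base URL and request fields
# below -- verify every name against the official API reference.
API_BASE = "https://infrahub-api.nexgencloud.com/v1"
HEADERS = {"api_key": os.environ["HYPERSTACK_API_KEY"]}

payload = {
    "name": "h100-sxm5-demo",            # hypothetical VM name
    "environment_name": "default-US-1",  # hypothetical US-1 environment
    "flavor_name": "n3-H100-SXM5x8",     # hypothetical H100 SXM5 flavor
    "image_name": "Ubuntu Server 22.04 LTS R535 CUDA 12.2",
    "key_name": "my-keypair",
    "count": 1,
}

resp = requests.post(f"{API_BASE}/core/virtual-machines",
                     headers=HEADERS, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())
```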
Stability Improvements and Fixes
We made several stability improvements on our platform and fixed bugs affecting key pair name validation and volume name checks during creation.
Run Your LLM Workloads on NVIDIA H100 SXM
Running Llama 3.3, Llama 3.1 or other LLMs at scale? If you’re hitting latency issues or rising costs, try running them on the NVIDIA H100 SXM5.
Our NVIDIA H100 SXM VMs offer high bandwidth, high-speed networking and NVSwitch for full GPU-to-GPU connectivity, so your models run in parallel without bottlenecks. That means lower latency, smoother scaling and more inferences per second, especially for production-grade LLMs.
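One common way to put that GPU-to-GPU connectivity to work is tensor parallelism: shard the model across all eight GPUs so NVSwitch carries the inter-GPU traffic. Here is a minimal sketch with vLLM, assuming you have access to the gated Llama weights on Hugging Face:

```python
from vllm import LLM, SamplingParams

# Shard the model across all 8 GPUs of an H100 SXM5 VM; vLLM handles the
# inter-GPU communication, which NVSwitch carries at full bandwidth.
llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # or a Llama 3.3 checkpoint
    tensor_parallel_size=8,                     # one shard per GPU
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain NVSwitch in one sentence."], params)
print(outputs[0].outputs[0].text)
```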
Cut Inference Costs. Boost Performance.
Achieve 2.8× faster LLM inference with NVIDIA H100 SXM5 compared to NVIDIA A100 NVLink, all while being 1.65× more cost-efficient. Don't settle for less when you can get more value per token on our platform.
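To see how a 2.8× speedup can translate into better cost per token, here is a back-of-the-envelope calculation. The hourly prices and the A100 throughput below are placeholder assumptions, not published Hyperstack rates; only the 2.8× ratio comes from the comparison above.

```python
# Placeholder inputs: assumed $/GPU-hour and assumed A100 tokens/second.
a100_price, h100_price = 1.35, 2.30
a100_tps = 1_000.0
h100_tps = a100_tps * 2.8  # the 2.8x speedup quoted above

def cost_per_million(price_per_hour, tokens_per_second):
    """Dollars per million generated tokens at a given throughput."""
    return price_per_hour / (tokens_per_second * 3600) * 1e6

a100_cost = cost_per_million(a100_price, a100_tps)
h100_cost = cost_per_million(h100_price, h100_tps)
print(f"A100: ${a100_cost:.3f}/M tok  H100 SXM5: ${h100_cost:.3f}/M tok")
print(f"cost efficiency: {a100_cost / h100_cost:.2f}x")  # ~1.64x here
```

With these example numbers the ratio works out to roughly 1.6×; the exact figure depends on real prices and the throughput of your workload.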
New in Our Blog
Check out our latest blog on Hyperstack:
How to Run DeepSeek-R1-0528 on Hyperstack: A Comprehensive Guide
The latest DeepSeek-R1 update is making waves across social media, with everyone eager to try it. The new DeepSeek-R1-0528 version is available both as an 8-billion-parameter distilled model and as the full 671-billion-parameter model, and its performance now rivals top models like OpenAI's o3 and Gemini 2.5 Pro. Check out our latest blog here to start running the updated version.
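If you want a quick first taste before following the full guide, the distilled model fits on a single GPU. A minimal sketch with vLLM, assuming the Hugging Face model id below (running the full 671B model takes a multi-GPU deployment, which the blog covers):

```python
from vllm import LLM, SamplingParams

# The 8B distilled checkpoint; we assume this Hugging Face model id.
llm = LLM(model="deepseek-ai/DeepSeek-R1-0528-Qwen3-8B")

# R1-style reasoning models benefit from a generous token budget.
params = SamplingParams(temperature=0.6, max_tokens=1024)
out = llm.generate(["Prove that the square root of 2 is irrational."], params)
print(out[0].outputs[0].text)
```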
What is AI Model Fine-Tuning and What You Need to Know
Fine-tuning ensures your AI solution aligns perfectly with the specific requirements of your target market. However, going through the entire fine-tuning pipeline across multiple platforms can often be challenging. Check out our latest blog here to learn how fine-tuning brings your AI model closer to your specific use case and facilitates scalable, real-world deployment.
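As a concrete illustration of one step in that pipeline, here is a minimal LoRA fine-tuning sketch using Hugging Face's trl and peft libraries; the base model, dataset and hyperparameters are placeholders you would swap for your own use case.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset with a "text" column; substitute your own data.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder base model
    train_dataset=dataset,
    # LoRA trains small adapter matrices instead of all model weights.
    peft_config=LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05),
    args=SFTConfig(output_dir="llama31-lora", max_steps=100),
)
trainer.train()
```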
Turn On Real-Time Alerts Today
Be the first to know about maintenance, outages and critical updates on Hyperstack. Visit status.hyperstack.cloud today and click "Subscribe to Updates" in the top-right corner to customise how you stay informed.
You Build. We Power.
Big or small, your work matters. From experimental models to full-scale production pipelines, every project on Hyperstack moves AI forward. Got your success story? We'd love to hear it and you might just see it featured in our next newsletter!
That's it for this week's Hyperstack Rundown! Stay tuned for more updates next week and subscribe to our newsletter below for exclusive AI and GPU insights delivered to your inbox!
Missed the Previous Editions?
Catch up on everything you need to know from Hyperstack Weekly below: