We’re back with your weekly dose of Hyperstack updates!
Grab your coffee and stretch out that scroll finger: here’s what’s new this week. From Account Lockout Protection to the latest Qwen tutorial and blogs, there’s plenty to explore.
Let’s jump in!
What's New on Hyperstack
Here's what's new on Hyperstack this week:
Account Lockout Protection
We’ve made key security enhancements this week!
Now, user accounts are temporarily locked after multiple failed login attempts, adding an extra layer of protection against unauthorised access.
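For readers curious about the general pattern, here is a minimal Python sketch of lockout-after-repeated-failures logic. The threshold, lockout window and in-memory store are illustrative assumptions only, not Hyperstack’s actual implementation.

```python
import time

# Illustrative values only -- not Hyperstack's actual policy.
MAX_FAILED_ATTEMPTS = 5      # assumed threshold
LOCKOUT_SECONDS = 15 * 60    # assumed lockout window (15 minutes)

# In-memory stores for the sketch; a real system would persist this state.
_failed_counts = {}   # account -> consecutive failed attempts
_locked_until = {}    # account -> lockout expiry timestamp


def is_locked(account: str) -> bool:
    """Return True while the account's temporary lockout is still active."""
    expiry = _locked_until.get(account)
    if expiry is None:
        return False
    if time.time() >= expiry:
        # Lock expired: clear state and allow fresh attempts.
        _locked_until.pop(account, None)
        _failed_counts.pop(account, None)
        return False
    return True


def record_login_attempt(account: str, success: bool) -> None:
    """Count failures and temporarily lock the account at the threshold."""
    if success:
        _failed_counts.pop(account, None)
        return
    _failed_counts[account] = _failed_counts.get(account, 0) + 1
    if _failed_counts[account] >= MAX_FAILED_ATTEMPTS:
        _locked_until[account] = time.time() + LOCKOUT_SECONDS
```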
New on our Blog
Check out our latest blog on Hyperstack:
Train Your Own ChatGPT Model for $75 with Nanochat on Hyperstack: A Step-by-Step Guide
Want to build your own ChatGPT-like model? It sounds fascinating until you start worrying about costs, hardware limitations and setup headaches. Thanks to AI expert Andrej Karpathy and his open-source repo Nanochat, you can now train your own conversational model end to end. While Andrej puts the training cost at around $100, you can do the same for just $75 with Hyperstack’s high-performance NVIDIA H100 PCIe NVLink GPUs. In our latest tutorial, we show you exactly how to make it happen, step by step.
Check out the full tutorial below!
Deploying Qwen3-VL-30B-A3B-Instruct-FP8 on Hyperstack: A Step-by-Step Guide
Qwen3-VL-30B-A3B-Instruct-FP8 is an FP8-quantised version of the instruction-tuned Qwen3-VL-30B-A3B-Instruct model from the Qwen3-VL family. This vision-language model processes text, images and videos and generates text, excelling in tasks that require reasoning across multiple modalities. The FP8 quantisation reduces memory usage and compute requirements, making the model easier to deploy in both edge and cloud environments. In our latest tutorial, we deploy Qwen3-VL-30B-A3B-Instruct-FP8 on Hyperstack.
Check out the full tutorial below!
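If you want a feel for what querying such a deployment looks like, here is a minimal sketch that assumes the model is served behind an OpenAI-compatible endpoint (as inference servers such as vLLM expose). The endpoint URL, API key and model name are placeholders; the tutorial covers the actual setup on Hyperstack.

```python
from openai import OpenAI

# Placeholder endpoint and key -- point these at your own deployment.
client = OpenAI(
    base_url="http://<your-vm-ip>:8000/v1",  # assumed OpenAI-compatible server
    api_key="EMPTY",                         # many self-hosted servers ignore the key
)

# Ask the vision-language model to reason over an image plus a text prompt.
response = client.chat.completions.create(
    model="Qwen/Qwen3-VL-30B-A3B-Instruct-FP8",  # use the name your server registered
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/sample.jpg"}},
                {"type": "text",
                 "text": "Describe what is happening in this image."},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```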
Top Cloud GPU Providers for AI in 2025: What You Need to Know
Training, fine-tuning and deploying AI models demand high-performance GPUs, fast networking and flexible cloud infrastructure. With many platforms offering powerful GPUs, choosing the right cloud GPU provider can be daunting. To help you make the right choice, we’ve compared the top 5 cloud GPU providers based on performance, pricing, features and ideal AI use cases. Whether you’re running large-scale LLM training or experimenting with smaller AI projects, our blog highlights which platform suits your needs best.
Check out the full blog below!
Top Cloud GPU Providers for Deep Learning in 2025: What You Need to Know
Deep learning workloads are demanding: they require massive compute, fast interconnects, efficient memory access and scalability. And as models grow larger (modern LLMs, diffusion models, multi-modal networks), the quality of the underlying hardware and cloud architecture often makes or breaks your productivity. In this article, we explore the best cloud GPU providers for deep learning, covering the features each one offers, how these benefit deep learning workloads and when you might prefer one provider over another.
Check out the full blog below!
Serverless vs Dedicated Inference: What's Right for Your AI Product
Building an AI product today is not just about choosing the right model; it is about choosing the right way to run that model in production. Do you need something quick, lightweight and pay-as-you-go? Or do you need more power, performance guarantees and full control over your infrastructure? That’s where the choice between Serverless and Dedicated Inference comes in. Picking the right one can make or break your AI experience, especially when you move from prototypes to production-scale deployments.
Check out the full blog below!
Have an idea you'd like to see in Hyperstack? Let’s bring it to life.
At Hyperstack, we’re committed to continuous improvement, and your ideas are a key driver of our innovation.
→ Is there a feature you’ve been waiting for?
→ Something that could speed up your workday?
→ Or a tweak that would make things feel effortless?
Tell us what would make your Hyperstack experience even better. Your feedback sets the direction for what we build next.
That's it for this week's Hyperstack Rundown! Stay tuned for more updates next week and subscribe to our newsletter below for exclusive AI and GPU insights delivered to your inbox!
Missed the Previous Editions?
Catch up on everything you need to know from Hyperstack Weekly below:
Subscribe to Hyperstack!
Enter your email to get updates to your inbox every week
Ready to build the next big thing in AI?
Get Started




