<img alt="" src="https://secure.insightful-enterprise-intelligence.com/783141.png" style="display:none;">

NVIDIA H100 SXMs On-Demand at $2.40/hour - Reserve from just $1.90/hour. Reserve here

Deploy 8 to 16,384 NVIDIA H100 SXM GPUs on the AI Supercloud. Learn More

alert

We’ve been made aware of a fraudulent website impersonating Hyperstack at hyperstack.my.
This domain is not affiliated with Hyperstack or NexGen Cloud.

If you’ve been approached or interacted with this site, please contact our team immediately at support@hyperstack.cloud.

close
|

Published on 4 Aug 2025

July Monthly Update: New Features, Updates and More


We’re back with your monthly dose of Hyperstack updates!

Coffee ready? Here’s everything we packed into July, from the Feature Request page and new Ubuntu images to SOC 2 Type 1 certification and AI Studio.

Let’s jump in! 

What's New on Hyperstack

Here's a sneak peek at what's new on Hyperstack:

Hyperstack Feature Request

The Hyperstack Feature Request page is live, so you can now submit any idea you have in mind.

Have a new product in mind, an improvement to an existing service or a tool like our GPU selector or a DevOps integration? Go to the Hyperstack Feature Request Page and share your idea.

Let’s build the future of Hyperstack together.


AI Studio

Launched in July, AI Studio is your all-in-one Gen AI platform to build and deploy open-source LLMs effortlessly. No need to spin up VMs or manage infrastructure. Just bring your dataset and start creating with Gen AI.

Thanks to your input and feedback, we’ve been able to shape platforms like this, making it easier than ever to experiment with and build Gen AI workflows.

If you haven’t already, try AI Studio and check out our documentation for a quick start.


Expanded Features in Norway1

You can now hibernate, take snapshots and boot from volume on all on-demand VMs in our Norway1 region. Plus, volume storage is now available across all VM types in Norway1.

New Supported Images

In July, we also introduced the latest Ubuntu-based images. These images include updated CUDA and driver support, making it simpler to kick off your AI/ML workloads:

  • Server 22.04 LTS R570 CUDA 12.8 with Docker
  • Server 22.04 LTS R570 CUDA 12.8
  • Server 24.04 LTS R570 CUDA 12.8
  • Server 24.04 LTS R570 CUDA 12.8 with Docker
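
Once a VM boots on one of these images, a quick sanity check confirms the driver and CUDA toolkit are in place before you launch workloads. The sketch below is a minimal, unofficial example; it only assumes Python 3 plus the nvidia-smi and nvcc binaries that ship with the R570 / CUDA 12.8 images.

```python
# Minimal post-deployment check for the new Ubuntu images (unofficial sketch).
# Assumes Python 3 and the nvidia-smi / nvcc binaries from the R570 / CUDA 12.8 stack.
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command and return its trimmed stdout, or an error note."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout.strip()
    except (FileNotFoundError, subprocess.CalledProcessError) as exc:
        return f"unavailable ({exc})"

if __name__ == "__main__":
    # Driver version reported by the R570 driver stack
    print("Driver:", run(["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"]))
    # GPU model attached to the VM (e.g. an A100 or H100 SXM)
    print("GPU   :", run(["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"]))
    # CUDA toolkit version on the CUDA 12.8 images
    print("CUDA  :", run(["nvcc", "--version"]))
```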

NVIDIA A100 SXM is Now Available on Hyperstack

The high-performance NVIDIA A100 SXM GPU VMs are now live and ready for deployment on Hyperstack. Run your most demanding AI, ML and HPC workloads with on-demand pricing at $1.60/hour or reserve capacity for just $1.36/hour to lock in savings and guaranteed availability for your projects.
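
For a rough sense of what reserving saves, here is a back-of-the-envelope sketch. It is an illustrative calculation only, assuming the quoted rates are per GPU-hour; actual invoices depend on your usage plus any storage or networking charges.

```python
# Illustrative A100 SXM cost comparison using the rates quoted above.
# Assumption: rates are per GPU-hour; this is not a billing tool.
ON_DEMAND_RATE = 1.60  # USD per GPU-hour
RESERVED_RATE = 1.36   # USD per GPU-hour

def monthly_cost(rate: float, gpus: int = 8, hours: float = 730) -> float:
    """Approximate monthly cost for a given hourly rate and GPU count."""
    return rate * gpus * hours

on_demand = monthly_cost(ON_DEMAND_RATE)
reserved = monthly_cost(RESERVED_RATE)
print(f"On-demand (8x A100, ~730 h/month): ${on_demand:,.2f}")
print(f"Reserved  (8x A100, ~730 h/month): ${reserved:,.2f}")
print(f"Savings: ${on_demand - reserved:,.2f} (~{100 * (1 - RESERVED_RATE / ON_DEMAND_RATE):.0f}%)")
```

Reserving at $1.36/hour works out to roughly 15% off the on-demand rate, before factoring in the guaranteed availability.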

SOC 2 Type 1 Certification

NexGen Cloud is now SOC 2 Type 1 certified, reinforcing our commitment to keeping your data secure and your workloads protected. Our entire platform has been audited and verified by a licensed CPA firm, meeting rigorous security standards with enterprise-grade safeguards. And we’re not stopping here: SOC 2 Type 2 certification is already underway.

Latest Improvements and Fixes on Hyperstack

Here are the latest fixes and improvements on Hyperstack:

  • Billing Forms Validation: We improved field validation to ensure accurate and standardised billing information.

  • Firewall Rules Bug Fix: We resolved an issue that prevented firewall rules from being created during VM deployment.

  • Platform Stability: Performance improvements and reliability enhancements across the platform were made to make your experience smoother.

New on our Blog

Check out our latest blogs on Hyperstack:

How On-Demand GPUs for AI Power Faster Development and Scaling

Turning your AI idea into a production-grade product takes more than a great model alone. It demands high-performance compute infrastructure backed by powerful GPU resources, often on-demand GPUs for AI. Whether you’re an AI research team fine-tuning LLMs or a SaaS startup testing inference workloads, on-demand GPUs for AI give you the flexibility and performance you need, without upfront hardware costs or the long lead times of traditional compute. Check out our latest blog to learn more.


Top 5 Cloud GPU Rental Platforms Compared: Pricing, Performance and Features

Let’s be real, training large AI models or fine-tuning LLMs on consumer-grade GPUs is painful. You’re waiting hours (sometimes days), your machine’s on fire and worst of all? You’re still not even close to deployment. You may have also tried cloud services, only to realise they’re burning through your budget faster than your model is overfitting. That’s exactly why cloud GPU rental platforms are everyone’s go-to choice now. You just rent a cloud GPU that matches your workload and pay only for what you use. Check out our latest blog to learn more.


NVIDIA H200 SXM Guide: Specs, Pricing and How to Reserve Your GPU VM

We break down everything you need to know about using the NVIDIA H200 SXM on Hyperstack, including how it’s priced, when to choose on-demand vs reserved VMs and how to reserve your VMs. The NVIDIA H200 SXM is built on NVIDIA’s Hopper architecture for AI, high-performance computing (HPC) and memory-intensive applications. If you’re planning large-scale AI projects, this guide helps you optimise performance and budget with ease. Check out our latest blog to learn more.


NVIDIA H100 SXM Guide: Specs, Pricing and How to Reserve Your GPU VM

We explore everything you need to know about the NVIDIA H100 SXM on Hyperstack, from its 8-GPU high-performance setup and NVLink-powered scaling to storage options, pricing and how to reserve capacity. Whether you’re training large language models, running scientific workloads or performing high-performance distributed training, this guide helps you deploy faster and scale efficiently with predictable performance and cost. Check out our latest blog to learn more.



Big Ideas Deserve the Right Platform

At Hyperstack, we’re here to support builders, creators and innovators who are pushing AI forward, from early prototypes to production-ready deployments.

Got something exciting in the works? We’d love to hear about it. Share your story here for a chance to be featured in our next Weekly Rundown.


 

For any questions or suggestions, feel free to reach out at support@hyperstack.cloud. Stay tuned for even more updates and exciting tools next month.
