<img alt="" src="https://secure.insightful-enterprise-intelligence.com/783141.png" style="display:none;">
Reserve here

NVIDIA H100 SXMs On-Demand at $2.40/hour - Reserve from just $1.90/hour. Reserve here

Reserve here

Deploy 8 to 16,384 NVIDIA H100 SXM GPUs on the AI Supercloud. Learn More

alert

We’ve been made aware of a fraudulent website impersonating Hyperstack at hyperstack.my.
This domain is not affiliated with Hyperstack or NexGen Cloud.

If you’ve been approached or interacted with this site, please contact our team immediately at support@hyperstack.cloud.

close
|

Updated on 5 Dec 2025

Hyperstack November Update: New Features and What's Coming Next


We’re back with your monthly dose of Hyperstack updates!

November is packed with exciting features on Hyperstack. From AI Studio updates to the new public IP behaviour during hibernation, there is plenty to explore.

Even more exciting, we’re bringing you a line-up of insightful blogs straight from our team of experts that you won’t want to miss.

New on Hyperstack 

Check out what's new on Hyperstack this month:

Public IP Behaviour Change During Hibernation

You can now choose to keep your VM’s public IP address during hibernation. By default, hibernation will automatically release the public IP, helping to minimise idle resource costs. Learn how to hibernate a Virtual Machine using the UI.

New on AI Studio

Here’s what’s new on AI Studio, our full-stack Gen AI platform, this month:

  • Import LoRA Adapters Directly From Hugging Face: No more manual downloads or messy workflows. You can now import external LoRA adapters from Hugging Face straight into AI Studio and plug them into supported base models. Use them instantly in the AI Studio Playground or deploy them via API for inference (see the quick sketch after this list).
  • Sample Datasets Now in the UI: No more hunting for starter data. You’ll now find a curated sample dataset directly inside the AI Studio interface, perfect for quick experiments, testing or getting hands-on without setup friction.
  • Export Your Fine-Tuned Models: You can now export any fine-tuned model you create in AI Studio for use outside the platform, giving you more control and flexibility in your ML workflows.
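
To give a feel for the API inference flow, here is a minimal sketch of calling an imported LoRA adapter through an OpenAI-compatible chat completions endpoint. The base URL, API key and adapter identifier below are placeholders rather than the exact names AI Studio uses; take the real values from your AI Studio console.

```python
# Minimal sketch: querying an imported LoRA adapter through an
# OpenAI-compatible chat completions endpoint. Base URL, key and model
# identifier are placeholders; substitute the values shown for your
# deployment in the AI Studio console.
from openai import OpenAI

client = OpenAI(
    base_url="https://ai-studio.example.cloud/v1",  # hypothetical endpoint
    api_key="YOUR_AI_STUDIO_API_KEY",               # placeholder key
)

response = client.chat.completions.create(
    model="my-base-model:my-lora-adapter",  # hypothetical adapter reference
    messages=[{"role": "user", "content": "Summarise this quarter's results."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```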

New Deployment Coming Live

A new deployment of the NVIDIA RTX Pro 6000 SE will be going live soon.

The NVIDIA RTX Pro 6000 Blackwell provides an instant uplift to the workloads you already run every day. If your workloads are consistent, securing early access to the new deployment keeps you ahead of the curve. Reservations start at just $1.26/hr.

Secure Guaranteed Access →

Fixes and Improvements

  • Networking is now more reliable when deploying multiple VMs in a new environment. The first VM manages the setup to avoid any configuration conflicts.
  • A new retain_ip parameter has been added to the Hibernate VM API, so you can programmatically decide whether your VM's public IP stays attached during hibernation (see the sketch after this list).
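
As a rough illustration of the new parameter, here is a sketch of a hibernate call that keeps the public IP attached. The endpoint path, host and auth header are assumptions for illustration only; the Hyperstack API documentation has the exact request shape.

```python
# Minimal sketch: hibernating a VM programmatically while keeping its
# public IP. The base URL, path and auth header below are assumed for
# illustration; check the Hyperstack API docs for the real request shape.
import requests

API_BASE = "https://api.example.cloud/v1"   # assumed base URL
HEADERS = {"api_key": "YOUR_API_KEY"}       # assumed auth header
VM_ID = 12345                               # placeholder VM id

resp = requests.post(
    f"{API_BASE}/core/virtual-machines/{VM_ID}/hibernate",  # assumed path
    headers=HEADERS,
    json={"retain_ip": True},  # new parameter: keep the public IP attached
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```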

Coming Soon on Hyperstack

Here’s a glimpse of what’s coming next:

  • Object Storage: Get ready for Hyperstack Object Storage, a scalable storage solution for handling unstructured data at any scale. Built on Amazon S3-compatible technology, it delivers a secure, flexible and API-ready way to manage everything from AI/ML datasets to backups and media files (see the sketch below).
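
Because the service is S3-compatible, the familiar boto3 workflow should carry over. The sketch below is purely illustrative; the endpoint URL, bucket name and credentials are placeholders until Object Storage goes live.

```python
# Minimal sketch: using boto3 against an S3-compatible endpoint.
# Endpoint URL, bucket name and credentials are placeholders; treat this
# as an illustration of the S3-compatible workflow, not the final API.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.cloud",  # hypothetical endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.create_bucket(Bucket="training-datasets")  # placeholder bucket name
s3.upload_file("dataset.parquet", "training-datasets", "v1/dataset.parquet")

for obj in s3.list_objects_v2(Bucket="training-datasets").get("Contents", []):
    print(obj["Key"], obj["Size"])
```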

Stay tuned for the latest updates in our upcoming rundowns.



New on the Blog: Insights From Our Team

This month is especially exciting on our blog. Our team is sharing some truly insightful pieces and you’ll definitely want to give them a read:

Run DeepSeek OCR on Hyperstack with Your Own UI: A Comprehensive Guide

In our latest tutorial, we show how to set up DeepSeek-OCR on a Hyperstack VM to create a high-performance, private OCR workflow. DeepSeek-OCR is a 3-billion-parameter multimodal model that combines a vision encoder and language decoder to extract text and preserve document structure, including tables and complex layouts. Using vLLM for GPU-accelerated serving, you can run PDFs and images through a simple Gradio UI or build a custom REST API. 
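
As a rough sketch of the REST-API route, the snippet below sends an image to a vLLM server assumed to be serving DeepSeek-OCR locally (for example via `vllm serve deepseek-ai/DeepSeek-OCR`). The model id, port and prompt are assumptions; the full tutorial covers the exact setup on Hyperstack.

```python
# Minimal sketch: sending an image to a local vLLM server assumed to be
# running DeepSeek-OCR and exposing the OpenAI-compatible API on port 8000.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

with open("invoice.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-OCR",  # assumed Hugging Face model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Extract all text, preserving table structure."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```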

Learn more in our latest tutorial.


ECC on NVIDIA H100 PCIe VMs: How to Enable or Disable It and Why It Matters

In our latest tutorial, we explain the importance of ECC (Error-Correcting Code) on NVIDIA H100 PCIe VMs. ECC detects and corrects memory errors, ensuring data integrity for AI, HPC and scientific workloads. We guide you through checking, enabling or disabling ECC safely on Hyperstack to balance performance and reliability.
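
For a quick idea of what checking and toggling ECC involves, here is a small Python sketch wrapping nvidia-smi. Changing the ECC mode requires root privileges and only takes effect after a reboot or GPU reset, so follow the tutorial's guidance before applying it on a running VM.

```python
# Minimal sketch: inspecting and toggling ECC with nvidia-smi from Python.
# `nvidia-smi -e 1` / `-e 0` need root privileges and a reboot (or GPU
# reset) before the new mode takes effect.
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Show the current and pending ECC mode for every GPU.
print(run([
    "nvidia-smi",
    "--query-gpu=index,name,ecc.mode.current,ecc.mode.pending",
    "--format=csv",
]))

# Enable ECC (1) or disable it (0); the change applies after a reboot.
run(["sudo", "nvidia-smi", "-e", "1"])
```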

Learn more in our latest tutorial.


Become an Expert with Prometheus on Hyperstack: Tips and Tricks

In our latest blog, we show how to set up Prometheus on your own Hyperstack VM to gain full control over monitoring your infrastructure. Prometheus collects, stores, and analyses metrics from VMs, containers, and applications, enabling alerts for issues like high CPU usage, slow applications, or downtime. We guide you through installing Docker, configuring Prometheus, monitoring additional servers with Node Exporter and applying zero-downtime config updates. 
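
As an example of the zero-downtime update step, the sketch below writes a minimal scrape config and asks a running Prometheus server to reload it in place. It assumes Prometheus was started with --web.enable-lifecycle on the default port 9090; the scrape target and config path are placeholders.

```python
# Minimal sketch: pushing a config change to a running Prometheus server
# without a restart. Assumes Prometheus was started with
# --web.enable-lifecycle and listens on the default port 9090.
import requests

PROMETHEUS_CONFIG = """\
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["10.0.0.5:9100"]   # placeholder Node Exporter target
"""

# Write the updated config where the server expects it (path may differ).
with open("/etc/prometheus/prometheus.yml", "w") as f:
    f.write(PROMETHEUS_CONFIG)

# Ask Prometheus to reload its configuration in place, with no downtime.
resp = requests.post("http://localhost:9090/-/reload", timeout=10)
resp.raise_for_status()
print("Prometheus config reloaded")
```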

Learn more in our latest blog.


Deploying Mistral Large 3 on Hyperstack: A Step-by-Step Guide

Mistral has released Mistral Large 3, its most powerful multimodal model with frontier-level performance across reasoning, vision and agentic workflows. Whether you're building advanced assistants, enterprise automation or large-scale multimodal applications, Mistral Large 3 delivers the reliability and capability you need. Explore our full tutorial below to deploy and run Mistral Large 3 with ease.

Learn more in our latest tutorial.


How to Integrate Hyperstack AI Studio with Claude Code: A Beginner's Guide

To power Claude Code with custom models, we need a robust, flexible AI backend that can serve models, support fine-tuning, and provide scalable inference. This is where Hyperstack AI Studio becomes crucial. Hyperstack provides a complete generative AI platform where developers can host, fine-tune, benchmark, and deploy LLMs with OpenAI-compatible APIs, making it fully compatible with tools like Claude Code.

Learn more in our latest tutorial.



 

Your Ideas Power Hyperstack

You know your workflow better than anyone. If there’s anything you wish Hyperstack did differently or better, now’s your chance to tell us.

Maybe it’s a feature you’ve been thinking about, a tool that could speed up your workflow, or a simple improvement that would make your project easier. Whatever it is, we’re listening.

Share Feature Request →


 

For any questions or suggestions, feel free to reach out at support@hyperstack.cloud. Stay tuned for even more updates and exciting tools next month.
