


November 24, 2025 · 5 min read · Updated on 18 Feb 2026

Hyperstack Weekly Rundown 47

Written by Damanpreet Kaur Vohra, Technical Copywriter, NexGen Cloud

Summary

We’re back with your weekly dose of Hyperstack updates!

This week comes with some seriously useful upgrades you’ll want to try. Whether you’re spinning up VMs, fine-tuning models or experimenting in AI Studio, these updates are here to make your workflow smoother and your builds faster.

Take a minute to explore what’s new and see how far you can push your next project on Hyperstack.

 

New on AI Studio

Here’s what’s new this week on AI Studio, our full-stack Gen AI platform:

Import LoRA Adapters Directly From Hugging Face

No more manual downloads or messy workflows. You can now import external LoRA adapters from Hugging Face straight into AI Studio and plug them into supported base models. Use them instantly in the AI Studio Playground or deploy them via API for inference. Learn more about importing LoRA adapters here.
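Serving an imported adapter via the API looks like any other inference call. Below is a minimal sketch assuming an OpenAI-style chat-completions endpoint; the base URL and model name are placeholders for illustration, not confirmed values from this post, so check the AI Studio API docs for the real ones.

```python
import json
import os
import urllib.request

# Placeholder base URL -- consult the AI Studio API reference for the real endpoint.
API_BASE = "https://console.hyperstack.cloud/ai/api/v1"

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a
    (hypothetically named) LoRA-adapted model."""
    body = json.dumps({
        "model": model,  # e.g. the name your imported adapter is served under
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('AI_STUDIO_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = chat_request("my-base-model-with-lora", "Summarise LoRA in one line.")
# To actually send it: urllib.request.urlopen(req)
```

The request is only built here, not sent, so you can inspect the payload before wiring in real credentials.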

Sample Datasets Now in the UI

No more hunting for starter data. You’ll now find a curated sample dataset directly inside the AI Studio interface, perfect for quick experiments, testing or getting hands-on without setup friction.

Export Your Fine-Tuned Models

You can now export any fine-tuned model you create in AI Studio and use it outside the platform, giving you more control and flexibility in your ML workflows.

Haven't tried AI Studio yet? Give Hyperstack AI Studio a spin and see how simple and fast AI building can be.

Try AI Studio →

New on Hyperstack

Here’s what’s new on Hyperstack this week:

Public IP Behaviour Change During Hibernation

You now have the option to retain your VM’s public IP address during hibernation. By default, the public IP is now automatically released to help reduce idle resource costs. Learn how to hibernate a Virtual Machine using the UI.

Latest Fixes and Improvements

A new retain_ip parameter has been added to the Hibernate VM API, so you can programmatically decide whether your VM's public IP stays attached during hibernation.
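As a minimal sketch of what that looks like from code: the endpoint URL, header name and payload shape below are assumptions for illustration, so verify them against the Hyperstack API reference before use.

```python
import json
import urllib.request

def hibernate_request(vm_id: int, api_key: str, retain_ip: bool = False) -> urllib.request.Request:
    """Build a Hibernate VM call that keeps (or releases) the VM's public IP.
    URL and auth header are illustrative guesses, not confirmed values."""
    body = json.dumps({"retain_ip": retain_ip}).encode()
    return urllib.request.Request(
        f"https://infrahub-api.nexgencloud.com/v1/core/virtual-machines/{vm_id}/hibernate",
        data=body,
        headers={"api_key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# Hibernate VM 12345 but keep its public IP attached.
req = hibernate_request(12345, "YOUR_API_KEY", retain_ip=True)
# To actually send it: urllib.request.urlopen(req)
```

By default the sketch mirrors the new platform behaviour (`retain_ip=False`, IP released to cut idle costs); pass `retain_ip=True` to opt out.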

New on our Blog

Check out the latest tutorials on Hyperstack:

Integrate Hyperstack AI Studio with RooCode for Next-Gen Coding Support: A Step-by-Step Guide

Modern developers are increasingly turning to AI-powered coding environments to accelerate development and improve code quality. One of the most exciting entrants in this space is Roo Code, a powerful AI-driven coding assistant that works directly inside Visual Studio Code (VS Code). In this guide, we walk through how to integrate Hyperstack AI Studio with Roo Code to supercharge your development workflow.

Check out the full tutorial below!


Integrate Hyperstack AI Studio with Zed Code Editor for Powerful Coding Agents: A Step-by-Step Guide

AI coding tools have evolved into intelligent environments that support code understanding, refactoring, and reasoning about complex systems. For developers who value performance and advanced AI-driven workflows, pairing Zed Editor with Hyperstack AI Studio delivers a powerful solution. This guide covers what sets Zed Editor apart, how Hyperstack AI Studio elevates its AI integration, and provides a step-by-step walkthrough for seamless setup.

Check out the full tutorial below!


Integrate Hyperstack AI Studio as a Provider in LiteLLM: A Step-by-Step Guide

With the rapid evolution of AI-driven development tools, integrating large language models (LLMs) into software systems has become increasingly accessible and modular. Developers are no longer restricted to a single provider; they can build hybrid AI systems by combining inference backends, model management tools and application-layer SDKs. Two tools that make this process seamless are Hyperstack AI Studio and LiteLLM. In this guide, we provide detailed steps to integrate Hyperstack AI Studio as a provider in LiteLLM.
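The gist of the integration: LiteLLM's generic `openai/` provider prefix can route requests to any OpenAI-compatible endpoint, which is how AI Studio can be wired in. A minimal sketch, assuming a placeholder base URL and model name (the tutorial has the real values):

```python
import os

def aistudio_completion_kwargs(model: str, prompt: str) -> dict:
    """Build keyword arguments for litellm.completion().
    The api_base here is a placeholder, not a confirmed endpoint."""
    return {
        # The "openai/" prefix tells LiteLLM to use its generic
        # OpenAI-compatible client against the given api_base.
        "model": f"openai/{model}",
        "api_base": "https://console.hyperstack.cloud/ai/api/v1",  # placeholder
        "api_key": os.environ.get("AI_STUDIO_API_KEY", ""),
        "messages": [{"role": "user", "content": prompt}],
    }

kwargs = aistudio_completion_kwargs("my-fine-tuned-model", "Hello!")
# With `pip install litellm`: litellm.completion(**kwargs)
```

Building the kwargs separately keeps the sketch runnable without LiteLLM installed; the commented `litellm.completion(**kwargs)` call is the one the tutorial covers in full.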

Check out the full tutorial below!


We’ve got more tutorials coming next week, so stay tuned.

Have an idea you'd like to see in Hyperstack? Let’s bring it to life.

At Hyperstack, we’re committed to continuous improvement and your ideas are a key driver of our innovation.

→ Is there a feature you’ve been waiting for?
→ Something that could speed up your workday?
→ Or a tweak that would make things feel effortless?

Tell us what would make your Hyperstack experience even better. Your feedback sets the direction for what we build next.

Share Feature Request


 

That's it for this week's Hyperstack Rundown! Stay tuned for more updates next week and subscribe to our newsletter below for exclusive AI and GPU insights delivered to your inbox!

Missed the Previous Editions? 

Catch up on everything you need to know from Hyperstack Weekly below:

👉 Hyperstack Weekly Rundown #45

👉 Hyperstack Weekly Rundown #46

Subscribe to Hyperstack!

Enter your email to get updates to your inbox every week

