

November 24, 2025

5 min read

Updated on 18 Feb 2026

Hyperstack Weekly Rundown 47

Written by

Damanpreet Kaur Vohra

Technical Copywriter, NexGen Cloud

Summary

We’re back with your weekly dose of Hyperstack updates!

This week comes with some seriously useful upgrades you’ll want to try. Whether you’re spinning up VMs, fine-tuning models or experimenting in AI Studio, these updates are here to make your workflow smoother and your builds faster.

Take a minute to explore what’s new and see how far you can push your next project on Hyperstack.

 

New on AI Studio

Here’s what’s new this week on AI Studio, our full-stack Gen AI platform:

Import LoRA Adapters Directly From Hugging Face

No more manual downloads or messy workflows. You can now import external LoRA adapters from Hugging Face straight into AI Studio and plug them into supported base models. Use them instantly in the AI Studio Playground or deploy them via API for inference. Learn more about importing LoRA adapters here.

Sample Datasets Now in the UI

No more hunting for starter data. You’ll now find a curated sample dataset directly inside the AI Studio interface, perfect for quick experiments, testing or getting hands-on without setup friction.

Export Your Fine-Tuned Models

You can now export any fine-tuned model you create in AI Studio for external use, giving you more control and flexibility in your ML workflows.

Haven't tried AI Studio yet? Give Hyperstack AI Studio a spin and see how simple and fast AI building can be.

Try AI Studio →

New on Hyperstack

Here’s what’s new on Hyperstack this week:

Public IP Behaviour Change During Hibernation

By default, the public IP is now automatically released during hibernation to help reduce idle resource costs, but you can choose to retain your VM’s public IP instead. Learn how to hibernate a Virtual Machine using the UI.

Latest Fixes and Improvements

A new retain_ip parameter has been added to the Hibernate VM API, so you can programmatically decide whether your VM's public IP stays attached during hibernation.
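As a rough sketch of how the new parameter might be used, the snippet below builds the URL and JSON body for a hibernate call with `retain_ip` set. The base URL, endpoint path and VM ID here are illustrative assumptions, not the exact Hyperstack API schema:

```python
import json

API_BASE = "https://infrahub-api.nexgencloud.com/v1"  # assumed base URL

def hibernate_request(vm_id: int, retain_ip: bool = False):
    """Build the URL and JSON body for a Hibernate VM call.

    retain_ip=True keeps the VM's public IP attached during hibernation;
    the default (False) releases it to cut idle costs. The endpoint path
    below is an assumption for illustration only.
    """
    url = f"{API_BASE}/core/virtual-machines/{vm_id}/hibernate"
    body = json.dumps({"retain_ip": retain_ip})
    return url, body

url, body = hibernate_request(12345, retain_ip=True)
print(body)  # {"retain_ip": true}
```

Sending the request (with your API key in the headers) then hibernates the VM while keeping its public IP attached.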

New on our Blog

Check out the latest tutorials on Hyperstack:

Integrate Hyperstack AI Studio with RooCode for Next-Gen Coding Support: A Step-by-Step Guide

Modern developers are increasingly turning to AI-powered coding environments to accelerate development and improve code quality. One of the most exciting entrants in this space is Roo Code, a powerful AI-driven coding assistant designed to work directly inside Visual Studio Code (VS Code). In this guide, we walk through how to integrate Hyperstack AI Studio with Roo Code to supercharge your development workflow.

Check out the full tutorial below!


Integrate Hyperstack AI Studio with Zed Code Editor for Powerful Coding Agents: A Step-by-Step Guide

AI coding tools have evolved into intelligent environments that support code understanding, refactoring, and reasoning about complex systems. For developers who value performance and advanced AI-driven workflows, pairing Zed Editor with Hyperstack AI Studio delivers a powerful solution. This guide covers what sets Zed Editor apart, how Hyperstack AI Studio elevates its AI integration, and provides a step-by-step walkthrough for seamless setup.

Check out the full tutorial below!


Integrate Hyperstack AI Studio as a Provider in LiteLLM: A Step-by-Step Guide

With the rapid evolution of AI-driven development tools, integrating large language models (LLMs) into software systems has become increasingly accessible and modular. Developers are no longer restricted to a single provider; they can build hybrid AI systems by combining inference backends, model management tools and application-layer SDKs. Two powerful tools that make this process seamless are Hyperstack AI Studio and LiteLLM. In this guide, we provide detailed steps to integrate Hyperstack AI Studio as a provider in LiteLLM.

Check out the full tutorial below!


We’ve got more tutorials coming next week, so stay tuned.

Have an idea you'd like to see in Hyperstack? Let’s bring it to life.

At Hyperstack, we’re committed to continuous improvement and your ideas are a key driver of our innovation.

→ Is there a feature you’ve been waiting for?
→ Something that could speed up your workday?
→ Or a tweak that would make things feel effortless?

Tell us what would make your Hyperstack experience even better. Your feedback sets the direction for what we build next.

Share Feature Request


 

That's it for this week's Hyperstack Rundown! Stay tuned for more updates next week and subscribe to our newsletter below for exclusive AI and GPU insights delivered to your inbox!

Missed the Previous Editions? 

Catch up on everything you need to know from Hyperstack Weekly below:

👉 Hyperstack Weekly Rundown #45

👉 Hyperstack Weekly Rundown #46

Subscribe to Hyperstack!

Enter your email to get updates to your inbox every week


Related content

Stay updated with our latest articles.


Hyperstack March Update: New Features, Improvements and Tutorials


Hyperstack Monthly Update

March brought a range of updates to the Hyperstack platform, with a focus on improving how you interact with your infrastructure. We also released exciting tutorials on Hyperstack MCP, OpenClaw and NemoClaw following their introduction at NVIDIA GTC 2026.

Scroll down to explore the highlights.


New on Hyperstack

Here's what we released on Hyperstack in March:

Hyperstack MCP Server

We introduced the Hyperstack MCP (Model Context Protocol) Server, a new interface layer that allows you to manage infrastructure using natural language.

Instead of relying on manual API calls or navigating multiple dashboards, the MCP Server translates plain English instructions into secure and structured API operations. This makes common tasks like provisioning VMs, managing resources or updating configurations faster and more intuitive.

The MCP Server is compatible with MCP-enabled clients such as Claude Desktop and Open WebUI. This lets you integrate infrastructure control directly into the tools you already use.

  • No need to write or manage raw API calls
  • Faster execution of routine infrastructure tasks
  • Reduced friction in developer workflows

Node Group Firewall Management for Clusters

Firewall rules can now be defined and managed at the node group level within Kubernetes clusters. These rules can be configured directly from the console after cluster deployment and are automatically applied across all worker nodes in the selected node group.

Node Group Firewall API Support

Firewall management for node groups is now fully supported via API using the firewall_ids field across cluster-related endpoints.

In addition, all cluster-related API responses that include the node_groups object now return the associated firewall details for each node group.
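To illustrate the new field, the snippet below builds a node group request body that attaches firewalls via `firewall_ids`. The field names and values are a hedged sketch for illustration, not the exact Hyperstack API schema:

```python
import json

def node_group_payload(name: str, firewall_ids: list[int]) -> str:
    """Sketch: request body attaching firewalls to a Kubernetes node group.

    The rules from every firewall in firewall_ids are applied across all
    worker nodes in the group. Field names are illustrative assumptions.
    """
    return json.dumps({
        "name": name,
        "firewall_ids": firewall_ids,  # IDs of existing firewalls
    })

payload = node_group_payload("gpu-workers", [101, 102])
print(payload)
```

The same `firewall_ids` list can then be inspected in cluster API responses, which now include the attached firewall details per node group.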

Latest Fixes and Improvements

We also made some improvements on Hyperstack to make your experience even better:

Kubernetes Cluster Deployment Defaults

The cluster deployment experience has been improved by automatically selecting the Kubernetes version and setting Full Deployment Mode as the default option.

User Role Management Improvements

The Create and Edit User Role experience has been enhanced with:

  • A new resource-based permission filter
  • Clear visibility of permissions by resource group with selection counts
  • Reduced horizontal scrolling and improved layout for better usability

Improved Resource Availability Error Messaging

VM and Kubernetes cluster creation failures due to temporary capacity shortages now display a clearer message, indicating resource unavailability and suggesting a retry.

VM Hibernation Request Handling Improvements

Improved lifecycle handling ensures instance states remain consistent during operations like hibernation, preventing duplicate requests while an operation is already in progress.

Firewall Attachment Improvements

Enhanced reliability and usability when attaching virtual machines to firewalls, with more consistent handling of VM states and firewall updates.

New on the Blog

Check out exciting blogs and tutorials on Hyperstack this month:

Deploying NVIDIA's NemoClaw on Hyperstack: Step-by-Step Guide

In our latest tutorial, we cover what NemoClaw is, how it works as NVIDIA's open-source security stack for OpenClaw, and walk you through the full deployment process on Hyperstack GPUs including setup, configuration and verification steps.


Securing OpenClaw on Hyperstack for Safe AI Agent Deployment: A Comprehensive Guide

In our latest tutorial, we cover how to securely deploy OpenClaw AI agents on Hyperstack, including network isolation, access controls, environment hardening and safety configurations to ensure your agentic workloads run reliably and safely on cloud GPUs.


Why Move Your AI Workloads from Public Cloud to Private Cloud: 5 Important Signs

In our latest blog, we walk through five concrete signs that public cloud is holding your AI operations back, including unpredictable billing, data sovereignty risks, limited GPU availability, compliance challenges and performance ceilings that private cloud can help you overcome.


Manage Cloud Infrastructure with Open WebUI: Using the Hyperstack MCP Server

In our latest tutorial, we show you how to connect the Hyperstack MCP Server to Open WebUI, allowing you to provision VMs, manage GPU resources and control your cloud infrastructure through natural language conversations with AI assistants like Claude Desktop.



NVIDIA GTC 2026: Conversations on Secure AI Infrastructure


Our team attended the NVIDIA GTC 2026 in March and the conversations we had there reflected something a lot of teams are thinking about right now: how to build infrastructure for production AI the right way.

We had discussions around our Secure Private Cloud and why isolation, compliance and control need to be built in from day one rather than added on later. If you're deploying AI that needs dedicated infrastructure with full visibility and governance, book a meeting with the Hyperstack team.

Your Ideas Power Hyperstack

You know your workflow better than anyone. If there’s anything you wish Hyperstack did differently or better, now’s your chance to tell us.

Maybe it’s a feature you’ve been thinking about, a tool that could speed up your workflow, or a simple improvement that would make your project easier. Whatever it is, we’re listening.

Share Feature Request →


 

For any questions or suggestions, feel free to reach out at support@hyperstack.cloud. Stay tuned for even more updates and exciting tools next month.

Damanpreet Kaur Vohra

2 Apr 2026

Hyperstack Weekly Rundown 53


Welcome to Hyperstack Weekly Rundown

This week on Hyperstack, we are introducing a new way to interact with infrastructure. With the launch of the Hyperstack MCP Server, you can now manage cloud resources using natural language. We’re also sharing exciting tutorials, from running Qwen 3.5 on Hyperstack to a benchmark exploring how KV cache compression impacts inference performance.

Take a few minutes to catch up on what’s new and what you can try next on Hyperstack!


New on Hyperstack

Check out what we released on Hyperstack this week:

Hyperstack MCP Server

We’re excited to introduce the Hyperstack MCP (Model Context Protocol) Server. It is a new way to manage your infrastructure using natural language. With MCP support, users can interact with their cloud resources using natural language through compatible AI clients like Claude Desktop and Open WebUI.

The MCP Server translates plain-English instructions into secure, authenticated Hyperstack API actions. This means you can create, manage and monitor infrastructure without manually writing API calls.

Once connected, you can perform tasks such as:

  • Creating and managing Virtual Machines
  • Provisioning and scaling Kubernetes clusters
  • Creating and attaching storage volumes
  • Retrieving billing and usage information
  • Managing environments
  • Executing multi-step infrastructure workflows
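MCP-enabled clients are typically pointed at a server through a small JSON configuration file (for Claude Desktop, `claude_desktop_config.json`). The sketch below shows the general shape of such an entry; the command, arguments and environment variable name are hypothetical placeholders, not the actual Hyperstack MCP launch command:

```json
{
  "mcpServers": {
    "hyperstack": {
      "command": "npx",
      "args": ["-y", "hyperstack-mcp-server"],
      "env": { "HYPERSTACK_API_KEY": "YOUR_API_KEY" }
    }
  }
}
```

Once the client restarts with this entry in place, the server's tools become available in conversation, so a prompt like "create a VM in my default environment" can be translated into authenticated API calls.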

New on our Blog

Check out the latest blogs on Hyperstack:

Optimising Long-Context LLMs with KVPress Compression on Hyperstack

KV cache size is becoming a major bottleneck for LLM inference speed and memory. In this benchmark on Hyperstack’s H100 infrastructure, we compare KnormPress and NVIDIA’s DMS to see how much KV cache can be compressed without impacting reasoning performance on the Qwen-3-8B model.

Read the full benchmark →


How to Deploy Qwen 3.5 on Hyperstack: A Step-by-Step Guide

Qwen 3.5 is a powerful open-weight AI model built for advanced assistant workflows across text, code, images, and video. In this guide, we show how to run Qwen 3.5 on Hyperstack infrastructure to get high-performance inference for large, multimodal workloads.

Read the full guide →


UI, API and Now MCP: A New Way to Interact with Your GPU Cloud

For decades, we’ve interacted with software through UIs or APIs. But Model Context Protocol (MCP) introduces a third way, letting AI clients interact with systems through natural language. This blog explains what MCP is, how it works and why it’s changing the way we build and use software.

Read the full article →


Help Shape the Future of Hyperstack

Great products are built with the people who use them. If there’s something you would like to see on Hyperstack, whether it is a new feature, workflow improvement or integration that would make your work easier, we would love to hear about it.

Your feedback helps us prioritise what matters most and build a platform that works better for the community.

Share Feature Request


 

That's it for this week's Hyperstack Rundown! Stay tuned for more updates next week and subscribe to our newsletter below for exclusive AI and GPU insights delivered to your inbox!

Missed the Previous Editions? 

Catch up on everything you need to know from Hyperstack updates below:

👉 Hyperstack February Update

Damanpreet Kaur Vohra

6 Mar 2026

Hyperstack February Update: New Features, Improvements and Blogs


Hyperstack Monthly Update

It’s time for our February monthly update.

February at Hyperstack was focused on making the platform more resilient and predictable. We introduced better safeguards for Kubernetes environments, improved VM deployment reliability and enhanced networking defaults.

Scroll down to see what we shipped this month!


What’s New on Hyperstack

Here's what we released on Hyperstack in February:

Firewall Warnings for Kubernetes VMs

We’ve added a warning banner to the Firewalls tab for VMs in a Kubernetes cluster. This banner alerts users before making firewall configuration changes that could disrupt:

  • Cluster networking
  • Reconciliation processes

Image-Flavour Compatibility Support

Hyperstack now supports detecting compatibility between selected images and VM flavours. While no images currently define restrictions, this enables:

  • Future warnings for suboptimal image selections
  • Restrictions on incompatible configurations

Latest Fixes and Improvements

  • Security Enhancements: Platform security has been further strengthened with internal improvements that increase protection and provide safer, more reliable system interactions for users and their workloads.
  • Improved Usability and Error Messaging: User experience has been improved with clearer error messages and more intuitive system feedback, making it easier to understand issues and resolve them quickly.
  • New “No-Reboot” Flavour Label: Applicable flavours now include a “no-reboot” label, helping users easily identify configurations that support operations without requiring instance reboots—reducing downtime and maintaining workload continuity. 
  • Reliable VM Creation with Bootable Volumes via API: VM creation using the Create Virtual Machines API with create_bootable_volume = true now waits for the volume to become available before proceeding, improving provisioning reliability.
  • Firewall Management Fixes: Resolved issues with firewall assignment and detachment workflows. The VM list in the firewall assignment modal now correctly shows only VMs in the same environment as the selected firewall, and the “Save changes” button now functions as expected when removing all firewall attachments from a VM.
  • Improved Redirect After Session Expiry: Users are now correctly returned to the page they were on after signing in again, even if their access token was missing or had expired during refresh.
  • Fixed TLS SAN Mismatch in Standard Kubernetes Deployments: The public IP address of the master node serving as both bastion and API endpoint (in Standard configuration deployments) is now included in the Kubernetes API server certificate’s Subject Alternative Names (SANs), resolving TLS handshake failures caused by hostname mismatches.
  • New Default Networking Mode for Kubernetes Clusters: New Kubernetes clusters now use VXLAN encapsulation (UDP port 4789) for pod networking instead of IPIP, allowing firewall configuration without disrupting connectivity. Existing clusters remain on IPIP.
  • Environment VM Limit Notification: Users are now notified during VM or cluster deployment if the selected environment has reached its maximum VM capacity, with guidance to deploy in a different environment.
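The VXLAN default above means node-to-node pod traffic can be allowed with a single rule opening UDP port 4789, rather than the trickier IP-in-IP protocol handling. The rule shape below is a hedged sketch with illustrative field names, not the exact Hyperstack firewall schema:

```python
# Sketch: a firewall rule permitting VXLAN pod networking (UDP 4789)
# between worker nodes. remote_ip_prefix is a placeholder node subnet.
vxlan_rule = {
    "direction": "ingress",
    "protocol": "udp",
    "port_range_min": 4789,  # IANA-assigned VXLAN port
    "port_range_max": 4789,
    "remote_ip_prefix": "10.0.0.0/24",
}
print(vxlan_rule["protocol"], vxlan_rule["port_range_min"])  # udp 4789
```

Because the encapsulated traffic is plain UDP, standard firewall tooling can match it without any special-casing, which is what allows firewall configuration without disrupting cluster connectivity.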

New on the Blog

Check out exciting blogs on Hyperstack this month:

How to Deploy Ollama on Hyperstack: A Quick Setup Guide

Our Ollama setup guide shows you how to deploy Ollama on Hyperstack so you can quickly run LLMs on GPU-powered cloud infrastructure. Ollama is ideal for fast experimentation and local-style model testing while Hyperstack provides on-demand GPUs for reliable performance. Follow the steps in the tutorial to launch a working Ollama setup in minutes and start running models with minimal configuration.

Learn more in our latest blog.


5 Hidden Costs of Picking the Wrong GPU Cloud Provider: What You Need to Know

Teams building AI, LLMs, GenAI, ML pipelines and HPC workloads face pressure to ship faster, control costs and still meet security, compliance and performance requirements. Many GPU cloud providers fall short, and choosing the wrong one can quietly drain your budget. The real cost rarely appears on the pricing page; it shows up later as delays, high bills, weak security and frustrated teams. In this blog, we break down five hidden costs of the wrong GPU cloud platform and what modern GPU infrastructure must deliver to run AI at scale.

Learn more in our latest blog.


How to Choose the Right Generative AI Platform for Your Projects

Most Generative AI projects don’t fail because the model underperforms. They fail because the platform you deploy on cannot support the full Gen AI lifecycle. You may have a strong idea and a powerful model but they are not enough. What makes it a success is how fast that idea can move from prototype to production. If each step requires a different tool or custom infrastructure, speed drops. This is why you need a single platform that offers it all. In this blog, we help you understand the factors important to choosing the right generative AI platform for your projects.

Learn more in our latest blog.



Your Ideas Power Hyperstack

You know your workflow better than anyone. If there’s anything you wish Hyperstack did differently or better, now’s your chance to tell us.

Maybe it’s a feature you’ve been thinking about, a tool that could speed up your workflow, or a simple improvement that would make your project easier. Whatever it is, we’re listening.

Share Feature Request →


 

For any questions or suggestions, feel free to reach out at support@hyperstack.cloud. Stay tuned for even more updates and exciting tools next month.

Damanpreet Kaur Vohra

5 Mar 2026