<img alt="" src="https://secure.insightful-enterprise-intelligence.com/783141.png" style="display:none;">
Reserve here

NVIDIA H100 SXMs On-Demand at $2.40/hour - Reserve from just $1.90/hour. Reserve here

Reserve here

Deploy 8 to 16,384 NVIDIA H100 SXM GPUs on the AI Supercloud. Learn More

alert

We’ve been made aware of a fraudulent website impersonating Hyperstack at hyperstack.my.
This domain is not affiliated with Hyperstack or NexGen Cloud.

If you’ve been approached or interacted with this site, please contact our team immediately at support@hyperstack.cloud.

close
|

Updated on 20 Jan 2026

Hyperstack Weekly Rundown 51


Welcome to Hyperstack Weekly Rundown

Another week and more updates to play with.

In this edition of the Hyperstack Weekly Rundown, you’ll find new AI Studio capabilities, Kubernetes improvements, large root-disk flavours and the latest blogs to keep you ahead of what’s coming next.

Scroll on as there’s plenty to explore.


New on Hyperstack

Check out what's new on Hyperstack this week:

API-Only Large Root Disk Flavours

We’ve introduced Large Root Disk flavours, available via API only. These flavours remove ephemeral storage and reallocate that capacity to a single, larger persistent root disk, making them ideal for container-heavy workloads, inference jobs and short-lived ML runs.

Note: snapshotting, hibernation and boot-from-volume are not supported on these flavours.
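For illustration, here is a minimal sketch of how a VM request using one of these flavours might look from Python. The base URL, environment, image and flavour names below are placeholders rather than confirmed values; check the Hyperstack API documentation and your account's flavour list for the exact fields and names.

```python
import os
import requests

# Illustrative only: endpoint path, environment, image and flavour names are
# assumptions -- consult the Hyperstack API reference for the exact values.
API_BASE = "https://infrahub-api.nexgencloud.com/v1"   # assumed base URL
API_KEY = os.environ["HYPERSTACK_API_KEY"]              # your API key

payload = {
    "name": "inference-worker-01",
    "environment_name": "default-CANADA-1",             # hypothetical environment
    "image_name": "Ubuntu Server 22.04 LTS R535 CUDA 12.2",
    "flavor_name": "n3-A100x1-lrd",                      # hypothetical large-root-disk flavour
    "count": 1,
    "assign_floating_ip": True,
}

resp = requests.post(
    f"{API_BASE}/core/virtual-machines",
    headers={"api_key": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```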

Kubernetes Ingress Capacity Upgrade

We have increased the HAProxy maximum connection limit on Kubernetes ingress load balancer nodes from 250 to 2,000 and applied supporting configuration changes to handle significantly higher ingress concurrency.

This ensures better stability and smoother traffic handling for Kubernetes workloads running under high load.
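If you want to sanity-check the extra headroom against your own ingress, a quick standard-library concurrency probe like the sketch below will open many parallel connections and report how many succeed. The URL and numbers are placeholders, not values from Hyperstack.

```python
import concurrent.futures
import urllib.request

# Placeholder: replace with your own Kubernetes ingress endpoint.
INGRESS_URL = "https://my-app.example.com/healthz"
CONCURRENCY = 500            # parallel workers, well below the new 2,000 ceiling
REQUESTS_TOTAL = 2000        # total requests to send

def probe(_: int) -> int:
    """Open a fresh connection to the ingress and return the HTTP status code."""
    with urllib.request.urlopen(INGRESS_URL, timeout=10) as resp:
        return resp.status

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    statuses = list(pool.map(probe, range(REQUESTS_TOTAL)))

ok = sum(1 for s in statuses if s == 200)
print(f"{ok}/{REQUESTS_TOTAL} requests returned 200")
```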

Try Hyperstack On-Demand Kubernetes Today →

New on AI Studio

Check out what's new on Hyperstack AI Studio this week:

Third-Party Hosted Models

Selected third-party models are now available for inference-only use and can be accessed straight from the Playground, so you can experiment, test and compare popular models without leaving the platform.

Check out the available third-party models here.


Persistent Playground Chat

The AI Studio Playground now supports persistent chat sessions, retaining conversation history, selected model, prompts and parameter settings across page refreshes and logins.

Try Models in the Playground →

Latest Fixes and Improvements

We’ve also rolled out several fixes and improvements to make your day-to-day experience better on Hyperstack:

  • Kubernetes CSI Driver RBAC Policy: Added policy:KubernetesCSIDriver to grant users the permissions required for proper CSI driver functionality.
  • Cluster and Node Naming Updates: New cluster names are now limited to 20 characters. Node VM names no longer include a kube- prefix and use a shorter format based on the cluster name, node role, and count for improved clarity and consistency.
  • Image Size Display Update: The display_size field in the List Images API now uses IEC units (e.g. GiB), replacing the previous SI-based (GB) formatting for a more accurate byte conversion; see the sketch below for the difference.
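The unit change matters because an SI gigabyte (10^9 bytes) and an IEC gibibyte (2^30 bytes) differ by roughly 7%. A small sketch of the two conversions, with illustrative function names that are not part of the API:

```python
def si_gb(size_bytes: int) -> str:
    """Previous SI-style display: 1 GB = 10**9 bytes."""
    return f"{size_bytes / 10**9:.2f} GB"

def iec_gib(size_bytes: int) -> str:
    """New IEC-style display: 1 GiB = 2**30 bytes."""
    return f"{size_bytes / 2**30:.2f} GiB"

raw = 32_212_254_720   # example image size in bytes (exactly 30 GiB)
print(si_gb(raw))      # 32.21 GB  (old display_size formatting)
print(iec_gib(raw))    # 30.00 GiB (new display_size formatting)
```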

New on our Blog

Check out the latest tutorials on Hyperstack:

5 Kubernetes Use Cases That Will Define Cloud-Native Workloads in 2026

Every few years, cloud-native hits a turning point. New workloads suddenly demand more scale, more automation and more resilience than our existing tools can handle, and AI is pushing that shift hard. Teams aren’t asking “Should we adopt Kubernetes?” anymore, but “How far can Kubernetes take us?” Whether you’re building AI pipelines, running multi-region microservices or scaling GPU-heavy training jobs, Kubernetes is a natural fit for modern cloud-native workloads.

Check out the full blog below!


PyTorch vs TensorFlow in 2026: Which Framework to Choose?

Choosing the right deep learning framework can directly impact how fast you build, train and deploy AI models. Both frameworks support popular AI technologies like generative AI and enterprise-scale machine learning systems. While they often achieve similar results, the way you work with them and scale with them can feel very different. Our latest blog breaks down PyTorch vs TensorFlow, helping you decide which framework fits your goals, workflow and production needs in today’s fast-moving AI landscape.

Check out the full blog below!


LLMs vs SLMs: Your Guide to Choosing the Right Model for AI Workloads

If you’re unsure when your workload needs an LLM or an SLM, the answer depends on what you’re optimising for. LLMs offer better reasoning and generalisation, while SLMs deliver faster inference and lower operational costs. Most teams end up using both, just for different parts of their pipeline. In this blog, you’ll get a clear breakdown of LLMs vs SLMs and GPU recommendations for deployment.

Check out the full blog below!


Popular Open-Source Text-to-Speech Models in 2026: All You Need to Know

You’re building something intelligent, something that thinks. But then you realise… it should speak too, not robotic but truly human-like. A product with a voice that connects, guides and responds across languages, platforms and users. With open-source TTS models, you can run, fine-tune and deploy your way. No lock-ins, just flexibility, performance and innovation. Our latest blog walks you through the popular open-source text-to-speech models and how to choose the right one for your stack.

Check out the full blog below!


Hear What Our Users Say

We could say our users are happy but it’s better coming from them. Take a look below.


Share a Testimonial →

Your Ideas Power Hyperstack

You know your workflow better than anyone. If there’s anything you wish Hyperstack did differently or better, now’s your chance to tell us.

Maybe it’s a feature you’ve been thinking about, a tool that could speed up your workflow, or a simple improvement that would make your project easier. Whatever it is, we’re listening.

Share Feature Request


 

That's it for this week's Hyperstack Rundown! Stay tuned for more updates next week and subscribe to our newsletter below for exclusive AI and GPU insights delivered to your inbox!

Missed the Previous Editions? 

Catch up on everything you need to know from Hyperstack Weekly below:

👉 Hyperstack Weekly Rundown #50

Subscribe to Hyperstack!

Enter your email to get updates to your inbox every week

