We’re back with your weekly dose of Hyperstack updates!
Grab your coffee and stretch out that scroll finger: here's what's new this week. From Kubernetes enhancements to new tutorials and blogs, there's plenty to explore.
Let’s jump in!
What’s New on Hyperstack
Check out what's new on Hyperstack this week:
Virtual Machine Console Logs
You can now access VM console logs directly through the UI or API. Whether you’re troubleshooting or monitoring performance, this makes it faster and easier to get the insights you need without extra steps.
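As a rough sketch of what fetching console logs over the API could look like: note that the endpoint path, the `api_key` header, the `length` parameter and the `logs` response field below are illustrative assumptions based on common REST conventions, not confirmed Hyperstack API details, so check the official API docs for the real schema.

```python
API_BASE = "https://infrahub-api.nexgencloud.com/v1"  # assumed base URL

def console_log_request(vm_id: int, length: int = 100):
    """Build the URL, headers and query params for a console-log fetch.

    All names here are hypothetical placeholders for illustration.
    """
    return (
        f"{API_BASE}/core/virtual-machines/{vm_id}/console-logs",
        {"api_key": "YOUR_API_KEY"},  # placeholder credential header
        {"length": length},           # how many recent log lines to return
    )

# Usage with requests (network call; run where you have real credentials):
# import requests
# url, headers, params = console_log_request(1234)
# print(requests.get(url, headers=headers, params=params).json())
```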
New in Kubernetes
We’ve rolled out major enhancements to make your Kubernetes experience more powerful, flexible and easier to manage.
- Smarter Cluster Management: Enjoy support for multiple deployment modes, clusters with different worker node flavours via node groups, scalable master nodes and upgraded cluster APIs.
- Deployment Modes Your Way: Choose between Default Deployment (with bastion host + public load balancer) or Standard Deployment (minimal setup with master + worker nodes).
- Node Groups for Flexibility: Assign different worker node flavours to specific groups within the same cluster for optimised performance.
- Node Group APIs: New endpoints let you create, retrieve and delete node groups with ease.
- High Availability by Default: All clusters are now provisioned with multiple master nodes to improve availability. Plus, you can scale the number of master nodes in a cluster.
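To make the new cluster options concrete, here is a minimal sketch of assembling a create-cluster request body with a deployment mode, multiple masters and per-group worker flavours. The field names (`deployment_mode`, `master_count`, `node_groups`, `flavor`) and the flavour strings are assumptions for illustration, not the documented payload schema.

```python
def cluster_payload(name, deployment_mode="standard", master_count=3, node_groups=None):
    """Assemble a hypothetical create-cluster request body.

    Field names are illustrative assumptions, not the real API schema.
    """
    return {
        "name": name,
        "deployment_mode": deployment_mode,  # "default" (bastion + LB) or "standard"
        "master_count": master_count,        # HA: multiple, scalable master nodes
        "node_groups": node_groups or [],    # each group gets its own worker flavour
    }

payload = cluster_payload(
    "training-cluster",
    node_groups=[
        {"name": "gpu-workers", "flavor": "example-gpu-flavour", "count": 2},
        {"name": "cpu-workers", "flavor": "example-cpu-flavour", "count": 3},
    ],
)
```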
Move Volumes Across Environments
Need to transfer volumes within the same region? Now you can move them seamlessly via both UI and API.
Volume Attachment Protection
Protect your workloads from accidental disruptions. New API options let you prevent a volume from being detached from a VM, whether set when the volume is attached or at any point afterwards.
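A small sketch of the two places such a protection flag could be set, at attach time or on an existing attachment. The `protected` flag name and body shapes are hypothetical, used only to illustrate the idea; consult the API reference for the actual fields.

```python
def attach_body(volume_id: int, protected: bool = True) -> dict:
    """Request body for attaching a volume with protection enabled up front.

    'protected' is an assumed flag name, not the confirmed API field.
    """
    return {"volume_id": volume_id, "protected": protected}

def protection_update(protected: bool) -> dict:
    """PATCH-style body for toggling protection on an existing attachment."""
    return {"protected": protected}
```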
Fixes and Improvements
We’ve also made several improvements to Hyperstack this week:
- Volumes API Improvements: Now shows an array of attachments with details of all associated VM connections.
- Create Cluster API Enhancements: Supports deployment mode, multi-master setups and node groups right from payload fields.
- Updated Cluster Schema: Cluster API response objects now provide detailed node_groups and nodes data.
- Faster Environment Creation: Region field now auto-fills when creating environments during VM deployment, reducing errors and saving time.
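The Volumes API change above can be sketched as follows: a volume object now carries an array of attachments, which client code can walk to find every connected VM. The exact field names (`attachments`, `vm_id`, `device`) are assumptions for illustration.

```python
def vms_using_volume(volume: dict) -> list:
    """Return the IDs of all VMs a volume is attached to.

    Assumes a hypothetical 'attachments' array with 'vm_id' entries.
    """
    return [a["vm_id"] for a in volume.get("attachments", [])]

# Example response shape (illustrative, not the documented schema):
volume = {
    "id": 17,
    "name": "data-vol",
    "attachments": [
        {"vm_id": 101, "device": "/dev/vdb"},
        {"vm_id": 102, "device": "/dev/vdc"},
    ],
}
```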
Reserve NVIDIA RTX A6000 GPUs
Running long training cycles or large-scale projects? Don’t risk downtime when demand spikes. Secure dedicated NVIDIA RTX A6000 GPUs on Hyperstack for only $0.35/hr and get priority support with your reservation.
- Guaranteed performance, no market delays
- Faster deployment with dedicated capacity
- Priority support to keep your workloads moving
Your team deserves performance you can count on. Why wait for capacity when you can reserve it now?
Talk to Our Team to Reserve NVIDIA RTX A6000 GPUs →
New on our Blog
Check out our latest blog on Hyperstack:
Fine-Tune for Less than $1: A Step-by-Step Guide
Think fine-tuning is complicated or expensive? On AI Studio, you can fine-tune a base model for less than $1* and have a custom model ready to deploy in just a few minutes. By uploading your own dataset and following a few simple steps, you can create a model that delivers outputs for your specific needs. In our latest tutorial, we fine-tune a base model with a domain-specific dataset to build a custom model that delivers more relevant responses in Playground testing.
Check out the full tutorial below!
What is Serverless Inference: And Why AI Teams Are Making the Switch
Deploying AI models shouldn’t feel like building infrastructure from scratch every time. But for many AI teams, that’s exactly what happens. If you're working with open-source models, you already know the importance of efficient deployment. Delays in allocating GPUs, scaling endpoints or maintaining runtime environments can slow you down. That’s exactly what serverless inference solves.
Check out the full blog below!
5 Evaluation Metrics That Test Your LLM's Capabilities: And How AI Studio Helps
When you train or fine-tune an LLM, you naturally expect it to perform better, whether on reasoning tasks, problem-solving or domain-specific instructions. But without proper evaluation, those expectations remain assumptions. Evaluation provides fast insight into whether your fine-tuned model has actually improved. More importantly, it allows you to apply custom rules or standards that reflect your domain-specific requirements. Instead of just hoping the model works better, you can track, test and verify its performance.
Check out the full blog below!
Got an Idea in Mind? Let’s Make it Real.
At Hyperstack, we’re always pushing to do better and the best ideas often come from you.
→ Is there a feature you’ve been waiting for?
→ Something that could speed up your workday?
→ Or a tweak that would make things feel effortless?
Tell us what would make your Hyperstack experience even better. Your feedback sets the direction for what we build next.
That's it for this week's Hyperstack Rundown! Stay tuned for more updates next week and subscribe to our newsletter below for exclusive AI and GPU insights delivered to your inbox!
Missed the Previous Editions?
Catch up on everything you need to know from Hyperstack Weekly below:
Subscribe to Hyperstack!
Enter your email to get updates to your inbox every week
Get Started
Ready to build the next big thing in AI?