Here's a sneak peek at what's new on Hyperstack:
The Hyperstack Feature Request page is live: you can now submit any idea you have in mind.
Have a new product in mind, an improvement to existing services, or a tool like our GPU selector or DevOps integration? Go to the Hyperstack Feature Request page and share your idea.
Let’s build the future of Hyperstack together.
Launched in July, AI Studio is your all-in-one Gen AI platform to build and deploy open-source LLMs effortlessly. No need to spin up VMs or manage infrastructure. Just bring your dataset and start creating with Gen AI.
Thanks to your input and feedback, we’ve been able to shape platforms like this, making it easier than ever to experiment and build Gen AI workflows.
If you haven’t tried AI Studio yet, check out our documentation for a quick start.
You can now hibernate, take snapshots and boot from volume on all on-demand VMs in our Norway1 region. Plus, volume storage is now available across all VM types in Norway1.
In July, we also introduced the latest Ubuntu-based images. These images include updated CUDA and driver support, making it simpler to kick off your AI/ML workloads.
The high-performance NVIDIA A100 SXM GPU VMs are now live and ready for deployment on Hyperstack. Run your most demanding AI, ML and HPC workloads with on-demand pricing at $1.60/hour or reserve capacity for just $1.36/hour to lock in savings and guaranteed availability for your projects.
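To see what the reserved rate means in practice, here is a quick back-of-the-envelope sketch using the two prices quoted above. The 730-hour month is our illustrative assumption, not a Hyperstack billing detail:

```python
# Compare the A100 SXM rates quoted above: $1.60/hr on-demand vs $1.36/hr reserved.
# The 730-hour month is an assumption for illustration, not Hyperstack billing policy.
ON_DEMAND_RATE = 1.60      # $/hour, on-demand
RESERVED_RATE = 1.36       # $/hour, reserved
HOURS_PER_MONTH = 730      # assumed average hours in a month

monthly_on_demand = ON_DEMAND_RATE * HOURS_PER_MONTH
monthly_reserved = RESERVED_RATE * HOURS_PER_MONTH
savings_pct = (ON_DEMAND_RATE - RESERVED_RATE) / ON_DEMAND_RATE * 100

print(f"On-demand: ${monthly_on_demand:,.2f}/month")
print(f"Reserved:  ${monthly_reserved:,.2f}/month")
print(f"Savings:   {savings_pct:.0f}%")
```

For a VM running around the clock, the reserved rate works out to roughly 15% off the on-demand price, before factoring in the guaranteed availability.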
NexGen Cloud is now SOC 2 Type 1 certified, reinforcing our commitment to keeping your data secure and your workloads protected. Our entire platform has been audited and verified by a licensed CPA firm, meeting rigorous security standards with enterprise-grade safeguards. And we’re not stopping there: SOC 2 Type 2 certification is already underway.
Here are the latest fixes and improvements on Hyperstack:
Billing Forms Validation: We improved field validation to ensure accurate and standardised billing information.
Firewall Rules Bug Fix: We resolved an issue that prevented firewall rules from being created during VM deployment.
Platform Stability: We made performance and reliability enhancements across the platform for a smoother experience.
Check out our latest blog on Hyperstack:
Turning your AI idea into a production-grade product takes more than a great model alone. It demands high-performance compute infrastructure backed by powerful GPU resources, often on-demand GPUs for AI. Whether you're an AI research team fine-tuning LLMs or a SaaS startup testing inference workloads, on-demand GPUs for AI give you the flexibility and performance you need, without the upfront hardware costs or long lead times of traditional compute. Check out our latest blog to learn more.
Let’s be real: training large AI models or fine-tuning LLMs on consumer-grade GPUs is painful. You’re waiting hours (sometimes days), your machine’s on fire and, worst of all, you’re still not even close to deployment. You may have tried cloud services, only to realise they’re burning through your budget faster than your model is overfitting. That’s exactly why cloud GPU rental platforms are everyone's go-to choice now: you rent a cloud GPU sized to your workload and pay only for what you use. Check out our latest blog to learn more.
We break down everything you need to know about using the NVIDIA H200 SXM on Hyperstack, including how it’s priced, when to choose on-demand vs reserved VMs and how to reserve your VMs. The NVIDIA H200 SXM is built on NVIDIA’s Hopper architecture and designed for AI, high-performance computing (HPC) and memory-intensive applications. If you're planning large-scale AI projects, this guide helps you optimise performance and budget with ease. Check out our latest blog to learn more.
We explore everything you need to know about the NVIDIA H100 SXM on Hyperstack, from its 8-GPU high-performance setup and NVLink-powered scaling to storage options, pricing and how to reserve capacity. Whether you're training LLMs or running multimodal inference, this guide helps you deploy faster and scale efficiently with predictable performance and cost. If you're looking to train large language models, run scientific workloads or perform high-performance distributed training, the NVIDIA H100 SXM is the ideal choice for you. Check out our latest blog to learn more.
At Hyperstack, we’re here to support builders, creators and innovators who are pushing AI forward, from early prototypes to production-ready deployments.
Got something exciting in the works? We’d love to hear about it. Stand a chance to be featured in our next Weekly Rundown. Share your story here.
For any questions or suggestions, feel free to reach out at support@hyperstack.cloud. Stay tuned for even more updates and exciting tools next month.