<img alt="" src="https://secure.insightful-enterprise-intelligence.com/783141.png" style="display:none;">

NVIDIA H100 SXMs On-Demand at $3.00/hour - Reserve from just $2.10/hour. Reserve here

Deploy 8 to 16,384 NVIDIA H100 SXM GPUs on the AI Supercloud. Learn More

GPU Selector For LLMs

Find the ideal GPU with our easy-to-use LLM GPU Finder tool. Whether you need to fine-tune or run inference, we’ll help you choose the right hardware for your project.

Ready to Find Your GPU?

How to Use the LLM GPU Finder

Step 01: Choose Your Model

Select from our list of popular LLMs or enter any Hugging Face model name.
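
For the curious: sizing starts from the model's parameter count, which can be looked up programmatically. Here is a minimal sketch using the huggingface_hub library (our illustration, not necessarily how the finder works; the parameter_count helper is hypothetical, and not every repository publishes safetensors metadata):

```python
from huggingface_hub import HfApi

def parameter_count(model_id: str) -> int | None:
    """Return a model's total parameter count from Hugging Face Hub
    safetensors metadata, when the repository exposes it."""
    info = HfApi().model_info(model_id)
    return info.safetensors.total if info.safetensors else None

# Example with an ungated model; gated repos need an access token.
print(parameter_count("gpt2"))  # ~124M, if the metadata is available
```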

Step 02: Explore Training Options

View memory requirements for various training approaches (a rough sizing sketch follows this list):

  • Full fine-tuning
  • LoRA fine-tuning
  • And others
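
As a rough illustration of where such numbers come from, here is a back-of-envelope sketch. It is not the finder's actual formula; the bytes-per-parameter figures are common rules of thumb, and activation memory (which depends on batch size and sequence length) is ignored:

```python
def training_memory_gb(num_params: float, method: str = "full") -> float:
    """Rule-of-thumb GPU memory estimate for training, in GB."""
    if method == "full":
        # Mixed-precision Adam: FP16 weights + gradients (2 + 2 bytes)
        # plus FP32 master weights and two optimizer moments (4 + 4 + 4).
        return num_params * 16 / 1e9
    if method == "lora":
        # Frozen FP16 base model (2 bytes/param) plus a small trainable
        # adapter; we assume ~1% of parameters are trainable.
        return (num_params * 2 + num_params * 0.01 * 16) / 1e9
    raise ValueError(f"unknown method: {method}")

# Example: a 7B-parameter model.
print(f"full fine-tuning: ~{training_memory_gb(7e9, 'full'):.0f} GB")  # ~112 GB
print(f"LoRA fine-tuning: ~{training_memory_gb(7e9, 'lora'):.0f} GB")  # ~15 GB
```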

Step 03: Check Inference Requirements

See memory needs for different precision levels (a quick estimate is sketched after this list):

  • Float32
  • Float16
  • Int8
  • And others
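
For intuition, inference weight memory is roughly the parameter count multiplied by the bytes per parameter of the chosen precision, plus headroom for the KV cache and activations. A minimal sketch of that arithmetic (the 20% overhead factor is our assumption, not the finder's formula):

```python
BYTES_PER_PARAM = {"float32": 4, "float16": 2, "int8": 1, "int4": 0.5}

def inference_memory_gb(num_params: float, precision: str) -> float:
    """Approximate GPU memory needed to serve a model, in GB."""
    weights_gb = num_params * BYTES_PER_PARAM[precision] / 1e9
    return weights_gb * 1.2  # ~20% headroom for KV cache and activations

# Example: a 7B-parameter model at each precision.
for p in ("float32", "float16", "int8"):
    print(f"{p}: ~{inference_memory_gb(7e9, p):.0f} GB")
# float32: ~34 GB, float16: ~17 GB, int8: ~8 GB
```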

Step 04: Get GPU Recommendations

Based on your use case, we'll suggest the optimal GPU available on Hyperstack.
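
Conceptually, this step boils down to picking the smallest GPU whose memory fits the estimated requirement. A toy sketch under that assumption (the catalogue below is illustrative, not Hyperstack's actual line-up or selection logic):

```python
# Hypothetical catalogue of (name, VRAM in GB) pairs, for illustration only;
# check Hyperstack for the GPUs actually on offer and their specifications.
GPUS = [("RTX A6000", 48), ("L40", 48), ("A100 80GB", 80), ("H100 80GB", 80)]

def recommend_gpu(required_gb: float):
    """Return the smallest-memory GPU that fits, or None if none do."""
    candidates = [g for g in GPUS if g[1] >= required_gb]
    return min(candidates, key=lambda g: g[1]) if candidates else None

print(recommend_gpu(17))  # ('RTX A6000', 48) under this toy catalogue
```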

Step 05: Start Your Project

Click through to Hyperstack and begin working on your LLM project immediately.

Benefits of LLM GPU Finder

Precision Matters

Higher-precision workloads need more GPU memory, so we recommend more capable GPUs for them.

Training vs. Inference

Our recommendations reflect that training typically needs far more GPU memory than inference: full fine-tuning with the Adam optimizer in mixed precision takes roughly 16 bytes per parameter, versus about 2 bytes per parameter for FP16 inference.

Tailored for You

We provide personalised suggestions based on your LLM and use case.

Frequently Asked Questions

What is the LLM GPU Finder tool?

The LLM GPU Finder tool helps you find the ideal GPU for your specific needs, such as fine-tuning or running inference with LLMs.

How do I choose a model on LLM GPU Finder?

You can either select from our list of popular LLMs or enter any Hugging Face model name on LLM GPU Finder to receive tailored GPU recommendations.

What training options can I explore on LLM GPU Finder?

You can view memory requirements for various training approaches, including full fine-tuning and LoRA fine-tuning, on our LLM GPU Finder.

How does the LLM GPU Finder differentiate between training and inference?

Our LLM GPU Finder accounts for the different memory and GPU requirements of training, which typically needs more robust GPUs, compared with inference tasks.

Can I get recommendations for GPUs based on my project requirements on LLM GPU Finder?

Absolutely. Our GPU Finder tool provides recommendations for the optimal GPU available on Hyperstack based on your selected model and intended use case.