In our latest tutorial, we walk you through how to run the Flux model on ComfyUI using NVIDIA H100 GPUs on Hyperstack. You’ll learn what Flux is, why ComfyUI is ideal for it and how H100 PCIe GPUs make it the best choice for smooth AI model deployment.
What is Flux?
Flux is one of the latest and most powerful text-to-image diffusion models, built to provide higher realism, sharper details and better prompt alignment compared to older models like SDXL. Unlike standard diffusion models, Flux integrates multi-modal conditioning (text, style prompts and additional context) to generate outputs that better align with user intent.
There are different versions of Flux you can choose from depending on your needs:
- Flux Schnell is the lightweight open-source option.
- Flux Dev is more advanced and available for non-commercial use.
- Flux Pro is a high-end, API-driven option for professionals.
Check out How to Prompt Flux 1.1 Pro for Stunning Image Generation in our blog here!
Features of Flux
When you use Flux for image generation, here’s what you can expect:
- Exceptional Image Quality: You get crisp details, natural textures and better adherence to your prompts.
- Handles Text Well: Flux does a better job at rendering clear and readable text inside images.
- Different Modes for Flexibility: Pick Schnell for speed, Dev for balance or full precision for the best visual fidelity.
- In-Context Editing: With variants like Kontext, you can edit and refine your images as you go.
- Efficiency Options: Optimised versions of Flux make it possible to run with less VRAM or faster inference if you’re on powerful hardware.
Why Run Flux on ComfyUI?
If you want full control over your image generation workflow, ComfyUI is one of the best tools out there. Running Flux on ComfyUI gives you:
- Modular control over every step: You can fine-tune how Flux interacts with prompts, samplers, LoRAs and additional conditioning tools like ControlNet or adapters.
- Optimised performance: Thanks to ComfyUI’s smart caching and memory management, Flux can run smoothly across different hardware setups, from high-VRAM GPUs to more modest systems.
- Flexibility across models: Flux can be combined with other supported diffusion models, adapters or even video/audio nodes for hybrid workflows, all within the same interface.
- Reproducibility and sharing: Workflows built with Flux embed directly into your outputs, so you can reload the exact same setup later or share it with others.
- Creative freedom: Instead of a fixed pipeline, you can experiment with Flux in multi-step processes like chaining it with SDXL, applying LoRA fine-tunes or extending to video sequences.
Why Deploy Flux with NVIDIA H100 GPUs?
Flux is a demanding model, and if you want the best performance without bottlenecks, NVIDIA H100 PCIe GPUs are the way to go. Here’s why:
- Massive Compute Power: With up to 80 GB of HBM3 memory and NVLink support, you can run even the heaviest Flux models smoothly.
- High-Speed Networking: Get up to 350 Gbps of high-speed networking. If you’re scaling Flux across multiple nodes, you won’t run into network slowdowns.
- Transformer Engine Acceleration: Mixed-precision operations run faster on H100 GPUs, which means lower latency and quicker image generations.
- Optimised for Diffusion Models: Flux makes heavy use of tensor operations during denoising, and H100s handle that with ease.
How to Set Up and Run ComfyUI with FLUX.1 on NVIDIA H100 GPU
If you want to run the Flux model on ComfyUI with NVIDIA H100 GPUs, you’ll need to follow the steps below:
Step 1: Set Up Your Hugging Face Account
- Create an account on Hugging Face and verify your email.
- Generate a new Access Token here, choosing the "read" role and any name you like.
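A small habit worth adopting before the download step: keep the token in an environment variable rather than pasting it into each command. This is a sketch, not part of the official setup; `hf_your_token_here` is a placeholder you replace with your real token.

```shell
# Export the token once so later commands can reference it and it never
# needs to be typed inline. "hf_your_token_here" is a placeholder.
export HF_TOKEN="hf_your_token_here"

# Download commands can then build the auth header from the variable:
echo "Authorization: Bearer ${HF_TOKEN}"
```

With this in place, the wget commands in Step 4 could use `--header="Authorization: Bearer ${HF_TOKEN}"` instead of a pasted token.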
Step 2: Launch Your NVIDIA H100 GPU VM
- Launch an NVIDIA H100 GPU VM with the latest Ubuntu image (CUDA + Docker preinstalled).
- Make sure that port 3000 is open.
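If you want to verify the port from your local machine before going further, a quick reachability check can save debugging time later. This is a hypothetical helper (not part of the tutorial) that relies on bash's built-in `/dev/tcp` pseudo-device, so it needs no extra tools:

```shell
# Hypothetical helper: report whether a TCP port on a host is reachable.
# Uses bash's /dev/tcp redirection; times out after 3 seconds.
port_open() {
  local host="$1" port="$2"
  if timeout 3 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "port ${port} open on ${host}"
  else
    echo "port ${port} closed or filtered on ${host}"
  fi
}

# Example (IP is a placeholder for your VM's public address):
# port_open your_public_ip 3000
```

A "closed or filtered" result usually means the firewall rule or security group for port 3000 still needs adjusting.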
Step 3: Run ComfyUI Container
- Connect to your VM via SSH and run the following command:
mkdir -p storage && docker run -it --rm --name comfyui-cu129 --gpus all -p 3000:8188 -v "$(pwd)"/storage:/root -e CLI_ARGS="--fast" yanwk/comfyui-boot:cu129-slim
- Wait until you see the message below:
[ComfyUI-Manager] All startup tasks have been completed.
Once visible, you can proceed with the next step.
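If you prefer to script the wait rather than watch the logs, a polling loop works too. This is a sketch under the assumption that the container maps the UI to port 3000 as in the command above; `wait_for_comfyui` is a hypothetical helper, not part of ComfyUI.

```shell
# Hypothetical helper: poll a URL until it responds, then report success.
# Arguments: url, number of attempts (default 24), delay in seconds (default 5).
wait_for_comfyui() {
  local url="$1" tries="${2:-24}" delay="${3:-5}"
  for _ in $(seq 1 "$tries"); do
    if curl -sf -o /dev/null "$url"; then
      echo "ComfyUI is up at $url"
      return 0
    fi
    sleep "$delay"
  done
  echo "timed out waiting for $url"
  return 1
}

# Example, run on the VM in a second SSH session:
# wait_for_comfyui http://localhost:3000
```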
Step 4: Download Model Files
Open a second SSH session and run the following command to download models:
cd storage/ComfyUI/models/
sudo wget -O ./clip/clip_l.safetensors https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors
sudo wget -O ./clip/t5xxl_fp16.safetensors https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors
sudo wget --header="Authorization: Bearer <PASTE_YOUR_HF_ACCESS_TOKEN_HERE>" -O ./vae/ae.safetensors https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors
sudo wget --header="Authorization: Bearer <PASTE_YOUR_HF_ACCESS_TOKEN_HERE>" -O ./unet/flux1-dev.safetensors https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors
NOTE: Make sure to paste your Hugging Face access token in the last two lines above, where it says <PASTE_YOUR_HF_ACCESS_TOKEN_HERE>.
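Because an expired or mistyped token makes Hugging Face return a short error page instead of the model weights, it is worth confirming that each file actually landed and is non-empty (flux1-dev.safetensors alone is roughly 23 GB). The helper below is a hypothetical sanity check, not part of the tutorial:

```shell
# Hypothetical sanity check: verify each expected model file exists and
# is non-empty under the given models directory. A tiny or missing file
# usually means a failed or unauthorised download.
check_models() {
  local dir="$1" status=0
  for f in clip/clip_l.safetensors clip/t5xxl_fp16.safetensors \
           vae/ae.safetensors unet/flux1-dev.safetensors; do
    if [ -s "${dir}/${f}" ]; then
      echo "OK      ${f}"
    else
      echo "MISSING ${f}"
      status=1
    fi
  done
  return "$status"
}

# Usage, from the VM: check_models ~/storage/ComfyUI/models
```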
Step 5: Launch ComfyUI
- Open http://your_public_ip:3000 in your browser to start ComfyUI.
Step 6: Load Workflow
- Download the JSON file here.
- Drag and drop it into the ComfyUI workspace (centre panel).
- Enter your prompt in the green text box and hit RUN.
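If the drag-and-drop fails silently, the usual culprit is a truncated download. A quick pre-flight check that the workflow file parses as JSON can rule that out; `validate_workflow` and the filename are hypothetical, assuming Python 3 is available on your machine:

```shell
# Hypothetical pre-flight check: confirm a workflow file is valid JSON
# before dropping it into the ComfyUI workspace.
validate_workflow() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "valid JSON: $1"
  else
    echo "invalid JSON: $1"
  fi
}

# Example (filename is a placeholder for the downloaded workflow):
# validate_workflow flux-workflow.json
```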
Here’s the image generated from our prompt:
FAQs
What is Flux?
Flux is a popular text-to-image diffusion model that generates highly detailed and realistic visuals. It comes in different variants (Schnell, Dev, Pro) so you can balance speed, quality and resource requirements.
Why should I use Flux?
Flux offers superior prompt alignment, better text rendering and realism compared to older models like SDXL. If you want high-quality outputs for creative projects, Flux is one of the best options available.
What is ComfyUI?
ComfyUI is a node-based, visual interface for diffusion models. It lets you drag and drop components like samplers, VAEs and prompts to design your own generation pipeline.
Why run Flux on ComfyUI?
ComfyUI gives you more flexibility and control. You can customise workflows, add extensions like ControlNet or LoRA and experiment with different model settings, all of which make it perfect for running Flux.
What GPU do I need to run Flux on ComfyUI?
Flux is a large model, so you’ll need a high-performance GPU. The NVIDIA H100 is a strong choice because its 80 GB of HBM3 memory leaves ample headroom for the full-precision Flux weights and text encoders.