
ComfyUI Docker

ComfyUI Docker is a Docker-based setup for running ComfyUI with FLUX.1 models and HunyuanVideo, plus a few additional features.

Features

  • Dockerized ComfyUI environment
  • Automatic installation of ComfyUI and ComfyUI-Manager
  • Low VRAM Mode: Download and use FP8 models for reduced VRAM usage
  • Pre-configured with FLUX models and VAEs
  • Ability to automatically install HunyuanVideo models and VAEs
  • Easy model management and updates
  • GPU support with CUDA 12.1 or 12.4

Prerequisites

  • Docker and Docker Compose
  • NVIDIA GPU with CUDA support (for GPU acceleration)
  • (Optional) Huggingface account and token (for downloading FLUX.1[dev] official model)

Quick Start

  1. (Optional) Create a .env file in the project root and add your Huggingface token:

    HF_TOKEN=your_huggingface_token
    LOW_VRAM=false  # Set to true to enable low VRAM mode
    COMFYUI_DOWNLOAD_VIDEO_MODELS=true # Download HunyuanVideo_repackaged related models (they are large)
  2. Download the compose.yml file:

wget https://raw.githubusercontent.com/ravensorb/comfyui-docker/main/compose.yml

Alternatively, you can create a compose.yml file and copy/paste the following contents:

services:
  comfyui:
    container_name: comfyui
    image: ravensorb/comfyui-docker:latest
    restart: unless-stopped
    ports:
      - "8188:8188"
    volumes:
      - "./data/app:/app"
      - "./data/data:/data"
      - "./data/config:/config"
    environment:
      - CLI_ARGS=
      - HF_TOKEN=${HF_TOKEN}
      - LOW_VRAM=${LOW_VRAM:-false}
      - COMFYUI_DOWNLOAD_VIDEO_MODELS=${COMFYUI_DOWNLOAD_VIDEO_MODELS:-true}
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities:
                - gpu
  3. Run the container using Docker Compose:

    docker compose up -d

    Note: The first time you run the container, it downloads all the included models before starting up. Depending on your internet connection, this can take 20 minutes or more.

  4. Access ComfyUI in your browser at http://localhost:8188
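If the container exits immediately or falls back to CPU, it is worth confirming that Docker can reach the GPU before debugging further. A minimal check, assuming the NVIDIA Container Toolkit is installed on the host (the CUDA image tag below is an illustrative assumption; any CUDA base image that ships nvidia-smi works):

```shell
# Run nvidia-smi inside a throwaway CUDA container; it should print your GPU table.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# Follow the model-download progress during the first start:
docker logs -f comfyui
```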

Low VRAM Mode

Setting the LOW_VRAM environment variable to true makes the container download and use FP8 models, which are optimized for lower VRAM usage. The FP8 versions have the CLIP and VAE merged in, so only the checkpoint files are needed.

Enable Low VRAM Mode:

LOW_VRAM=true

Model Files

The tables below list the model files that are automatically downloaded by this container. Some of them require an HF_TOKEN for download.

When LOW_VRAM=false (default)

Type   Model File Name                Size      Notes
UNet   flux1-schnell.safetensors      23 GiB
UNet   flux1-dev.safetensors          23 GiB    requires HF_TOKEN for download
CLIP   clip_l.safetensors             235 MiB
CLIP   t5xxl_fp16.safetensors         9.2 GiB
CLIP   t5xxl_fp8_e4m3fn.safetensors   4.6 GiB
LoRA   flux_realism_lora.safetensors  22 MiB
VAE    ae.safetensors                 320 MiB

When LOW_VRAM=true

Type         Model File Name                 Size     Notes
Checkpoint   flux1-dev-fp8.safetensors       17 GiB
Checkpoint   flux1-schnell-fp8.safetensors   17 GiB
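Summing the file sizes in the tables above gives a rough idea of the disk space each mode needs. The totals below cover only the listed model files; the Docker image, ComfyUI itself, and any generated output are extra:

```shell
# Back-of-envelope disk usage from the model tables (MiB values converted to GiB).
awk 'BEGIN {
  default_mode = 23 + 23 + 235/1024 + 9.2 + 4.6 + 22/1024 + 320/1024  # LOW_VRAM=false
  low_vram     = 17 + 17                                              # LOW_VRAM=true
  printf "LOW_VRAM=false: ~%.0f GiB\n", default_mode
  printf "LOW_VRAM=true:  ~%.0f GiB\n", low_vram
}'
```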

Workflows

Download the images below and drag them into ComfyUI to load the corresponding workflows.

Official versions

FLUX.1[schnell] FLUX.1[dev]

FP8 versions (LOW_VRAM)

FLUX.1[schnell] FP8 FLUX.1[dev] FP8

Updating

ComfyUI and ComfyUI-Manager are updated automatically when the container starts. To update the base image and other dependencies, pull the latest version of the Docker image:

docker compose pull
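Note that after a pull, a running container keeps using the old image until it is recreated, so the full update sequence is roughly:

```shell
docker compose pull    # fetch the latest ravensorb/comfyui-docker image
docker compose up -d   # recreate the container from the new image
```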
