Support has already been added via #490 🙂 (at least it works on my machine):

```
sd.exe --diffusion-model models\flux-mini-q8_0.gguf --clip_l models\clip\clip_l-q8_0.gguf --t5xxl models\clip\t5xxl_q4_k.gguf --vae models\vae\flux\ae.f16.gguf -p "A vase with roses" --cfg-scale 1 --sampling-method euler --vae-tiling --steps 20 --guidance 3.5 --color --seed 0 -W 1024 -H 1024
```
Model Files:
- Official Safetensors format file: TencentARC/flux-mini
- Official GGUF format files: gpustack/FLUX.1-mini-GGUF
- Unofficial GGUF format files: HyperX-Sentience/Flux-Mini-GGUF

Official GitHub page:
- TencentARC/FluxKits

Other useful links:
- gpustack/llama-box
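To try the command above, the GGUF files first need to be fetched from the Hugging Face repositories listed here. A minimal sketch using `huggingface-cli` follows; the exact filenames inside each repository are assumptions (only `flux-mini-q8_0.gguf` appears in the command above), so check the repo file listings before running:

```shell
# Download the Flux-mini diffusion model GGUF into a local models/ directory.
# Repository name is from the list above; the filename is an assumption.
huggingface-cli download gpustack/FLUX.1-mini-GGUF flux-mini-q8_0.gguf --local-dir models

# The CLIP-L, T5-XXL, and VAE GGUF files referenced by the sd.exe command
# come from separate encoder/VAE repositories; paths below are placeholders.
mkdir -p models/clip models/vae/flux
```

After downloading, pass the local paths to `--diffusion-model`, `--clip_l`, `--t5xxl`, and `--vae` as in the command above.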