[SDXL Turbo] Add some docs #5982
Conversation
The documentation is not available anymore as the PR was closed or merged.
```md
# SDXL Turbo

Stable Diffusion XL (SDXL) Turbo was proposed in [Adversarial Diffusion Distillation](https://stability.ai/research/adversarial-diffusion-distillation) by Axel Sauer Dominik Lorenz Andreas Blattmann Robin Rombach.
```
Suggested change:
```diff
- Stable Diffusion XL (SDXL) Turbo was proposed in [Adversarial Diffusion Distillation](https://stability.ai/research/adversarial-diffusion-distillation) by Axel Sauer Dominik Lorenz Andreas Blattmann Robin Rombach.
+ Stable Diffusion XL (SDXL) Turbo was proposed in [Adversarial Diffusion Distillation](https://stability.ai/research/adversarial-diffusion-distillation) by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach.
```
pcuenca left a comment:
Awesome, thanks a lot!
```py
# uncomment to install the necessary libraries in Colab
#!pip install -q diffusers transformers accelerate omegaconf
```
Suggested change:
```diff
- #!pip install -q diffusers transformers accelerate omegaconf
+ #!pip install -q diffusers transformers accelerate
```
Is omegaconf required?
Yeah, I think it's needed in order to load the single file format - @DN6 we should probably try to not have it be required.
```py
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors", torch_dtype=torch.float16, variant="fp16")
```
Do we have to use the variant when we specify the full filename?
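For context, a minimal sketch of the Hub `from_pretrained` path, where `variant="fp16"` is what selects among the repo's weight files; `AutoPipelineForText2Image` is standard diffusers API and not part of this PR's diff:

```py
import torch
from diffusers import AutoPipelineForText2Image

# load the fp16 weights from the stabilityai/sdxl-turbo Hub repo
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
)
```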
```html
<div class="flex justify-center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-turbo-text2img.png" alt="generated image of an astronaut in a jungle"/>
</div>
```
still not there, is it?
Suggested change:
```diff
- <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-turbo-text2img.png" alt="generated image of an astronaut in a jungle"/>
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-turbo-text2img.png" alt="generated image with SDXL Turbo"/>
```
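For reference, a sketch of the one-step generation that would produce an image like the one above; the prompt is illustrative (taken from the alt text), and the model is typically run with guidance disabled:

```py
pipeline = pipeline.to("cuda")

# SDXL Turbo is distilled for few-step sampling, so a single step suffices
# and classifier-free guidance is turned off with guidance_scale=0.0
image = pipeline(
    prompt="an astronaut in a jungle",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
```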
```md
## Speed-up SDXL Turbo even more

TODO
```
Suggested change (replacing the TODO):

- Compile the UNet if you are using PyTorch version 2 or better. The first inference run will be very slow, but subsequent ones will be much faster.

```py
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

- When using the default VAE, keep it in `float32` to avoid costly `dtype` conversions before and after each generation. You only need to do this once before your first generation:

```py
pipe.upcast_vae()
```

As an alternative, you can also use a 16-bit VAE created by community member @madebyollin that does not need to be upcast to `float32`.
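A sketch of swapping in that 16-bit VAE, assuming it is published on the Hub as `madebyollin/sdxl-vae-fp16-fix` (the repo id is an assumption; this PR only mentions the author):

```py
import torch
from diffusers import AutoencoderKL

# assumption: the community fp16-safe SDXL VAE lives at this repo id
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe.vae = vae  # no pipe.upcast_vae() needed with this VAE
```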
* add diffusers example
* add diffusers example
* Comment about making it faster
* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <[email protected]>
What does this PR do?
Adds some docs for SDXL Turbo