
Commit 182eb95

[Community Pipelines] K-Diffusion Pipeline (#1360)
* up
* add readme
* up
* uP
1 parent ad93593 commit 182eb95

File tree: 2 files changed (+541, −0 lines)

examples/community/README.md

Lines changed: 62 additions & 0 deletions
@@ -22,6 +22,7 @@ If a community doesn't work as expected, please open an issue and ping the autho
| Image to Image Inpainting Stable Diffusion | Stable Diffusion Pipeline that enables the overlaying of two images and subsequent inpainting | [Image to Image Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Alex McKinney](https://github.com/vvvm23) |
| Text Based Inpainting Stable Diffusion | Stable Diffusion Inpainting Pipeline that enables passing a text prompt to generate the mask for inpainting | [Text Based Inpainting Stable Diffusion](#text-based-inpainting-stable-diffusion) | - | [Dhruv Karan](https://github.com/unography) |
| Bit Diffusion | Diffusion on discrete data | [Bit Diffusion](#bit-diffusion) | - | [Stuti R.](https://github.com/kingstut) |
| K-Diffusion Stable Diffusion | Run Stable Diffusion with any of [K-Diffusion's samplers](https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py) | [Stable Diffusion with K Diffusion](#stable-diffusion-with-k-diffusion) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |

@@ -663,4 +664,65 @@ Based https://arxiv.org/abs/2208.04202, this is used for diffusion on discrete d
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="bit_diffusion")
image = pipe().images[0]
```

### Stable Diffusion with K Diffusion

Make sure you have @crowsonkb's [k-diffusion](https://github.com/crowsonkb/k-diffusion) installed:

```
pip install k-diffusion
```

You can use the community pipeline as follows:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="sd_text2img_k_diffusion")
pipe = pipe.to("cuda")

prompt = "an astronaut riding a horse on mars"
seed = 33

# pick any sampler exposed by k-diffusion, here Heun's method
pipe.set_sampler("sample_heun")
generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]

image.save("./astronaut_heun_k_diffusion.png")
```
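
The string passed to `set_sampler` is the name of one of [K-Diffusion's samplers](https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py). If you are unsure which names are available, a minimal sketch along these lines (assuming `k-diffusion` is installed) prints them:

```python
import inspect

from k_diffusion import sampling

# k-diffusion exposes its samplers as module-level functions named `sample_*`,
# e.g. sample_euler, sample_heun, sample_lms, ...
sampler_names = [
    name
    for name, fn in inspect.getmembers(sampling, inspect.isfunction)
    if name.startswith("sample_")
]
print(sampler_names)
```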

To check that K Diffusion and `diffusers` yield the same results, run the two snippets below with the same prompt and seed:

**Diffusers**:
```python
import torch
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

seed = 33
prompt = "an astronaut riding a horse on mars"

pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
```

![diffusers_euler](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/k_diffusion/astronaut_euler.png)

**K Diffusion**:
```python
import torch
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

seed = 33
prompt = "an astronaut riding a horse on mars"

pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="sd_text2img_k_diffusion")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# use k-diffusion's Euler sampler for a like-for-like comparison
pipe.set_sampler("sample_euler")
generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
```

![k_diffusion_euler](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/k_diffusion/astronaut_euler_k_diffusion.png)
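
To go beyond eyeballing the two renders, a quick numerical comparison can be done along these lines (a sketch; the filenames are hypothetical and assume you saved each image above first):

```python
import numpy as np
from PIL import Image

# hypothetical filenames -- save the two generated images under these names first
img_diffusers = np.asarray(Image.open("./astronaut_euler_diffusers.png"), dtype=np.float32)
img_k_diffusion = np.asarray(Image.open("./astronaut_euler_k_diffusion.png"), dtype=np.float32)

# mean absolute per-pixel difference; values close to 0 mean the two runs agree
print("mean abs diff:", np.abs(img_diffusers - img_k_diffusion).mean())
```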
