
Commit ed6c61c

Fix small community pipeline import bug and finish README (#869)
* up
* Finish
1 parent 146419f commit ed6c61c

File tree

2 files changed (+62 / -7 lines)

- examples/community/README.md
- src/diffusers/dynamic_modules_utils.py


examples/community/README.md

Lines changed: 57 additions & 6 deletions
@@ -3,12 +3,15 @@
 > **For more information about community pipelines, please have a look at [this issue](https://github.com/huggingface/diffusers/issues/841).**
 
 **Community** examples consist of both inference and training examples that have been added by the community.
+Please have a look at the following table to get an overview of all community examples. Click on the **Code Example** to get a copy-and-paste ready code example that you can try out.
+If a community pipeline doesn't work as expected, please open an issue and ping the author on it.
 
-| Example | Description | Author | Colab |
-|:----------|:----------------------|:-----------------|----------:|
-| CLIP Guided Stable Diffusion | Doing CLIP guidance for text-to-image generation with Stable Diffusion | [Suraj Patil](https://github.com/patil-suraj/) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb) |
-| One Step U-Net (Dummy) | Example showcasing how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841) | [Patrick von Platen](https://github.com/patrickvonplaten/) | - |
-| Stable Diffusion Interpolation | Interpolate the latent space of Stable Diffusion between different prompts/seeds | [Nate Raw](https://github.com/nateraw/) | - |
+| Example | Description | Code Example | Colab | Author |
+|:----------|:----------------------|:-----------------|:-------------|----------:|
+| CLIP Guided Stable Diffusion | Doing CLIP guidance for text-to-image generation with Stable Diffusion | [CLIP Guided Stable Diffusion](#clip-guided-stable-diffusion) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb) | [Suraj Patil](https://github.com/patil-suraj/) |
+| One Step U-Net (Dummy) | Example showcasing how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841) | [One Step U-Net](#one-step-unet) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
+| Stable Diffusion Interpolation | Interpolate the latent space of Stable Diffusion between different prompts/seeds | [Stable Diffusion Interpolation](#stable-diffusion-interpolation) | - | [Nate Raw](https://github.com/nateraw/) |
+| Stable Diffusion Mega | **One** Stable Diffusion Pipeline with all functionalities of [Text2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py), [Image2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) and [Inpainting](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | [Stable Diffusion Mega](#stable-diffusion-mega) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
 
 ## Example usages
 
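Every pipeline in the table is loaded the same way: its file name under `examples/community` is passed as `custom_pipeline` to `DiffusionPipeline.from_pretrained`. A minimal sketch of that pattern, using the Stable Diffusion Mega pipeline added below (the base checkpoint, dtype, and device here are assumptions, not part of this diff):

```python
import torch
from diffusers import DiffusionPipeline

# `custom_pipeline` names a file under examples/community; any compatible
# base checkpoint can be used (the one below is just an example).
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="stable_diffusion_mega",
    torch_dtype=torch.float16,
)
pipe.to("cuda")
```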

@@ -66,7 +69,7 @@ Generated images tend to be of higher quality than natively using stable diffusion
 
 ![clip_guidance](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/clip_guidance/merged_clip_guidance.jpg).
 
-### One Step U-Net (Dummy)
+### One Step Unet
 
 The dummy "one-step-unet" can be run as follows:
 
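A minimal sketch of that call (the full snippet sits outside this hunk; the `google/ddpm-cifar10-32` checkpoint used here is an assumption, not taken from the diff):

```python
from diffusers import DiffusionPipeline

# Any checkpoint that provides a UNet and a scheduler works for this dummy
# pipeline; the CIFAR-10 DDPM checkpoint below is only an illustrative choice.
pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
pipe.to("cuda")

# Runs the UNet for a single denoising step.
output = pipe()
```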

@@ -112,3 +115,51 @@ frame_filepaths = pipe.walk(
 The `walk(...)` function returns a list of images saved under the folder defined in `output_dir`. You can use these images to create videos of stable diffusion.
 
 > **Please have a look at https://github.com/nateraw/stable-diffusion-videos for more in-detail information on how to create videos using stable diffusion as well as more feature-complete functionality.**
+
+### Stable Diffusion Mega
+
+The Stable Diffusion Mega Pipeline lets you use the main use cases of the Stable Diffusion pipeline in a single class.
+
+```python
+#!/usr/bin/env python3
+from diffusers import DiffusionPipeline
+import PIL
+import requests
+from io import BytesIO
+import torch
+
+
+def download_image(url):
+    response = requests.get(url)
+    return PIL.Image.open(BytesIO(response.content)).convert("RGB")
+
+
+pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="stable_diffusion_mega", torch_dtype=torch.float16, revision="fp16")
+pipe.to("cuda")
+pipe.enable_attention_slicing()
+
+
+### Text-to-Image
+
+images = pipe.text2img("An astronaut riding a horse").images
+
+### Image-to-Image
+
+init_image = download_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
+
+prompt = "A fantasy landscape, trending on artstation"
+
+images = pipe.img2img(prompt=prompt, init_image=init_image, strength=0.75, guidance_scale=7.5).images
+
+### Inpainting
+
+img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
+mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
+init_image = download_image(img_url).resize((512, 512))
+mask_image = download_image(mask_url).resize((512, 512))
+
+prompt = "a cat sitting on a bench"
+images = pipe.inpaint(prompt=prompt, init_image=init_image, mask_image=mask_image, strength=0.75).images
+```
+
+As shown above, this one pipeline can run "text-to-image", "image-to-image", and "inpainting" all in a single class.
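
The frames that `walk(...)` saves to `output_dir` (see the Stable Diffusion Interpolation section above) can be stitched into a clip with any frame-to-video tool. A minimal sketch using `imageio` (not part of this commit; the directory, glob pattern, filename, and frame rate are arbitrary, and `.mp4` output needs the `imageio-ffmpeg` backend):

```python
from pathlib import Path

import imageio

# Point this at the `output_dir` passed to pipe.walk(...) and adjust the glob
# to match the saved frame files.
frame_filepaths = sorted(Path("dreams").glob("*.png"))

# Write the frames to a video file in order.
with imageio.get_writer("interpolation.mp4", fps=24) as writer:
    for path in frame_filepaths:
        writer.append_data(imageio.imread(path))
```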

src/diffusers/dynamic_modules_utils.py

Lines changed: 5 additions & 1 deletion
@@ -168,7 +168,11 @@ def find_pipeline_class(loaded_module):
 
     pipeline_class = None
     for cls_name, cls in cls_members.items():
-        if cls_name != DiffusionPipeline.__name__ and issubclass(cls, DiffusionPipeline):
+        if (
+            cls_name != DiffusionPipeline.__name__
+            and issubclass(cls, DiffusionPipeline)
+            and cls.__module__.split(".")[0] != "diffusers"
+        ):
             if pipeline_class is not None:
                 raise ValueError(
                     f"Multiple classes that inherit from {DiffusionPipeline.__name__} have been found:"
