diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml
index d74bd3785343..df41854a9fe7 100644
--- a/docs/source/en/_toctree.yml
+++ b/docs/source/en/_toctree.yml
@@ -25,7 +25,7 @@
- local: using-diffusers/schedulers
title: Load and compare different schedulers
- local: using-diffusers/custom_pipeline_overview
- title: Load and add custom pipelines
+ title: Load community pipelines
- local: using-diffusers/kerascv
title: Load KerasCV Stable Diffusion checkpoints
title: Loading & Hub
@@ -47,9 +47,9 @@
- local: using-diffusers/reproducibility
title: Create reproducible pipelines
- local: using-diffusers/custom_pipeline_examples
- title: Community Pipelines
+ title: Community pipelines
- local: using-diffusers/contribute_pipeline
- title: How to contribute a Pipeline
+ title: How to contribute a community pipeline
- local: using-diffusers/using_safetensors
title: Using safetensors
- local: using-diffusers/stable_diffusion_jax_how_to
diff --git a/docs/source/en/using-diffusers/contribute_pipeline.mdx b/docs/source/en/using-diffusers/contribute_pipeline.mdx
index 8ee6d6ae4fb1..2c2b5abedcec 100644
--- a/docs/source/en/using-diffusers/contribute_pipeline.mdx
+++ b/docs/source/en/using-diffusers/contribute_pipeline.mdx
@@ -10,30 +10,21 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
specific language governing permissions and limitations under the License.
-->
-# How to build a community pipeline
+# How to contribute a community pipeline
-*Note*: this page was built from the GitHub Issue on Community Pipelines [#841](https://github.com/huggingface/diffusers/issues/841).
+
-Let's make an example!
-Say you want to define a pipeline that just does a single forward pass to a U-Net and then calls a scheduler only once (Note, this doesn't make any sense from a scientific point of view, but only represents an example of how things work under the hood).
+💡 Take a look at GitHub Issue [#841](https://github.com/huggingface/diffusers/issues/841) for more context about why we're adding community pipelines to help everyone easily share their work without being slowed down.
-Cool! So you open your favorite IDE and start creating your pipeline 💻.
-First, what model weights and configurations do we need?
-We have a U-Net and a scheduler, so our pipeline should take a U-Net and a scheduler as an argument.
-Also, as stated above, you'd like to be able to load weights and the scheduler config for Hub and share your code with others, so we'll inherit from `DiffusionPipeline`:
+
-```python
-from diffusers import DiffusionPipeline
-import torch
+Community pipelines allow you to add any additional features you'd like on top of the [`DiffusionPipeline`]. The main benefit of building on top of the `DiffusionPipeline` is that anyone can load and use your pipeline by only adding one more argument, making it super easy for the community to access.
+
+This guide will show you how to create a community pipeline and explain how they work. To keep things simple, you'll create a "one-step" pipeline where the `UNet` does a single forward pass and calls the scheduler once.
-class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
- def __init__(self, unet, scheduler):
- super().__init__()
-```
+## Initialize the pipeline
-Now, we must save the `unet` and `scheduler` in a config file so that you can save your pipeline with `save_pretrained`.
-Therefore, make sure you add every component that is save-able to the `register_modules` function:
+You should start by creating a `one_step_unet.py` file for your community pipeline. In this file, create a pipeline class that inherits from the [`DiffusionPipeline`] to be able to load model weights and the scheduler configuration from the Hub. The one-step pipeline needs a `UNet` and a scheduler, so you'll need to add these as arguments to the `__init__` function:
```python
from diffusers import DiffusionPipeline
@@ -43,39 +34,54 @@ import torch
class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
def __init__(self, unet, scheduler):
super().__init__()
+```
+
+To ensure your pipeline and its components (`unet` and `scheduler`) can be saved with [`~DiffusionPipeline.save_pretrained`], register them with the `register_modules` function:
+
+```diff
+ from diffusers import DiffusionPipeline
+ import torch
+
+ class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
+ def __init__(self, unet, scheduler):
+ super().__init__()
- self.register_modules(unet=unet, scheduler=scheduler)
++ self.register_modules(unet=unet, scheduler=scheduler)
```
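+
+If you want to double-check the registration, here is a minimal sketch that saves the pipeline locally and loads it back (run it in the same session where `UnetSchedulerOneForwardPipeline` is defined; the directory name `one-step-unet-test` is just an illustrative choice):
+
+```python
+from diffusers import DDPMScheduler, UNet2DModel
+
+# any UNet and scheduler instances will do for this quick round-trip test
+unet = UNet2DModel()
+scheduler = DDPMScheduler()
+
+pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler)
+
+# because both components were registered, they are written to the saved config
+# and restored automatically when the pipeline is loaded again
+pipeline.save_pretrained("one-step-unet-test")
+restored = UnetSchedulerOneForwardPipeline.from_pretrained("one-step-unet-test")
+```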
-Cool, the init is done! 🔥 Now, let's go into the forward pass, which we recommend defining as `__call__` . Here you're given all the creative freedom there is. For our amazing "one-step" pipeline, we simply create a random image and call the unet once and the scheduler once:
+Cool, the `__init__` step is done and you can move to the forward pass now! 🔥
-```python
-from diffusers import DiffusionPipeline
-import torch
+## Define the forward pass
+In the forward pass, which we recommend defining as `__call__`, you have complete creative freedom to add whatever feature you'd like. For our amazing one-step pipeline, create a random image and only call the `unet` and `scheduler` once by setting `timestep=1`:
-class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
- def __init__(self, unet, scheduler):
- super().__init__()
+```diff
+ from diffusers import DiffusionPipeline
+ import torch
+
+
+ class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
+ def __init__(self, unet, scheduler):
+ super().__init__()
- self.register_modules(unet=unet, scheduler=scheduler)
+ self.register_modules(unet=unet, scheduler=scheduler)
- def __call__(self):
- image = torch.randn(
- (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size),
- )
- timestep = 1
++ def __call__(self):
++ image = torch.randn(
++ (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size),
++ )
++ timestep = 1
- model_output = self.unet(image, timestep).sample
- scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample
++ model_output = self.unet(image, timestep).sample
++ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample
- return scheduler_output
++ return scheduler_output
```
-Cool, that's it! 🎉 You can now run this pipeline by passing a `unet` and a `scheduler` to the init:
+That's it! 🎉 You can now run this pipeline by passing a `unet` and `scheduler` to it:
```python
-from diffusers import DDPMScheduler, Unet2DModel
+from diffusers import DDPMScheduler, UNet2DModel
scheduler = DDPMScheduler()
unet = UNet2DModel()
@@ -85,7 +91,7 @@ pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler)
output = pipeline()
```
-But what's even better is that you can load pre-existing weights into the pipeline if they match exactly your pipeline structure. This is e.g. the case for [https://huggingface.co/google/ddpm-cifar10-32](https://huggingface.co/google/ddpm-cifar10-32) so that we can do the following:
+But what's even better is you can load pre-existing weights into the pipeline if the pipeline structure is identical. For example, you can load the [`google/ddpm-cifar10-32`](https://huggingface.co/google/ddpm-cifar10-32) weights into the one-step pipeline:
```python
pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32")
@@ -93,63 +99,72 @@ pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-
output = pipeline()
```
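+
+The `output` you get back is the raw `prev_sample` tensor returned by the scheduler. If you'd like to look at it as an image, a rough post-processing sketch could look like the following (the [-1, 1] to [0, 1] rescaling mirrors the usual DDPM convention, and `one_step.png` is just an illustrative filename):
+
+```python
+from PIL import Image
+
+# detach from the autograd graph, rescale to [0, 1], and convert to a uint8 HWC array
+image = (output.detach() / 2 + 0.5).clamp(0, 1)
+image = (image.cpu().permute(0, 2, 3, 1).numpy()[0] * 255).round().astype("uint8")
+Image.fromarray(image).save("one_step.png")
+```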
-We want to share this amazing pipeline with the community, so we would open a PR request to add the following code under `one_step_unet.py` to [https://github.com/huggingface/diffusers/tree/main/examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) .
-
-```python
-from diffusers import DiffusionPipeline
-import torch
-
+## Share your pipeline
-class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
- def __init__(self, unet, scheduler):
- super().__init__()
+Open a Pull Request on the 🧨 Diffusers [repository](https://github.com/huggingface/diffusers) to add your awesome pipeline in `one_step_unet.py` to the [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) subfolder.
- self.register_modules(unet=unet, scheduler=scheduler)
+Once it is merged, anyone with `diffusers >= 0.4.0` installed can use this pipeline magically 🪄 by specifying it in the `custom_pipeline` argument:
- def __call__(self):
- image = torch.randn(
- (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size),
- )
- timestep = 1
-
- model_output = self.unet(image, timestep).sample
- scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample
+```python
+from diffusers import DiffusionPipeline
- return scheduler_output
+pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
+pipe()
```
-Our amazing pipeline got merged here: [#840](https://github.com/huggingface/diffusers/pull/840).
-Now everybody that has `diffusers >= 0.4.0` installed can use our pipeline magically 🪄 as follows:
+Another way to share your community pipeline is to upload the `one_step_unet.py` file directly to your preferred [model repository](https://huggingface.co/docs/hub/models-uploading) on the Hub. Instead of specifying the `one_step_unet.py` file, pass the model repository id to the `custom_pipeline` argument:
```python
from diffusers import DiffusionPipeline
-pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
-pipe()
+pipeline = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="stevhliu/one_step_unet")
```
-Another way to upload your custom_pipeline, besides sending a PR, is uploading the code that contains it to the Hugging Face Hub, [as exemplified here](https://huggingface.co/docs/diffusers/using-diffusers/custom_pipeline_overview#loading-custom-pipelines-from-the-hub).
+Take a look at the following table to compare the two sharing workflows to help you decide the best option for you:
+
+| | GitHub community pipeline | HF Hub community pipeline |
+|----------------|------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|
+| usage | same | same |
+| review process | open a Pull Request on GitHub and undergo a review process from the Diffusers team before merging; may be slower | upload directly to a Hub repository without any review; this is the fastest workflow |
+| visibility | included in the official Diffusers repository and documentation | included on your HF Hub profile and relies on your own usage/promotion to gain visibility |
-**Try it out now - it works!**
+
-In general, you will want to create much more sophisticated pipelines, so we recommend looking at existing pipelines here: [https://github.com/huggingface/diffusers/tree/main/examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community).
+💡 You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from `DiffusionPipeline` because this is automatically detected.
-IMPORTANT:
-You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from `DiffusionPipeline` as this will be automatically detected.
+
## How do community pipelines work?
-A community pipeline is a class that has to inherit from ['DiffusionPipeline']:
-and that has been added to `examples/community` [files](https://github.com/huggingface/diffusers/tree/main/examples/community).
-The community can load the pipeline code via the custom_pipeline argument from DiffusionPipeline. See docs [here](https://huggingface.co/docs/diffusers/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.custom_pipeline):
-This means:
-The model weights and configs of the pipeline should be loaded from the `pretrained_model_name_or_path` [argument](https://huggingface.co/docs/diffusers/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path):
-whereas the code that powers the community pipeline is defined in a file added in [`examples/community`](https://github.com/huggingface/diffusers/tree/main/examples/community).
+A community pipeline is a class that inherits from [`DiffusionPipeline`] which means:
+
+- It can be loaded with the [`custom_pipeline`] argument.
+- The model weights and scheduler configuration are loaded from [`pretrained_model_name_or_path`].
+- The code that implements a feature in the community pipeline is defined in a `pipeline.py` file.
+
+Sometimes you can't load all the pipeline component weights from an official repository. In this case, the other components should be passed directly to the pipeline:
-Now, it might very well be that only some of your pipeline components weights can be downloaded from an official repo.
-The other components should then be passed directly to init as is the case for the ClIP guidance notebook [here](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb#scrollTo=z9Kglma6hjki).
+```python
+import torch
+from diffusers import DDIMScheduler, DiffusionPipeline
+from transformers import CLIPFeatureExtractor, CLIPModel
+
+model_id = "CompVis/stable-diffusion-v1-4"
+clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
+
+feature_extractor = CLIPFeatureExtractor.from_pretrained(clip_model_id)
+clip_model = CLIPModel.from_pretrained(clip_model_id, torch_dtype=torch.float16)
+# load a scheduler from the same checkpoint (one reasonable choice; any compatible scheduler works)
+scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
+
+pipeline = DiffusionPipeline.from_pretrained(
+ model_id,
+ custom_pipeline="clip_guided_stable_diffusion",
+ clip_model=clip_model,
+ feature_extractor=feature_extractor,
+ scheduler=scheduler,
+ torch_dtype=torch.float16,
+)
+```
-The magic behind all of this is that we load the code directly from GitHub. You can check it out in more detail if you follow the functionality defined here:
+The magic behind community pipelines is contained in the following code. It allows the community pipeline to be loaded from GitHub or the Hub, and it'll be available to all 🧨 Diffusers packages.
```python
# 2. Load the pipeline class, if using custom module then load it from the hub
@@ -164,6 +179,3 @@ else:
diffusers_module = importlib.import_module(cls.__module__.split(".")[0])
pipeline_class = getattr(diffusers_module, config_dict["_class_name"])
```
-
-This is why a community pipeline merged to GitHub will be directly available to all `diffusers` packages.
-
diff --git a/docs/source/en/using-diffusers/custom_pipeline_examples.mdx b/docs/source/en/using-diffusers/custom_pipeline_examples.mdx
index 2dfa71f0d33c..93ac6d1f782c 100644
--- a/docs/source/en/using-diffusers/custom_pipeline_examples.mdx
+++ b/docs/source/en/using-diffusers/custom_pipeline_examples.mdx
@@ -10,7 +10,7 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
specific language governing permissions and limitations under the License.
-->
-# Custom Pipelines
+# Community pipelines
> **For more information about community pipelines, please have a look at [this issue](https://github.com/huggingface/diffusers/issues/841).**
diff --git a/docs/source/en/using-diffusers/custom_pipeline_overview.mdx b/docs/source/en/using-diffusers/custom_pipeline_overview.mdx
index 934e639983d2..3c5df7c0dd6e 100644
--- a/docs/source/en/using-diffusers/custom_pipeline_overview.mdx
+++ b/docs/source/en/using-diffusers/custom_pipeline_overview.mdx
@@ -10,19 +10,21 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
specific language governing permissions and limitations under the License.
-->
-# Loading and Adding Custom Pipelines
+# Load community pipelines
-Diffusers allows you to conveniently load any custom pipeline from the Hugging Face Hub as well as any [official community pipeline](https://github.com/huggingface/diffusers/tree/main/examples/community)
-via the [`DiffusionPipeline`] class.
+Community pipelines are any [`DiffusionPipeline`] class that is different from the original implementation as specified in its paper (for example, the [`StableDiffusionControlNetPipeline`] corresponds to the [Text-to-Image Generation with ControlNet Conditioning](https://arxiv.org/abs/2302.05543) paper). They provide additional functionality or extend the original implementation of a pipeline.
-## Loading custom pipelines from the Hub
+There are many cool community pipelines like [Speech to Image](https://github.com/huggingface/diffusers/tree/main/examples/community#speech-to-image) or [Composable Stable Diffusion](https://github.com/huggingface/diffusers/tree/main/examples/community#composable-stable-diffusion), and you can find all the official community pipelines [here](https://github.com/huggingface/diffusers/tree/main/examples/community).
-Custom pipelines can be easily loaded from any model repository on the Hub that defines a diffusion pipeline in a `pipeline.py` file.
-Let's load a dummy pipeline from [hf-internal-testing/diffusers-dummy-pipeline](https://huggingface.co/hf-internal-testing/diffusers-dummy-pipeline).
+To load any community pipeline on the Hub, pass the repository id of the community pipeline to the `custom_pipeline` argument along with the model repository you'd like to load the pipeline weights and components from. For example, the code below loads a dummy pipeline from [`hf-internal-testing/diffusers-dummy-pipeline`](https://huggingface.co/hf-internal-testing/diffusers-dummy-pipeline/blob/main/pipeline.py) and the pipeline weights and components from [`google/ddpm-cifar10-32`](https://huggingface.co/google/ddpm-cifar10-32):
-All you need to do is pass the custom pipeline repo id with the `custom_pipeline` argument alongside the repo from where you wish to load the pipeline modules.
+
-```python
+🔒 By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically!
+
+
+
+```py
from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained(
@@ -30,25 +32,9 @@ pipeline = DiffusionPipeline.from_pretrained(
)
```
-This will load the custom pipeline as defined in the [model repository](https://huggingface.co/hf-internal-testing/diffusers-dummy-pipeline/blob/main/pipeline.py).
-
-
-
-By loading a custom pipeline from the Hugging Face Hub, you are trusting that the code you are loading
-is safe ๐. Make sure to check out the code online before loading & running it automatically.
-
-
-
-## Loading official community pipelines
+Loading an official community pipeline is similar, but you can mix loading weights from an official repository id and passing pipeline components directly. The example below loads the community [CLIP Guided Stable Diffusion](https://github.com/huggingface/diffusers/tree/main/examples/community#clip-guided-stable-diffusion) pipeline, and you can pass the CLIP model components directly to it:
-Community pipelines are summarized in the [community examples folder](https://github.com/huggingface/diffusers/tree/main/examples/community).
-
-Similarly, you need to pass both the *repo id* from where you wish to load the weights as well as the `custom_pipeline` argument. Here the `custom_pipeline` argument should consist simply of the filename of the community pipeline excluding the `.py` suffix, *e.g.* `clip_guided_stable_diffusion`.
-
-Since community pipelines are often more complex, one can mix loading weights from an official *repo id*
-and passing pipeline modules directly.
-
-```python
+```py
from diffusers import DiffusionPipeline
from transformers import CLIPImageProcessor, CLIPModel
@@ -65,59 +51,4 @@ pipeline = DiffusionPipeline.from_pretrained(
)
```
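+
+As a rough usage sketch, you could then generate an image like this (the prompt, the `clip_guidance_scale` value, and moving the pipeline to CUDA are illustrative assumptions - check the community pipeline's docstring for its exact call signature):
+
+```py
+pipeline = pipeline.to("cuda")
+
+prompt = "a photograph of an astronaut riding a horse"
+# clip_guidance_scale controls how strongly CLIP steers the image toward the prompt
+image = pipeline(prompt, clip_guidance_scale=100).images[0]
+image.save("astronaut.png")
+```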
-## Adding custom pipelines to the Hub
-
-To add a custom pipeline to the Hub, all you need to do is to define a pipeline class that inherits
-from [`DiffusionPipeline`] in a `pipeline.py` file.
-Make sure that the whole pipeline is encapsulated within a single class and that the `pipeline.py` file
-has only one such class.
-
-Let's quickly define an example pipeline.
-
-
-```python
-import torch
-from diffusers import DiffusionPipeline
-
-
-class MyPipeline(DiffusionPipeline):
- def __init__(self, unet, scheduler):
- super().__init__()
-
- self.register_modules(unet=unet, scheduler=scheduler)
-
- @torch.no_grad()
- def __call__(self, batch_size: int = 1, num_inference_steps: int = 50):
- # Sample gaussian noise to begin loop
- image = torch.randn(
- (batch_size, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size)
- )
-
- image = image.to(self.device)
-
- # set step values
- self.scheduler.set_timesteps(num_inference_steps)
-
- for t in self.progress_bar(self.scheduler.timesteps):
- # 1. predict noise model_output
- model_output = self.unet(image, t).sample
-
- # 2. predict previous mean of image x_t-1 and add variance depending on eta
- # eta corresponds to η in paper and should be between [0, 1]
- # do x_t -> x_t-1
- image = self.scheduler.step(model_output, t, image, eta).prev_sample
-
- image = (image / 2 + 0.5).clamp(0, 1)
- image = image.cpu().permute(0, 2, 3, 1).numpy()
-
- return image
-```
-
-Now you can upload this short file under the name `pipeline.py` in your preferred [model repository](https://huggingface.co/docs/hub/models-uploading). For Stable Diffusion pipelines, you may also [join the community organisation for shared pipelines](https://huggingface.co/organizations/sd-diffusers-pipelines-library/share/BUPyDUuHcciGTOKaExlqtfFcyCZsVFdrjr) to upload yours.
-Finally, we can load the custom pipeline by passing the model repository name, *e.g.* `sd-diffusers-pipelines-library/my_custom_pipeline` alongside the model repository from where we want to load the `unet` and `scheduler` components.
-
-```python
-my_pipeline = DiffusionPipeline.from_pretrained(
- "google/ddpm-cifar10-32", custom_pipeline="patrickvonplaten/my_custom_pipeline"
-)
-```
+For more information about community pipelines, take a look at the [Community pipelines](custom_pipeline_examples) guide for how to use them, and if you're interested in contributing a community pipeline, check out the [How to contribute a community pipeline](contribute_pipeline) guide!
\ No newline at end of file