
Commit f2835f2

Merge branch 'main' into main
Merge branch 'main' into main
2 parents: 699ca9b + a4b233e

214 files changed, with 7,162 additions and 2,545 deletions.


.github/workflows/pr_tests.yml (1 addition, 1 deletion)

@@ -40,7 +40,7 @@ jobs:
 framework: pytorch_examples
 runner: docker-cpu
 image: diffusers/diffusers-pytorch-cpu
-report: torch_cpu
+report: torch_example_cpu

 name: ${{ matrix.config.name }}

.github/workflows/push_tests_fast.yml (1 addition, 1 deletion)

@@ -38,7 +38,7 @@ jobs:
 framework: pytorch_examples
 runner: docker-cpu
 image: diffusers/diffusers-pytorch-cpu
-report: torch_cpu
+report: torch_example_cpu

 name: ${{ matrix.config.name }}
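The two workflow changes above are the same one-line rename of the example-test report. Pieced together from the context lines, the job-matrix entry being edited might look roughly like the sketch below; only the field values come from the diff, while the nesting and the `name` value are assumptions:

```yaml
# Hypothetical reconstruction of the matrix entry touched by this
# commit; field nesting and the name value are assumed, not shown
# in the diff.
- name: Fast PyTorch Example CPU tests
  framework: pytorch_examples
  runner: docker-cpu
  image: diffusers/diffusers-pytorch-cpu
  report: torch_example_cpu
```

The rename gives the examples job its own report artifact (`torch_example_cpu`) instead of reusing the `torch_cpu` name from the regular PyTorch CPU job, so the two reports no longer collide.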

CONTRIBUTING.md (8 additions, 1 deletion)

@@ -394,8 +394,15 @@ passes. You should run the tests impacted by your changes like this:
 ```bash
 $ pytest tests/<TEST_TO_RUN>.py
 ```
+
+Before you run the tests, please make sure you install the dependencies required for testing. You can do so
+with this command:

-You can also run the full suite with the following command, but it takes
+```bash
+$ pip install -e ".[test]"
+```
+
+You can run the full test suite with the following command, but it takes
 a beefy machine to produce a result in a decent amount of time now that
 Diffusers has grown a lot. Here is the command for it:
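The CONTRIBUTING.md change above inserts an install step before the existing test instructions. Put together, the workflow the updated guide describes is sketched below; the `<TEST_TO_RUN>` placeholder is kept from the document and must be replaced with a real test file in a Diffusers checkout:

```bash
# Install the library in editable mode together with the test
# dependencies (the ".[test]" extra referenced by the new docs).
pip install -e ".[test]"

# Run only the tests affected by your change:
pytest tests/<TEST_TO_RUN>.py

# Or run the full suite (slow; needs a beefy machine):
pytest tests/
```

Installing the `[test]` extra first avoids the common failure mode where `pytest` or other test-only dependencies are missing from a plain `pip install -e .` environment.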
docs/source/en/_toctree.yml (7 additions, 1 deletion)

@@ -4,7 +4,7 @@
 - local: quicktour
   title: Quicktour
 - local: stable_diffusion
-  title: Stable Diffusion
+  title: Effective and efficient diffusion
 - local: installation
   title: Installation
 title: Get started

@@ -52,6 +52,8 @@
   title: How to contribute a Pipeline
 - local: using-diffusers/using_safetensors
   title: Using safetensors
+- local: using-diffusers/stable_diffusion_jax_how_to
+  title: Stable Diffusion in JAX/Flax
 - local: using-diffusers/weighted_prompts
   title: Weighting Prompts
 title: Pipelines for Inference

@@ -95,6 +97,8 @@
   title: ONNX
 - local: optimization/open_vino
   title: OpenVINO
+- local: optimization/coreml
+  title: Core ML
 - local: optimization/mps
   title: MPS
 - local: optimization/habana

@@ -202,6 +206,8 @@
   title: Stochastic Karras VE
 - local: api/pipelines/text_to_video
   title: Text-to-Video
+- local: api/pipelines/text_to_video_zero
+  title: Text-to-Video Zero
 - local: api/pipelines/unclip
   title: UnCLIP
 - local: api/pipelines/latent_diffusion_uncond

docs/source/en/api/loaders.mdx (8 additions, 0 deletions)

@@ -28,3 +28,11 @@ API to load such adapter neural networks via the [`loaders.py` module](https://g
 ### UNet2DConditionLoadersMixin

 [[autodoc]] loaders.UNet2DConditionLoadersMixin
+
+### TextualInversionLoaderMixin
+
+[[autodoc]] loaders.TextualInversionLoaderMixin
+
+### LoraLoaderMixin
+
+[[autodoc]] loaders.LoraLoaderMixin

docs/source/en/api/pipelines/alt_diffusion.mdx (2 additions, 2 deletions)

@@ -28,11 +28,11 @@ The abstract of the paper is the following:

 ## Tips

-- AltDiffusion is conceptually exactly the same as [Stable Diffusion](./api/pipelines/stable_diffusion/overview).
+- AltDiffusion is conceptually exactly the same as [Stable Diffusion](./stable_diffusion/overview).

 - *Run AltDiffusion*

-  AltDiffusion can be tested very easily with the [`AltDiffusionPipeline`], [`AltDiffusionImg2ImgPipeline`] and the `"BAAI/AltDiffusion-m9"` checkpoint exactly in the same way it is shown in the [Conditional Image Generation Guide](./using-diffusers/conditional_image_generation) and the [Image-to-Image Generation Guide](./using-diffusers/img2img).
+  AltDiffusion can be tested very easily with the [`AltDiffusionPipeline`], [`AltDiffusionImg2ImgPipeline`] and the `"BAAI/AltDiffusion-m9"` checkpoint exactly in the same way it is shown in the [Conditional Image Generation Guide](../../using-diffusers/conditional_image_generation) and the [Image-to-Image Generation Guide](../../using-diffusers/img2img).

 - *How to load and use different schedulers.*
docs/source/en/api/pipelines/overview.mdx (1 addition, 0 deletions)

@@ -83,6 +83,7 @@ available a colab notebook to directly try them out.
 | [versatile_diffusion](./versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation |
 | [versatile_diffusion](./versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation |
 | [vq_diffusion](./vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |
+| [text_to_video_zero](./text_to_video_zero) | [Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators](https://arxiv.org/abs/2303.13439) | Text-to-Video Generation |


 **Note**: Pipelines are simple examples of how to play around with the diffusion systems as described in the corresponding papers.

docs/source/en/api/pipelines/semantic_stable_diffusion.mdx (3 additions, 3 deletions)

@@ -24,11 +24,11 @@ The abstract of the paper is the following:

 | Pipeline | Tasks | Colab | Demo
 |---|---|:---:|:---:|
-| [pipeline_semantic_stable_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion) | *Text-to-Image Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ml-research/semantic-image-editing/blob/main/examples/SemanticGuidance.ipynb) | [Coming Soon](https://huggingface.co/AIML-TUDA)
+| [pipeline_semantic_stable_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py) | *Text-to-Image Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ml-research/semantic-image-editing/blob/main/examples/SemanticGuidance.ipynb) | [Coming Soon](https://huggingface.co/AIML-TUDA)

 ## Tips

-- The Semantic Guidance pipeline can be used with any [Stable Diffusion](./api/pipelines/stable_diffusion/text2img) checkpoint.
+- The Semantic Guidance pipeline can be used with any [Stable Diffusion](./stable_diffusion/text2img) checkpoint.

 ### Run Semantic Guidance

@@ -67,7 +67,7 @@ out = pipe(
 )
 ```

-For more examples check the colab notebook.
+For more examples check the Colab notebook.

 ## StableDiffusionSafePipelineOutput
 [[autodoc]] pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput

docs/source/en/api/pipelines/spectrogram_diffusion.mdx (1 addition, 1 deletion)

@@ -30,7 +30,7 @@ As depicted above the model takes as input a MIDI file and tokenizes it into a s

 | Pipeline | Tasks | Colab
 |---|---|:---:|
-| [pipeline_spectrogram_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion) | *Unconditional Audio Generation* | - |
+| [pipeline_spectrogram_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py) | *Unconditional Audio Generation* | - |


 ## Example usage

docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx (1 addition, 1 deletion)

@@ -131,7 +131,7 @@ This should take only around 3-4 seconds on GPU (depending on hardware). The out
 ![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vermeer_disco_dancing.png)


-**Note**: To see how to run all other ControlNet checkpoints, please have a look at [ControlNet with Stable Diffusion 1.5](#controlnet-with-stable-diffusion-1.5)
+**Note**: To see how to run all other ControlNet checkpoints, please have a look at [ControlNet with Stable Diffusion 1.5](#controlnet-with-stable-diffusion-1.5).

 <!-- TODO: add space -->