Commit 179174e

Merge pull request #7 from Pseudo-Lab/update-0.16.1
Update 0.16.1
2 parents: 2d8f274 + 4d494c6

File tree: 199 files changed (+17301, −799 lines)


.github/workflows/pr_tests.yml

Lines changed: 23 additions & 10 deletions

```diff
@@ -21,22 +21,27 @@ jobs:
       fail-fast: false
       matrix:
         config:
-          - name: Fast PyTorch CPU tests on Ubuntu
-            framework: pytorch
+          - name: Fast PyTorch Pipeline CPU tests
+            framework: pytorch_pipelines
             runner: docker-cpu
             image: diffusers/diffusers-pytorch-cpu
-            report: torch_cpu
-          - name: Fast Flax CPU tests on Ubuntu
+            report: torch_cpu_pipelines
+          - name: Fast PyTorch Models & Schedulers CPU tests
+            framework: pytorch_models
+            runner: docker-cpu
+            image: diffusers/diffusers-pytorch-cpu
+            report: torch_cpu_models_schedulers
+          - name: Fast Flax CPU tests
             framework: flax
             runner: docker-cpu
             image: diffusers/diffusers-flax-cpu
             report: flax_cpu
-          - name: Fast ONNXRuntime CPU tests on Ubuntu
+          - name: Fast ONNXRuntime CPU tests
             framework: onnxruntime
             runner: docker-cpu
             image: diffusers/diffusers-onnxruntime-cpu
-          - name: PyTorch Example CPU tests on Ubuntu
+          - name: PyTorch Example CPU tests
             framework: pytorch_examples
             runner: docker-cpu
             image: diffusers/diffusers-pytorch-cpu
@@ -71,21 +76,29 @@ jobs:
         run: |
           python utils/print_env.py

-      - name: Run fast PyTorch CPU tests
-        if: ${{ matrix.config.framework == 'pytorch' }}
+      - name: Run fast PyTorch Pipeline CPU tests
+        if: ${{ matrix.config.framework == 'pytorch_pipelines' }}
         run: |
           python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
             -s -v -k "not Flax and not Onnx" \
             --make-reports=tests_${{ matrix.config.report }} \
-            tests/
+            tests/pipelines
+
+      - name: Run fast PyTorch Model Scheduler CPU tests
+        if: ${{ matrix.config.framework == 'pytorch_models' }}
+        run: |
+          python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
+            -s -v -k "not Flax and not Onnx" \
+            --make-reports=tests_${{ matrix.config.report }} \
+            tests/models tests/schedulers tests/others

       - name: Run fast Flax TPU tests
         if: ${{ matrix.config.framework == 'flax' }}
         run: |
           python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
             -s -v -k "Flax" \
             --make-reports=tests_${{ matrix.config.report }} \
-            tests/
+            tests

       - name: Run fast ONNXRuntime CPU tests
         if: ${{ matrix.config.framework == 'onnxruntime' }}
```
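The net effect of the workflow change is that the former single fast-PyTorch job is split so each matrix entry runs a disjoint slice of the test tree. A minimal Python sketch summarizing that mapping (a hypothetical helper for illustration only, not part of the workflow; only the frameworks whose paths appear in this diff are included):

```python
# Hypothetical summary of the test-path split introduced by this change.
# Keys are `framework` values from the workflow matrix; values are the
# directory targets each job passes to pytest.
TEST_PATHS = {
    "pytorch_pipelines": ["tests/pipelines"],
    "pytorch_models": ["tests/models", "tests/schedulers", "tests/others"],
    "flax": ["tests"],
}


def pytest_targets(framework):
    """Return the test directories a matrix entry runs (default: everything)."""
    return TEST_PATHS.get(framework, ["tests"])
```

Splitting by directory rather than by `-k` expression keeps the two PyTorch jobs from re-collecting each other's tests, which is what shortens the PR feedback loop.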

docs/source/en/_toctree.yml

Lines changed: 9 additions & 3 deletions

```diff
@@ -25,7 +25,7 @@
   - local: using-diffusers/schedulers
     title: Load and compare different schedulers
   - local: using-diffusers/custom_pipeline_overview
-    title: Load and add custom pipelines
+    title: Load community pipelines
   - local: using-diffusers/kerascv
     title: Load KerasCV Stable Diffusion checkpoints
   title: Loading & Hub
@@ -47,9 +47,9 @@
   - local: using-diffusers/reproducibility
     title: Create reproducible pipelines
   - local: using-diffusers/custom_pipeline_examples
-    title: Community Pipelines
+    title: Community pipelines
   - local: using-diffusers/contribute_pipeline
-    title: How to contribute a Pipeline
+    title: How to contribute a community pipeline
   - local: using-diffusers/using_safetensors
     title: Using safetensors
   - local: using-diffusers/stable_diffusion_jax_how_to
@@ -74,6 +74,8 @@
     title: ControlNet
   - local: training/instructpix2pix
     title: InstructPix2Pix Training
+  - local: training/custom_diffusion
+    title: Custom Diffusion
   title: Training
 - sections:
   - local: using-diffusers/rl
@@ -103,6 +105,8 @@
     title: MPS
   - local: optimization/habana
     title: Habana Gaudi
+  - local: optimization/tome
+    title: Token Merging
   title: Optimization/Special Hardware
 - sections:
   - local: conceptual/philosophy
@@ -150,6 +154,8 @@
     title: DDPM
   - local: api/pipelines/dit
     title: DiT
+  - local: api/pipelines/if
+    title: IF
   - local: api/pipelines/latent_diffusion
     title: Latent Diffusion
   - local: api/pipelines/paint_by_example
```

docs/source/en/api/loaders.mdx

Lines changed: 4 additions & 0 deletions

```diff
@@ -36,3 +36,7 @@ API to load such adapter neural networks via the [`loaders.py` module](https://g
 ### LoraLoaderMixin

 [[autodoc]] loaders.LoraLoaderMixin
+
+### FromCkptMixin
+
+[[autodoc]] loaders.FromCkptMixin
```

docs/source/en/api/pipelines/audioldm.mdx

Lines changed: 8 additions & 6 deletions

````diff
@@ -25,14 +25,14 @@ This pipeline was contributed by [sanchit-gandhi](https://huggingface.co/sanchit

 ## Text-to-Audio

-The [`AudioLDMPipeline`] can be used to load pre-trained weights from [cvssp/audioldm](https://huggingface.co/cvssp/audioldm) and generate text-conditional audio outputs:
+The [`AudioLDMPipeline`] can be used to load pre-trained weights from [cvssp/audioldm-s-full-v2](https://huggingface.co/cvssp/audioldm-s-full-v2) and generate text-conditional audio outputs:

 ```python
 from diffusers import AudioLDMPipeline
 import torch
 import scipy

-repo_id = "cvssp/audioldm"
+repo_id = "cvssp/audioldm-s-full-v2"
 pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
 pipe = pipe.to("cuda")

@@ -56,7 +56,7 @@ Inference:
 ### How to load and use different schedulers

 The AudioLDM pipeline uses [`DDIMScheduler`] scheduler by default. But `diffusers` provides many other schedulers
-that can be used with the AudioLDM pipeline such as [`PNDMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`],
+that can be used with the AudioLDM pipeline such as [`PNDMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`],
 [`EulerAncestralDiscreteScheduler`] etc. We recommend using the [`DPMSolverMultistepScheduler`] as it's currently the fastest
 scheduler there is.

@@ -68,12 +68,14 @@ method, or pass the `scheduler` argument to the `from_pretrained` method of the
 >>> from diffusers import AudioLDMPipeline, DPMSolverMultistepScheduler
 >>> import torch

->>> pipeline = AudioLDMPipeline.from_pretrained("cvssp/audioldm", torch_dtype=torch.float16)
+>>> pipeline = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16)
 >>> pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)

 >>> # or
->>> dpm_scheduler = DPMSolverMultistepScheduler.from_pretrained("cvssp/audioldm", subfolder="scheduler")
->>> pipeline = AudioLDMPipeline.from_pretrained("cvssp/audioldm", scheduler=dpm_scheduler, torch_dtype=torch.float16)
+>>> dpm_scheduler = DPMSolverMultistepScheduler.from_pretrained("cvssp/audioldm-s-full-v2", subfolder="scheduler")
+>>> pipeline = AudioLDMPipeline.from_pretrained(
+...     "cvssp/audioldm-s-full-v2", scheduler=dpm_scheduler, torch_dtype=torch.float16
+... )
 ```

 ## AudioLDMPipeline
````
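The scheduler swap shown in the audioldm docs works because diffusers schedulers can be rebuilt from another scheduler's config via `from_config`, so the timestep settings carry over. A minimal stand-in sketch of that pattern (the classes below are dummies for illustration, not the real diffusers API):

```python
# Stand-in classes illustrating the scheduler-swap pattern from the docs above.
# DDIMLike / DPMSolverLike / PipelineLike are NOT real diffusers classes.

class DDIMLike:
    def __init__(self, config):
        self.config = config


class DPMSolverLike:
    def __init__(self, config):
        self.config = config

    @classmethod
    def from_config(cls, config):
        # Build a new scheduler reusing the old one's config, so settings
        # like the number of training timesteps are preserved.
        return cls(dict(config))


class PipelineLike:
    def __init__(self):
        # Pipelines ship with a default scheduler (DDIM for AudioLDM).
        self.scheduler = DDIMLike({"num_train_timesteps": 1000})


pipe = PipelineLike()
# Swap in the faster solver without reloading the whole pipeline:
pipe.scheduler = DPMSolverLike.from_config(pipe.scheduler.config)
```

Hot-swapping only the scheduler attribute avoids re-downloading or re-instantiating the heavy model components, which is why the docs show both the `from_config` route and the `scheduler=` argument to `from_pretrained`.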
