
Update 0.16.1 #7

Merged (73 commits, Jun 5, 2023)

Commits
e126a82
[Tests] Speed up panorama tests (#3067)
sayakpaul Apr 12, 2023
0a73b4d
[Post release] v0.16.0dev (#3072)
patrickvonplaten Apr 12, 2023
d06e069
Adds profiling flags, computes train metrics average. (#3053)
andsteing Apr 12, 2023
46c52f9
[Pipelines] Make sure that None functions are correctly not saved (#3…
patrickvonplaten Apr 12, 2023
e748b3c
doc string example remove from_pt (#3083)
yiyixuxu Apr 13, 2023
3a9d7d9
[Tests] parallelize (#3078)
patrickvonplaten Apr 13, 2023
3bf5ce2
Throw deprecation warning for return_cached_folder (#3092)
patrickvonplaten Apr 13, 2023
3eaead0
Allow SD attend and excite pipeline to work with any size output imag…
jcoffland Apr 13, 2023
d0f2582
[docs] Update community pipeline docs (#2989)
stevhliu Apr 13, 2023
5c9dd0a
Add to support Guess Mode for StableDiffusionControlnetPipleline (#2998)
takuma104 Apr 14, 2023
eb2ef31
fix default value for attend-and-excite (#3099)
yiyixuxu Apr 14, 2023
1bd4c9e
remvoe one line as requested by gc team (#3077)
yiyixuxu Apr 14, 2023
b811964
ddpm custom timesteps (#3007)
williamberman Apr 14, 2023
807f69b
Fix breaking change in `pipeline_stable_diffusion_controlnet.py` (#3118)
remorses Apr 16, 2023
cfc99ad
Add global pooling to controlnet (#3121)
patrickvonplaten Apr 16, 2023
beb848e
[Bug fix] Fix img2img processor with safety checker (#3127)
patrickvonplaten Apr 17, 2023
ca783a0
[Bug fix] Make sure correct timesteps are chosen for img2img (#3128)
patrickvonplaten Apr 17, 2023
ed8fd38
Improve deprecation warnings (#3131)
patrickvonplaten Apr 17, 2023
703307e
Fix config deprecation (#3129)
patrickvonplaten Apr 17, 2023
3b641ea
feat: verfication of multi-gpu support for select examples. (#3126)
sayakpaul Apr 18, 2023
cd8b750
speed up attend-and-excite fast tests (#3079)
yiyixuxu Apr 18, 2023
8ecdd3e
Optimize log_validation in train_controlnet_flax (#3110)
cgarciae Apr 18, 2023
f2df39f
make style
patrickvonplaten Apr 18, 2023
4bc157f
Correct textual inversion readme (#3145)
patrickvonplaten Apr 18, 2023
f0c74e9
Add unet act fn to other model components (#3136)
williamberman Apr 18, 2023
fc18839
class labels timestep embeddings projection dtype cast (#3137)
williamberman Apr 18, 2023
bdeff4d
[ckpt loader] Allow loading the Inpaint and Img2Img pipelines, while …
cmdr2 Apr 19, 2023
86ecd4b
add from_ckpt method as Mixin (#2318)
1lint Apr 19, 2023
bba1c1d
Add TensorRT SD/txt2img Community Pipeline to diffusers along with Te…
asfiyab-nvidia Apr 19, 2023
c8fdfe4
Correct `Transformer2DModel.forward` docstring (#3074)
offchan42 Apr 19, 2023
3becd36
Update pipeline_stable_diffusion_inpaint_legacy.py (#2903)
hwuebben Apr 19, 2023
a4c91be
Modified altdiffusion pipline to support altdiffusion-m18 (#2993)
superhero-7 Apr 19, 2023
7e6886f
controlnet training resize inputs to multiple of 8 (#3135)
williamberman Apr 19, 2023
3979aac
adding custom diffusion training to diffusers examples (#3031)
nupurkmr9 Apr 20, 2023
a121e05
Update custom_diffusion.mdx (#3165)
Apr 20, 2023
a5b242d
Added distillation for quantization example on textual inversion. (#2…
XinyuYe-Intel Apr 20, 2023
1747005
make style
patrickvonplaten Apr 20, 2023
8d5906a
Merge branch 'main' of https://github.com/huggingface/diffusers
patrickvonplaten Apr 20, 2023
7b0ba48
Update Noise Autocorrelation Loss Function for Pix2PixZero Pipeline (…
clarencechen Apr 20, 2023
3045fb2
[DreamBooth] add text encoder LoRA support in the DreamBooth training…
sayakpaul Apr 20, 2023
9bce375
Update Habana Gaudi documentation (#3169)
regisss Apr 21, 2023
9c85611
Add model offload to x4 upscaler (#3187)
patrickvonplaten Apr 21, 2023
2f6351b
[docs] Deterministic algorithms (#3172)
stevhliu Apr 21, 2023
e573ae0
Update custom_diffusion.mdx to credit the author (#3163)
sayakpaul Apr 21, 2023
05d9bae
Fix TensorRT community pipeline device set function (#3157)
asfiyab-nvidia Apr 21, 2023
bc0392a
make `from_flax` work for controlnet (#3161)
yiyixuxu Apr 21, 2023
391cfcd
[docs] Clarify training args (#3146)
stevhliu Apr 21, 2023
2c04e58
Multi Vector Textual Inversion (#3144)
patrickvonplaten Apr 21, 2023
11f527a
Add `Karras sigmas` to HeunDiscreteScheduler (#3160)
youssefadr Apr 21, 2023
90eac14
[AudioLDM] Fix dtype of returned waveform (#3189)
sanchit-gandhi Apr 21, 2023
20e426c
Fix bug in train_dreambooth_lora (#3183)
crywang Apr 22, 2023
9965cb5
[Community Pipelines] Update lpw_stable_diffusion pipeline (#3197)
SkyTNT Apr 22, 2023
425192f
Make sure VAE attention works with Torch 2_0 (#3200)
patrickvonplaten Apr 22, 2023
91a2a80
Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline" (…
williamberman Apr 22, 2023
c5933c9
[Bug fix] Fix batch size attention head size mismatch (#3214)
patrickvonplaten Apr 24, 2023
0ddc5bf
fix mixed precision training on train_dreambooth_inpaint_lora (#3138)
themrzmaster Apr 25, 2023
e9edbfc
adding enable_vae_tiling and disable_vae_tiling functions (#3225)
init-22 Apr 25, 2023
131312c
Add ControlNet v1.1 docs (#3226)
patrickvonplaten Apr 25, 2023
0d196f9
Fix issue in maybe_convert_prompt (#3188)
pdoane Apr 25, 2023
730e01e
Sync cache version check from transformers (#3179)
ychfan Apr 25, 2023
1ffcc92
Fix docs text inversion (#3166)
patrickvonplaten Apr 25, 2023
e51f19a
add model (#3230)
patrickvonplaten Apr 25, 2023
da2ce1a
Allow return pt x4 (#3236)
patrickvonplaten Apr 26, 2023
abbf3c1
Allow fp16 attn for x4 upscaler (#3239)
patrickvonplaten Apr 26, 2023
744663f
fix fast test (#3241)
patrickvonplaten Apr 26, 2023
977162c
Adds a document on token merging (#3208)
sayakpaul Apr 26, 2023
46ceba5
[AudioLDM] Update docs to use updated ckpt (#3240)
sanchit-gandhi Apr 26, 2023
6ba0efb
Release: v0.16.0
patrickvonplaten Apr 26, 2023
9c876a5
merge conflict
apolinario Apr 27, 2023
4c476e9
Fix community pipelines (#3266)
patrickvonplaten Apr 27, 2023
23159f4
Allow disabling torch 2_0 attention (#3273)
patrickvonplaten Apr 28, 2023
9b14ce3
Release: v0.16.1
patrickvonplaten Apr 28, 2023
4d494c6
Merge tag 'v0.16.1' of https://github.com/huggingface/diffusers into …
tjdtnsu May 23, 2023
33 changes: 23 additions & 10 deletions .github/workflows/pr_tests.yml
@@ -21,22 +21,27 @@ jobs:
       fail-fast: false
       matrix:
         config:
-          - name: Fast PyTorch CPU tests on Ubuntu
-            framework: pytorch
+          - name: Fast PyTorch Pipeline CPU tests
+            framework: pytorch_pipelines
             runner: docker-cpu
             image: diffusers/diffusers-pytorch-cpu
-            report: torch_cpu
-          - name: Fast Flax CPU tests on Ubuntu
+            report: torch_cpu_pipelines
+          - name: Fast PyTorch Models & Schedulers CPU tests
+            framework: pytorch_models
+            runner: docker-cpu
+            image: diffusers/diffusers-pytorch-cpu
+            report: torch_cpu_models_schedulers
+          - name: Fast Flax CPU tests
             framework: flax
             runner: docker-cpu
             image: diffusers/diffusers-flax-cpu
             report: flax_cpu
-          - name: Fast ONNXRuntime CPU tests on Ubuntu
+          - name: Fast ONNXRuntime CPU tests
             framework: onnxruntime
             runner: docker-cpu
             image: diffusers/diffusers-onnxruntime-cpu
             report: onnx_cpu
-          - name: PyTorch Example CPU tests on Ubuntu
+          - name: PyTorch Example CPU tests
             framework: pytorch_examples
             runner: docker-cpu
             image: diffusers/diffusers-pytorch-cpu
@@ -71,21 +76,29 @@ jobs:
       run: |
         python utils/print_env.py
 
-    - name: Run fast PyTorch CPU tests
-      if: ${{ matrix.config.framework == 'pytorch' }}
+    - name: Run fast PyTorch Pipeline CPU tests
+      if: ${{ matrix.config.framework == 'pytorch_pipelines' }}
       run: |
         python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
           -s -v -k "not Flax and not Onnx" \
           --make-reports=tests_${{ matrix.config.report }} \
-          tests/
+          tests/pipelines
 
+    - name: Run fast PyTorch Model Scheduler CPU tests
+      if: ${{ matrix.config.framework == 'pytorch_models' }}
+      run: |
+        python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
+          -s -v -k "not Flax and not Onnx" \
+          --make-reports=tests_${{ matrix.config.report }} \
+          tests/models tests/schedulers tests/others
+
     - name: Run fast Flax TPU tests
       if: ${{ matrix.config.framework == 'flax' }}
       run: |
         python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
           -s -v -k "Flax" \
           --make-reports=tests_${{ matrix.config.report }} \
-          tests/
+          tests
 
     - name: Run fast ONNXRuntime CPU tests
       if: ${{ matrix.config.framework == 'onnxruntime' }}
12 changes: 9 additions & 3 deletions docs/source/en/_toctree.yml
@@ -25,7 +25,7 @@
   - local: using-diffusers/schedulers
     title: Load and compare different schedulers
   - local: using-diffusers/custom_pipeline_overview
-    title: Load and add custom pipelines
+    title: Load community pipelines
   - local: using-diffusers/kerascv
     title: Load KerasCV Stable Diffusion checkpoints
   title: Loading & Hub
@@ -47,9 +47,9 @@
   - local: using-diffusers/reproducibility
     title: Create reproducible pipelines
   - local: using-diffusers/custom_pipeline_examples
-    title: Community Pipelines
+    title: Community pipelines
   - local: using-diffusers/contribute_pipeline
-    title: How to contribute a Pipeline
+    title: How to contribute a community pipeline
   - local: using-diffusers/using_safetensors
     title: Using safetensors
   - local: using-diffusers/stable_diffusion_jax_how_to
@@ -74,6 +74,8 @@
     title: ControlNet
   - local: training/instructpix2pix
     title: InstructPix2Pix Training
+  - local: training/custom_diffusion
+    title: Custom Diffusion
   title: Training
 - sections:
   - local: using-diffusers/rl
@@ -103,6 +105,8 @@
     title: MPS
   - local: optimization/habana
     title: Habana Gaudi
+  - local: optimization/tome
+    title: Token Merging
   title: Optimization/Special Hardware
 - sections:
   - local: conceptual/philosophy
@@ -150,6 +154,8 @@
     title: DDPM
   - local: api/pipelines/dit
     title: DiT
+  - local: api/pipelines/if
+    title: IF
   - local: api/pipelines/latent_diffusion
     title: Latent Diffusion
   - local: api/pipelines/paint_by_example
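The new `optimization/tome` entry above corresponds to the token-merging document added in the commit list. As a hedged sketch only (it assumes the third-party `tomesd` package, which is what that document covers; model ID, ratio, and prompt are illustrative, not taken from this PR), token merging is applied by patching a pipeline before inference:

```python
# Hedged sketch of Token Merging (ToMe) with a diffusers pipeline; assumes the
# third-party `tomesd` package is installed (pip install tomesd).
import torch
import tomesd
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Merge redundant tokens inside the attention blocks; `ratio` sets how aggressively.
tomesd.apply_patch(pipe, ratio=0.5)

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```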
4 changes: 4 additions & 0 deletions docs/source/en/api/loaders.mdx
@@ -36,3 +36,7 @@ API to load such adapter neural networks via the [`loaders.py` module](https://g
 ### LoraLoaderMixin
 
 [[autodoc]] loaders.LoraLoaderMixin
+
+### FromCkptMixin
+
+[[autodoc]] loaders.FromCkptMixin
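Since the diff above only registers the autodoc entry, here is a minimal, hedged sketch of how the newly documented `FromCkptMixin.from_ckpt` loader is intended to be used (the checkpoint URL and prompt are placeholders, not part of this PR):

```python
# Hedged sketch of the from_ckpt loader (FromCkptMixin); the checkpoint URL and
# prompt below are placeholders, not taken from the diff above.
import torch
from diffusers import StableDiffusionPipeline

# from_ckpt builds a pipeline directly from a single original-format
# .ckpt/.safetensors checkpoint instead of a diffusers-format model repo.
pipe = StableDiffusionPipeline.from_ckpt(
    "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```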
14 changes: 8 additions & 6 deletions docs/source/en/api/pipelines/audioldm.mdx
@@ -25,14 +25,14 @@ This pipeline was contributed by [sanchit-gandhi](https://huggingface.co/sanchit
 
 ## Text-to-Audio
 
-The [`AudioLDMPipeline`] can be used to load pre-trained weights from [cvssp/audioldm](https://huggingface.co/cvssp/audioldm) and generate text-conditional audio outputs:
+The [`AudioLDMPipeline`] can be used to load pre-trained weights from [cvssp/audioldm-s-full-v2](https://huggingface.co/cvssp/audioldm-s-full-v2) and generate text-conditional audio outputs:
 
 ```python
 from diffusers import AudioLDMPipeline
 import torch
 import scipy
 
-repo_id = "cvssp/audioldm"
+repo_id = "cvssp/audioldm-s-full-v2"
 pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
 pipe = pipe.to("cuda")
 
@@ -56,7 +56,7 @@ Inference:
 
 ### How to load and use different schedulers
 
 The AudioLDM pipeline uses [`DDIMScheduler`] scheduler by default. But `diffusers` provides many other schedulers
-that can be used with the AudioLDM pipeline such as [`PNDMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], 
+that can be used with the AudioLDM pipeline such as [`PNDMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`],
 [`EulerAncestralDiscreteScheduler`] etc. We recommend using the [`DPMSolverMultistepScheduler`] as it's currently the fastest
 scheduler there is.
 
@@ -68,12 +68,14 @@ method, or pass the `scheduler` argument to the `from_pretrained` method of the
 >>> from diffusers import AudioLDMPipeline, DPMSolverMultistepScheduler
 >>> import torch
 
->>> pipeline = AudioLDMPipeline.from_pretrained("cvssp/audioldm", torch_dtype=torch.float16)
+>>> pipeline = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16)
 >>> pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
 
 >>> # or
->>> dpm_scheduler = DPMSolverMultistepScheduler.from_pretrained("cvssp/audioldm", subfolder="scheduler")
->>> pipeline = AudioLDMPipeline.from_pretrained("cvssp/audioldm", scheduler=dpm_scheduler, torch_dtype=torch.float16)
+>>> dpm_scheduler = DPMSolverMultistepScheduler.from_pretrained("cvssp/audioldm-s-full-v2", subfolder="scheduler")
+>>> pipeline = AudioLDMPipeline.from_pretrained(
+...     "cvssp/audioldm-s-full-v2", scheduler=dpm_scheduler, torch_dtype=torch.float16
+... )
 
 ```
 
 ## AudioLDMPipeline
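The first `AudioLDMPipeline` snippet above is cut off where the diff is folded. As a rough sketch of how that example typically continues (the prompt, step count, and output file name are illustrative assumptions, not content of this PR), the `scipy` import is used to write the generated 16 kHz waveform to disk:

```python
# Hypothetical continuation of the folded snippet above; prompt, steps, and file
# name are illustrative. Assumes `pipe` was built as in that snippet.
prompt = "Techno music with a strong, upbeat tempo and high melodic riffs"
audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0]

# AudioLDM produces 16 kHz waveforms; recent SciPy versions expose
# scipy.io.wavfile after a plain `import scipy` (otherwise import it explicitly).
scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)
```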