Commit 274c33e: Merge branch 'main' into ip-adapter-test-mixin

2 parents: 4e9d60a + 8f2c7b4

34 files changed: +965 −601 lines

docker/diffusers-pytorch-compile-cuda/Dockerfile

Lines changed: 3 additions & 3 deletions

```diff
@@ -26,9 +26,9 @@ ENV PATH="/opt/venv/bin:$PATH"
 # pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
 RUN python3.9 -m pip install --no-cache-dir --upgrade pip && \
     python3.9 -m pip install --no-cache-dir \
-        torch==2.1.2 \
-        torchvision==0.16.2 \
-        torchaudio==2.1.2 \
+        torch \
+        torchvision \
+        torchaudio \
         invisible_watermark && \
     python3.9 -m pip install --no-cache-dir \
         accelerate \
```

docker/diffusers-pytorch-cpu/Dockerfile

Lines changed: 3 additions & 3 deletions

```diff
@@ -25,9 +25,9 @@ ENV PATH="/opt/venv/bin:$PATH"
 # pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
 RUN python3 -m pip install --no-cache-dir --upgrade pip && \
     python3 -m pip install --no-cache-dir \
-        torch==2.1.2 \
-        torchvision==0.16.2 \
-        torchaudio==2.1.2 \
+        torch \
+        torchvision \
+        torchaudio \
         invisible_watermark \
         --extra-index-url https://download.pytorch.org/whl/cpu && \
     python3 -m pip install --no-cache-dir \
```

docker/diffusers-pytorch-cuda/Dockerfile

Lines changed: 3 additions & 3 deletions

```diff
@@ -25,9 +25,9 @@ ENV PATH="/opt/venv/bin:$PATH"
 # pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
 RUN python3 -m pip install --no-cache-dir --upgrade pip && \
     python3 -m pip install --no-cache-dir \
-        torch==2.1.2 \
-        torchvision==0.16.2 \
-        torchaudio==2.1.2 \
+        torch \
+        torchvision \
+        torchaudio \
         invisible_watermark && \
     python3 -m pip install --no-cache-dir \
         accelerate \
```

docker/diffusers-pytorch-xformers-cuda/Dockerfile

Lines changed: 3 additions & 3 deletions

```diff
@@ -25,9 +25,9 @@ ENV PATH="/opt/venv/bin:$PATH"
 # pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
 RUN python3 -m pip install --no-cache-dir --upgrade pip && \
     python3 -m pip install --no-cache-dir \
-        torch==2.1.2 \
-        torchvision==0.16.2 \
-        torchaudio==2.1.2 \
+        torch \
+        torchvision \
+        torchaudio \
         invisible_watermark && \
     python3 -m pip install --no-cache-dir \
         accelerate \
```

docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions

```diff
@@ -58,6 +58,8 @@
 - sections:
   - local: using-diffusers/textual_inversion_inference
     title: Textual inversion
+  - local: using-diffusers/ip_adapter
+    title: IP-Adapter
   - local: training/distributed_inference
     title: Distributed inference with multiple GPUs
   - local: using-diffusers/reusing_seeds
```

docs/source/en/api/loaders/ip_adapter.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -12,11 +12,11 @@ specific language governing permissions and limitations under the License.
 
 # IP-Adapter
 
-[IP-Adapter](https://hf.co/papers/2308.06721) is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Files generated from IP-Adapter are only ~100MBs.
+[IP-Adapter](https://hf.co/papers/2308.06721) is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder.
 
 <Tip>
 
-Learn how to load an IP-Adapter checkpoint and image in the [IP-Adapter](../../using-diffusers/loading_adapters#ip-adapter) loading guide.
+Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter [loading](../../using-diffusers/loading_adapters#ip-adapter) guide, and you can see how to use it in the [usage](../../using-diffusers/ip_adapter) guide.
 
 </Tip>
 
```
docs/source/en/tutorials/using_peft_for_inference.md

Lines changed: 19 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -165,6 +165,25 @@ list_adapters_component_wise
165165
{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]}
166166
```
167167

168+
## Compatibility with `torch.compile`
169+
170+
If you want to compile your model with `torch.compile` make sure to first fuse the LoRA weights into the base model and unload them.
171+
172+
```py
173+
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
174+
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
175+
176+
pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0])
177+
# Fuses the LoRAs into the Unet
178+
pipe.fuse_lora()
179+
pipe.unload_lora_weights()
180+
181+
pipe = torch.compile(pipe)
182+
183+
prompt = "toy_face of a hacker with a hoodie, pixel art"
184+
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
185+
```
186+
168187
## Fusing adapters into the model
169188

170189
You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the [`~diffusers.loaders.LoraLoaderMixin.fuse_lora`] method, which can lead to a speed-up in inference and lower VRAM usage.
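The fuse-then-compile workflow documented in the hunk above works because a LoRA update is just a low-rank matrix added to the base weight: once merged, the adapter branch disappears and the compiler only sees plain layers. A minimal NumPy sketch of that merge (illustrative shapes and names only, not the diffusers API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Base layer weight and a rank-4 LoRA update (hypothetical sizes).
d_out, d_in, rank = 8, 16, 4
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((rank, d_in))   # LoRA "down" projection
B = rng.standard_normal((d_out, rank))  # LoRA "up" projection
scale = 0.5                             # adapter weight

x = rng.standard_normal(d_in)

# Unfused forward pass: base path plus the adapter branch.
y_unfused = W @ x + scale * (B @ (A @ x))

# Fused forward pass: fold the update into the weight once, drop the branch.
W_fused = W + scale * (B @ A)
y_fused = W_fused @ x

assert np.allclose(y_unfused, y_fused)
```

After the merge, the separate adapter tensors can be discarded (the `unload_lora_weights` step in the diff), which is what leaves a compile-friendly graph.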

docs/source/en/using-diffusers/custom_pipeline_overview.md

Lines changed: 54 additions & 0 deletions

````diff
@@ -56,6 +56,60 @@ pipeline = DiffusionPipeline.from_pretrained(
 )
 ```
 
+### Load from a local file
+
+Community pipelines can also be loaded from a local file if you pass a file path instead. The path to the passed directory must contain a `pipeline.py` file that contains the pipeline class in order to successfully load it.
+
+```py
+pipeline = DiffusionPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5",
+    custom_pipeline="./path/to/pipeline_directory/",
+    clip_model=clip_model,
+    feature_extractor=feature_extractor,
+    use_safetensors=True,
+)
+```
+
+### Load from a specific version
+
+By default, community pipelines are loaded from the latest stable version of Diffusers. To load a community pipeline from another version, use the `custom_revision` parameter.
+
+<hfoptions id="version">
+<hfoption id="main">
+
+For example, to load from the `main` branch:
+
+```py
+pipeline = DiffusionPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5",
+    custom_pipeline="clip_guided_stable_diffusion",
+    custom_revision="main",
+    clip_model=clip_model,
+    feature_extractor=feature_extractor,
+    use_safetensors=True,
+)
+```
+
+</hfoption>
+<hfoption id="older version">
+
+For example, to load from a previous version of Diffusers like `v0.25.0`:
+
+```py
+pipeline = DiffusionPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5",
+    custom_pipeline="clip_guided_stable_diffusion",
+    custom_revision="v0.25.0",
+    clip_model=clip_model,
+    feature_extractor=feature_extractor,
+    use_safetensors=True,
+)
+```
+
+</hfoption>
+</hfoptions>
+
+
 For more information about community pipelines, take a look at the [Community pipelines](custom_pipeline_examples) guide for how to use them and if you're interested in adding a community pipeline check out the [How to contribute a community pipeline](contribute_pipeline) guide!
 
 ## Community components
````
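The local-file section added above requires a `pipeline.py` inside the directory passed to `custom_pipeline=`. A quick pre-flight check in plain Python (this helper is hypothetical, not part of diffusers):

```python
from pathlib import Path

def has_custom_pipeline(directory: str) -> bool:
    """Return True if `directory` contains the pipeline.py file that
    local custom-pipeline loading expects."""
    return (Path(directory) / "pipeline.py").is_file()
```

Running a check like this before calling `DiffusionPipeline.from_pretrained` gives a clearer error than a failed load deep inside the library.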

0 commit comments