
Commit e7696e2

AlexUmnov and sayakpaul authored
Updated lora inference instructions (#6913)
* Updated lora inference instructions

* Update examples/dreambooth/README.md

Co-authored-by: Sayak Paul <[email protected]>

* Update README.md

* Update README.md

---------

Co-authored-by: Sayak Paul <[email protected]>
1 parent 4b89aef commit e7696e2

File tree: 1 file changed (+4, −8 lines)

examples/dreambooth/README.md

Lines changed: 4 additions & 8 deletions
@@ -376,18 +376,14 @@ After training, LoRA weights can be loaded very easily into the original pipelin
 load the original pipeline:
 
 ```python
-from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
-import torch
-
-pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
-pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
-pipe.to("cuda")
+from diffusers import DiffusionPipeline
+pipe = DiffusionPipeline.from_pretrained("base-model-name").to("cuda")
 ```
 
-Next, we can load the adapter layers into the UNet with the [`load_attn_procs` function](https://huggingface.co/docs/diffusers/api/loaders#diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs).
+Next, we can load the adapter layers into the pipeline with the [`load_lora_weights` function](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters#lora).
 
 ```python
-pipe.unet.load_attn_procs("patrickvonplaten/lora_dreambooth_dog_example")
+pipe.load_lora_weights("path-to-the-lora-checkpoint")
 ```
 
 Finally, we can run the model in inference.
