Commit f645b87

add: note about the new script in readme_sdxl.
1 parent 11659a6 commit f645b87


examples/consistency_distillation/README_sdxl.md

Lines changed: 35 additions & 1 deletion
@@ -111,4 +111,38 @@ accelerate launch train_lcm_distill_lora_sdxl_wds.py \
   --report_to=wandb \
   --seed=453645634 \
   --push_to_hub \
-```
+```
+
+We provide another version for LCM LoRA SDXL that follows the best practices of `peft` and leverages the `datasets` library for quick experimentation. Unlike `train_lcm_distill_lora_sdxl_wds.py`, this script doesn't load two UNets, which reduces the memory requirements quite a bit.
+
+Below is an example training command that trains an LCM LoRA on the [Pokemons dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions):
+
+```bash
+export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
+export DATASET_NAME="lambdalabs/pokemon-blip-captions"
+export VAE_PATH="madebyollin/sdxl-vae-fp16-fix"
+
+accelerate launch train_lcm_distill_lora_sdxl.py \
+  --pretrained_teacher_model=${MODEL_NAME} \
+  --pretrained_vae_model_name_or_path=${VAE_PATH} \
+  --output_dir="pokemons-lora-lcm-sdxl" \
+  --mixed_precision="fp16" \
+  --dataset_name=$DATASET_NAME \
+  --resolution=1024 \
+  --train_batch_size=24 \
+  --gradient_accumulation_steps=1 \
+  --gradient_checkpointing \
+  --use_8bit_adam \
+  --lora_rank=64 \
+  --learning_rate=1e-4 \
+  --lr_scheduler="constant" \
+  --lr_warmup_steps=0 \
+  --max_train_steps=3000 \
+  --checkpointing_steps=500 \
+  --validation_steps=50 \
+  --seed="0" \
+  --report_to="wandb" \
+  --push_to_hub
+```
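
For reference, here is a minimal sketch of how a LoRA produced by this command could be used for few-step inference with `diffusers`. The checkpoint path (`pokemons-lora-lcm-sdxl`), the prompt, and the 4-step / `guidance_scale=1.0` sampling settings are illustrative assumptions, not values taken from this commit:

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Load the SDXL base model the LoRA was distilled from.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    variant="fp16",
    torch_dtype=torch.float16,
).to("cuda")

# Swap in the LCM scheduler so few-step sampling works as intended.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Local path or Hub repo id; assumed from --output_dir / --push_to_hub above.
pipe.load_lora_weights("pokemons-lora-lcm-sdxl")

# LCM LoRAs are typically sampled with very few steps and little or no CFG.
image = pipe(
    "a cute green pokemon with big eyes",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("pokemon.png")
```

The low `guidance_scale` reflects the usual LCM-LoRA practice of sampling with 2-8 steps and weak or disabled classifier-free guidance.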

0 commit comments
