Clean Up Comments in LCM(-LoRA) Distillation Scripts. #6145
Conversation
```diff
  # Note that this uses the LCM paper's CFG formulation rather than the Imagen CFG formulation
  pred_x0 = cond_pred_x0 + w * (cond_pred_x0 - uncond_pred_x0)
- pred_noise = cond_teacher_output + w * (cond_teacher_output - uncond_teacher_output)
+ pred_noise = cond_pred_noise + w * (cond_pred_noise - uncond_pred_noise)
```
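For context on that comment, the two guidance formulations differ only in which branch the scaled (cond − uncond) gap is added to. A minimal sketch of the contrast (the helper names here are illustrative, not from the scripts; only the LCM form appears in the code):

```python
import torch

def lcm_cfg(cond, uncond, w):
    # LCM-paper CFG: the scaled gap sits on top of the *conditional*
    # prediction, i.e. (1 + w) * cond - w * uncond.
    return cond + w * (cond - uncond)

def imagen_cfg(cond, uncond, w):
    # Imagen-style CFG: the scaled gap sits on top of the *unconditional*
    # prediction, so w = 1 recovers the purely conditional prediction.
    return uncond + w * (cond - uncond)

# The two coincide when LCM's w equals the Imagen-style w minus 1.
cond, uncond = torch.randn(2, 4), torch.randn(2, 4)
assert torch.allclose(lcm_cfg(cond, uncond, 6.5), imagen_cfg(cond, uncond, 7.5))
```

The combined `pred_x0` and `pred_noise` are then consumed by `DDIMSolver.ddim_step` in the distillation scripts.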
sayakpaul left a comment:
Hmm this seems like a big change to me. I wonder how much of an effect it has on the results.
dg845 replied:
Hi @sayakpaul, this change should be a no-op when the teacher LDM checkpoint has `self.scheduler.config.prediction_type == "epsilon"` (as is the case for the `runwayml/stable-diffusion-v1-5` and `stabilityai/stable-diffusion-xl-base-1.0` checkpoints suggested in the README examples). The current logic of directly using the residual output of the teacher UNet is correct when the `prediction_type` is `"epsilon"`:
```python
pred_noise = cond_teacher_output + w * (cond_teacher_output - uncond_teacher_output)
```
but in general we need to convert the residual UNet output to a predicted epsilon for use in `DDIMSolver.ddim_step`. This conversion is done by the new `predicted_source_noise` function, which is based on `DDIMScheduler.step`:
From diffusers/src/diffusers/schedulers/scheduling_ddim.py, lines 412 to 422 at commit 4836cfa:
```python
# 3. compute predicted original sample from predicted noise also called
# "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
if self.config.prediction_type == "epsilon":
    pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
    pred_epsilon = model_output
elif self.config.prediction_type == "sample":
    pred_original_sample = model_output
    pred_epsilon = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5)
elif self.config.prediction_type == "v_prediction":
    pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output
    pred_epsilon = (alpha_prod_t**0.5) * model_output + (beta_prod_t**0.5) * sample
```
Note that `pred_epsilon` is exactly equal to `model_output` when `prediction_type == "epsilon"`.
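To make the conversion concrete, here is a minimal sketch of such a helper, mirroring the `DDIMScheduler.step` branches quoted above (the name and signature are illustrative; the actual `predicted_source_noise` function in the PR may differ):

```python
def predicted_epsilon(model_output, sample, alpha_prod_t, prediction_type):
    # Convert a raw UNet residual into a predicted epsilon, following the
    # DDIMScheduler.step conversion quoted above. Illustrative sketch only.
    beta_prod_t = 1 - alpha_prod_t
    if prediction_type == "epsilon":
        # The residual already is the predicted noise, so this branch is a no-op.
        pred_epsilon = model_output
    elif prediction_type == "sample":
        pred_epsilon = (sample - alpha_prod_t ** 0.5 * model_output) / beta_prod_t ** 0.5
    elif prediction_type == "v_prediction":
        pred_epsilon = alpha_prod_t ** 0.5 * model_output + beta_prod_t ** 0.5 * sample
    else:
        raise ValueError(f"Unsupported prediction_type: {prediction_type}")
    return pred_epsilon
```

Applied to the teacher outputs, this yields `cond_pred_noise` and `uncond_pred_noise` that are valid epsilons for `DDIMSolver.ddim_step` regardless of the teacher's `prediction_type`.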
sayakpaul replied:
Thanks for the solid explanation!
patil-suraj left a comment:
Really nice and explanatory comments, great job!
Looks good to me, just left some nits.
Resolved review threads (all outdated after updates):
- examples/consistency_distillation/train_lcm_distill_lora_sd_wds.py (5 threads)
- examples/consistency_distillation/train_lcm_distill_lora_sdxl_wds.py (1 thread)
* Clean up comments in LCM(-LoRA) distillation scripts.
* Calculate predicted source noise noise_pred correctly for all prediction_types.
* make style
* apply suggestions from review

---------

Co-authored-by: Sayak Paul <[email protected]>
What does this PR do?
This PR cleans up and improves the comments in the LCM and LCM-LoRA distillation scripts. It also includes a few small bug fixes.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
@patil-suraj
@sayakpaul