[LoRA] Freezing the model weights #2245


Merged: 7 commits into huggingface:main on Feb 9, 2023
Conversation

@erkams (Contributor) commented Feb 5, 2023

As in the original LoRA repository, freezing the model weights helps us use less memory.

Freeze the model weights since we don't need to calculate gradients for them.
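The change can be sketched as follows. This is a minimal, self-contained sketch, not the real training script: `nn.Linear` modules stand in for the actual UNet, VAE, and text encoder, and `lora_layers` is a hypothetical stand-in for the trainable LoRA parameters.

```python
import torch
import torch.nn as nn

unet = nn.Linear(8, 8)          # stand-in for the UNet
vae = nn.Linear(8, 8)           # stand-in for the VAE
text_encoder = nn.Linear(8, 8)  # stand-in for the text encoder
lora_layers = nn.Linear(8, 8)   # stand-in for the trainable LoRA weights

# Freeze the base models: autograd then skips gradient computation for
# their parameters and allocates no .grad buffers, which is where the
# memory saving comes from.
unet.requires_grad_(False)
vae.requires_grad_(False)
text_encoder.requires_grad_(False)

# Only the LoRA parameters go to the optimizer.
optimizer = torch.optim.AdamW(lora_layers.parameters(), lr=1e-4)
```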
@HuggingFaceDocBuilderDev commented Feb 5, 2023

The documentation is not available anymore as the PR was closed or merged.

@patrickvonplaten (Contributor) left a comment

This makes sense to me! @patil-suraj @williamberman could you also take a quick look?

unet.requires_grad_(False)
vae.requires_grad_(False)

params = itertools.chain(
A Contributor commented:

Why not just text_encoder.requires_grad_(False)?

A Contributor replied:

+1, this way of freezing is only needed for textual inversion;
here we can use text_encoder.requires_grad_(False)
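The distinction the reviewers draw can be illustrated on a toy module (a sketch only; an `nn.Sequential` stands in for the real CLIP text encoder):

```python
import torch.nn as nn

text_encoder = nn.Sequential(nn.Embedding(10, 4), nn.Linear(4, 4))

# Style suggested in this thread: freeze the whole model in one call.
text_encoder.requires_grad_(False)

# Textual-inversion style: a per-parameter loop, needed only when part of
# the model must stay trainable (here the embedding layer at index "0",
# analogous to keeping token embeddings trainable in textual inversion).
text_encoder.requires_grad_(True)
for name, param in text_encoder.named_parameters():
    if not name.startswith("0."):
        param.requires_grad_(False)
```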

@patrickvonplaten (Contributor) commented:

Also cc @sayakpaul - I think we can freeze everything here just like we do for dreambooth_lora

@sayakpaul (Member) commented:

> Also cc @sayakpaul - I think we can freeze everything here just like we do for dreambooth_lora

+1

@patil-suraj (Contributor) left a comment

Thanks a lot for the PR, looks good. Just left one comment about freezing the text encoder.


@pcuenca (Member) left a comment

Makes complete sense to me!

@patrickvonplaten patrickvonplaten merged commit 1be7df0 into huggingface:main Feb 9, 2023
@patrickvonplaten (Contributor) commented:

Thanks a lot @erkams !

AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request Apr 26, 2024
* [LoRA] Freezing the model weights

Freeze the model weights since we don't need to calculate grads for them.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <[email protected]>

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <[email protected]>
Co-authored-by: Suraj Patil <[email protected]>
6 participants