[LoRA] add: test to check if peft loras are loadable in non-peft envs. #6400
Conversation
younesbelkada left a comment:
Thanks! Would it make sense to test logits as well? That way we can make sure that no future PR breaks inference correctness. If I got it right, here we only test weight loading plus a dummy inference pass, but not inference correctness. Still, this is already great, so feel free to merge it, I would say.
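(For concreteness, a minimal sketch of what such a logits check could look like; `pipe`, the slice indices, the tolerance, and the reference values are hypothetical placeholders, not this PR's actual numbers:)

```python
import numpy as np
import torch

# Sketch: run a tiny deterministic inference pass and pin a slice of the
# resulting image array against reference values recorded once beforehand.
# `pipe` is a hypothetical diffusers pipeline with the LoRA already loaded.
generator = torch.manual_seed(0)
images = pipe(
    "a dummy prompt", num_inference_steps=2, generator=generator, output_type="np"
).images
output_slice = images[0, -3:, -3:, -1].flatten()

# In a real test these are hard-coded values captured from a known-good run.
expected_slice = np.zeros_like(output_slice)  # placeholder reference values
assert np.allclose(output_slice, expected_slice, atol=1e-4)
```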
@younesbelkada just did. PTAL.
younesbelkada left a comment:
Awesome work, thanks a lot @sayakpaul ! 🚀
@younesbelkada feel free to look at the script with which I generated the LoRA files: https://huggingface.co/datasets/diffusers/notebooks/blob/main/check_logits_with_serialization_peft_lora.py. I think it emulates how it's done in the training scripts, but I'd appreciate another set of 👁️s.
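(Roughly, that script follows the training-script recipe: attach a `peft` LoRA adapter to the UNet, then serialize it in the diffusers format. A minimal sketch, assuming a tiny test checkpoint; the rank, target modules, and save path are illustrative:)

```python
import torch
from peft import LoraConfig
from peft.utils import get_peft_model_state_dict
from diffusers import StableDiffusionPipeline
from diffusers.utils import convert_state_dict_to_diffusers

# Attach a LoRA adapter to the UNet the same way the training scripts do.
pipe = StableDiffusionPipeline.from_pretrained(
    "hf-internal-testing/tiny-stable-diffusion-pipe"  # tiny checkpoint for testing
)
torch.manual_seed(0)  # fixed seed so the randomly initialized LoRA is reproducible
unet_lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    init_lora_weights=False,  # random init, so the LoRA actually changes outputs
)
pipe.unet.add_adapter(unet_lora_config)

# Serialize in the diffusers format so it can be reloaded with or without PEFT.
unet_lora_state_dict = convert_state_dict_to_diffusers(
    get_peft_model_state_dict(pipe.unet)
)
StableDiffusionPipeline.save_lora_weights(
    save_directory="dummy-lora",  # placeholder output path
    unet_lora_layers=unet_lora_state_dict,
    safe_serialization=True,
)
```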
I think we might need a check to ensure this doesn't run in the PEFT environment as well, right?
Why can't it run on …
Why run the test in both the PEFT and non-PEFT environments/runners if it is only meant to check that we can load PEFT LoRAs in a non-PEFT environment?
Oh okay. I think we're covered on that front: see diffusers/.github/workflows/pr_tests.yml, line 100, at commit 1754602.

All tests in https://github.com/huggingface/diffusers/blob/main/tests/lora/test_lora_layers_peft.py (which is what that workflow runs) always require the PEFT backend.

Does this make sense?
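(Concretely, the gating looks roughly like this; a minimal sketch assuming diffusers' `require_peft_backend` test decorator, with an illustrative class name and test body:)

```python
import unittest

from diffusers.utils.testing_utils import require_peft_backend


# Tests decorated this way are skipped entirely when PEFT is not available,
# so they never execute on the non-PEFT runner.
@require_peft_backend
class LoraPeftSmokeTests(unittest.TestCase):
    def test_lora_loads(self):
        # ... actual LoRA loading/inference assertions go here ...
        pass
```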
patrickvonplaten left a comment:
Good idea!
Commits (huggingface#6400):
- add: test to check if peft loras are loadable in non-peft envs.
- add torch_device appropriately.
- fix: get_dummy_inputs().
- test logits.
- rename
- debug
- debug
- fix: generator
- new assertion values after fixing the seed.
- shape
- remove print statements and settle this.
- to update values.
- change values when lora config is initialized under a fixed seed.
- update colab link
- update notebook link
- sanity restored by getting the exact same values without peft.
What does this PR do?
Adds a test to ensure a LoRA trained using `peft` (emulating how it's done in our trainers) can be fully loaded without `peft`. Helps to keep our sanity in check :D
The dummy components were obtained using this Colab: https://huggingface.co/datasets/diffusers/notebooks/blob/main/check_logits_with_serialization_peft_lora.py. This script also spits out the logits that are tested in this PR.
The basic idea is the following. If we do something like this (with `peft` installed):
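(A minimal sketch of that first step, assuming the LoRA files saved by the script above; the checkpoint, LoRA path, and prompt are placeholders:)

```python
import torch
from diffusers import StableDiffusionPipeline

# With `peft` installed: load the pipeline, attach the serialized PEFT LoRA,
# and record the outputs of a short deterministic run.
pipe = StableDiffusionPipeline.from_pretrained(
    "hf-internal-testing/tiny-stable-diffusion-pipe"  # placeholder checkpoint
)
pipe.load_lora_weights("dummy-lora")  # hypothetical path to the saved LoRA

generator = torch.manual_seed(0)
outputs = pipe(
    "a dummy prompt", num_inference_steps=2, generator=generator, output_type="np"
).images
```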
And then if you do the same (with `peft` uninstalled):
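(Again a sketch, with the same placeholders: the same run in an environment without `peft`, where diffusers falls back to its non-PEFT LoRA loading path:)

```python
import numpy as np
import torch
from diffusers import StableDiffusionPipeline

# With `peft` NOT installed: load the very same LoRA files and repeat the run.
pipe = StableDiffusionPipeline.from_pretrained(
    "hf-internal-testing/tiny-stable-diffusion-pipe"  # placeholder checkpoint
)
pipe.load_lora_weights("dummy-lora")  # same hypothetical LoRA files as above

generator = torch.manual_seed(0)
outputs_no_peft = pipe(
    "a dummy prompt", num_inference_steps=2, generator=generator, output_type="np"
).images

# In the actual test, `outputs` is a hard-coded reference recorded with PEFT.
assert np.allclose(outputs, outputs_no_peft, atol=1e-4)
```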
`outputs` and `outputs_no_peft` should match. This is exactly what we test here.

Cc: @apolinario