
Conversation

sayakpaul (Member) commented Dec 30, 2023

What does this PR do?

Adds a test to ensure that a LoRA trained with peft (emulating how it's done in our trainers) can be fully loaded without peft installed.

Helps to keep our sanity in check :D

The dummy components were obtained using this script:
https://huggingface.co/datasets/diffusers/notebooks/blob/main/check_logits_with_serialization_peft_lora.py. The script also spits out the logits that are tested in this PR.

The basic idea is the following.

If we do:

import torch
from diffusers import DiffusionPipeline
from peft import LoraConfig

sd_pipeline = DiffusionPipeline.from_pretrained(...)

# Fixed seeds keep the LoRA initialization reproducible.
torch.manual_seed(0)
unet_lora_config = LoraConfig(...)

torch.manual_seed(0)
text_encoder_lora_config = LoraConfig(...)

sd_pipeline.unet.add_adapter(unet_lora_config)
sd_pipeline.text_encoder.add_adapter(text_encoder_lora_config)

outputs = sd_pipeline(**pipeline_inputs, generator=torch.manual_seed(0), output_type="np").images

And then if you do (with peft uninstalled):

import torch
from diffusers import DiffusionPipeline

sd_pipeline = DiffusionPipeline.from_pretrained(...)

# The LoRA was obtained using the script linked above.
sd_pipeline.load_lora_weights(...)
outputs_no_peft = sd_pipeline(**pipeline_inputs, generator=torch.manual_seed(0), output_type="np").images

outputs and outputs_no_peft should match. This is exactly what we test here.
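For concreteness, a minimal sketch of the final comparison, using the names from the two snippets above (the tolerance value is an assumption for illustration, not the exact one used in the test):

import numpy as np

# Both runs use the same inputs and seed, so the shapes must agree.
assert outputs.shape == outputs_no_peft.shape

# The images should match up to floating-point noise. The tolerance below is
# illustrative only.
max_diff = np.abs(outputs - outputs_no_peft).max()
assert max_diff < 1e-4, f"peft vs. non-peft outputs diverged by {max_diff}"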

Cc: @apolinario

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

younesbelkada (Contributor) left a comment


Thanks! Would it make sense to test the logits as well? That way we can make sure no future PR breaks inference correctness. If I understood correctly, we only test weight loading + dummy inference here, not inference correctness. But this is already great, so I'd say feel free to merge.

sayakpaul (Member, Author) commented

@younesbelkada just did. PTAL.
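For context, a logits check of this kind typically compares a small, deterministic slice of the output against hardcoded values recorded from a known-good run. A minimal sketch, reusing sd_pipeline and pipeline_inputs from the snippets above (the slice indices and expected values are placeholders, not the numbers committed in this PR):

import numpy as np
import torch

images = sd_pipeline(**pipeline_inputs, generator=torch.manual_seed(0), output_type="np").images

# Flatten a small corner of the first image and compare it against values
# recorded once from a known-good run. These numbers are placeholders.
predicted_slice = images[0, -3:, -3:, -1].flatten()
expected_slice = np.array([0.5396, 0.5707, 0.4770, 0.4932, 0.5583, 0.4585, 0.5337, 0.5397, 0.5093])

assert np.allclose(predicted_slice, expected_slice, atol=1e-4)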

younesbelkada (Contributor) left a comment


Awesome work, thanks a lot @sayakpaul ! 🚀

sayakpaul (Member, Author) commented Dec 30, 2023

@younesbelkada feel free to look at the script with which I generated the LoRA files: https://huggingface.co/datasets/diffusers/notebooks/blob/main/check_logits_with_serialization_peft_lora.py.

I think it emulates how it's done in the training scripts. But I'd appreciate another set of 👁️s.

DN6 (Collaborator) commented Jan 2, 2024

I think we might need a check to ensure this doesn't also run in the PEFT environment, right?

sayakpaul (Member, Author) commented

Why can't it run in the peft environment? Could you elaborate?

DN6 (Collaborator) commented Jan 2, 2024

Why run the test in both the PEFT and non-PEFT environments/runners if it is only meant to check that we can load PEFT LoRAs in a non-PEFT environment?

sayakpaul (Member, Author) commented Jan 2, 2024

Oh okay. I think we have that covered:

This workflow command

python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \

doesn't run the PEFT LoRA tests.

All tests in https://github.com/huggingface/diffusers/blob/main/tests/lora/test_lora_layers_peft.py (which is exercised in that workflow) always require the PEFT backend:

@require_peft_backend

Does this make sense?
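For reference, a gate like @require_peft_backend is typically a thin unittest skip wrapper. A minimal sketch under that assumption (a hypothetical implementation, not the exact diffusers source):

import unittest

def require_peft_backend(test_case):
    """Skip the decorated test unless peft is importable."""
    try:
        import peft  # noqa: F401
        peft_available = True
    except ImportError:
        peft_available = False
    return unittest.skipUnless(peft_available, "test requires the PEFT backend")(test_case)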

patrickvonplaten (Contributor) left a comment


Good idea!

@sayakpaul sayakpaul merged commit 2e4dc3e into main Jan 3, 2024
@sayakpaul sayakpaul deleted the add-tests-peft-lora-loadable-no-peft branch January 3, 2024 04:27
AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request on Apr 26, 2024 (huggingface#6400). Commit messages:

* add: test to check if peft loras are loadable in non-peft envs.

* add torch_device appropriately.

* fix: get_dummy_inputs().

* test logits.

* rename

* debug

* debug

* fix: generator

* new assertion values after fixing the seed.

* shape

* remove print statements and settle this.

* to update values.

* change values when lora config is initialized under a fixed seed.

* update colab link

* update notebook link

* sanity restored by getting the exact same values without peft.
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
