@Gothos Gothos commented Sep 2, 2024

What does this PR do?

Fixes XLabs LoRA loading for flux:

From XLabs' `DoubleStreamBlockLoraProcessor`:

```python
img_qkv = attn.img_attn.qkv(img_modulated) + self.qkv_lora1(img_modulated) * self.lora_weight
txt_qkv = attn.txt_attn.qkv(txt_modulated) + self.qkv_lora2(txt_modulated) * self.lora_weight
```

This seems to imply that `qkv_lora1` applies to the image latents,
while in diffusers' `_convert_xlabs_flux_lora_to_diffusers`:

```python
elif "processor.qkv_lora1" in old_key and "up" not in old_key:
    handle_qkv(
        old_state_dict,
        new_state_dict,
        old_key,
        [
            f"transformer.transformer_blocks.{block_num}.attn.add_q_proj",
            f"transformer.transformer_blocks.{block_num}.attn.add_k_proj",
            f"transformer.transformer_blocks.{block_num}.attn.add_v_proj",
        ],
    )
```

and looking at `FluxAttnProcessor2_0`:

```python
encoder_hidden_states_query_proj = attn.add_q_proj(encoder_hidden_states)
encoder_hidden_states_key_proj = attn.add_k_proj(encoder_hidden_states)
encoder_hidden_states_value_proj = attn.add_v_proj(encoder_hidden_states)
```

which implies the `qkv_lora1` weights are currently being applied to the text-embedding activations. This PR applies them to the correct (image) stream.
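The corrected routing can be sketched as follows. This is a minimal illustration, not the actual diffusers conversion code: `route_xlabs_lora_key` is a hypothetical helper name, and it assumes the diffusers image-stream projections are `attn.to_q`/`to_k`/`to_v` (the counterparts of the `add_*_proj` text-stream projections in `FluxAttnProcessor2_0`):

```python
# Sketch of the corrected XLabs -> diffusers LoRA key routing.
# qkv_lora1 acts on the image stream and qkv_lora2 on the text stream,
# so their diffusers targets are swapped relative to the old conversion.
def route_xlabs_lora_key(old_key: str, block_num: int) -> list[str]:
    prefix = f"transformer.transformer_blocks.{block_num}.attn"
    if "processor.qkv_lora1" in old_key:
        # image latents -> plain q/k/v projections
        return [f"{prefix}.to_q", f"{prefix}.to_k", f"{prefix}.to_v"]
    if "processor.qkv_lora2" in old_key:
        # text embeddings -> add_{q,k,v}_proj projections
        return [f"{prefix}.add_q_proj", f"{prefix}.add_k_proj", f"{prefix}.add_v_proj"]
    raise KeyError(f"not a fused qkv LoRA key: {old_key}")
```

For example, `route_xlabs_lora_key("double_blocks.0.processor.qkv_lora1.down.weight", 0)` now yields the `to_q`/`to_k`/`to_v` targets rather than the `add_*_proj` ones.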

Some results (before LoRA, current conversion, after this PR): [comparison images omitted]
cc: @sayakpaul @apolinario
Apologies for the messy commits; only the relevant file should have been changed.

Before submitting

  • [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • [x] Did you read the contributor guideline?
  • [x] Did you read our philosophy doc (important for complex PRs)?
  • [x] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.

@yiyixuxu yiyixuxu requested a review from sayakpaul September 3, 2024 00:09

@sayakpaul sayakpaul left a comment


Thanks a lot. Cc: @apolinario for awareness.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@sayakpaul sayakpaul merged commit 1c1ccaa into huggingface:main Sep 3, 2024
sayakpaul added a commit that referenced this pull request Dec 23, 2024
* Fix `from_single_file` for xl_inpaint

* Add basic flux inpaint pipeline

* style, quality, stray print

* Fix stray changes

* Add inpainting model support

* Change lora conversion for xlabs

* Fix stray changes

* Apply suggestions from code review

* style

---------

Co-authored-by: Sayak Paul <[email protected]>
