
Fix running LoRA with xformers #2286


Merged: 4 commits merged into huggingface:main from bddppq:lora-xformers on Feb 13, 2023

Conversation

@bddppq (Contributor) commented on Feb 8, 2023

Fixes #2247 and #2124.

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
lora_dir = "/path/to/lora"  # placeholder: directory containing the LoRA attention weights
pipe.unet.load_attn_procs(lora_dir)
pipe.to('cuda:2')

prompt = 'a cute cat'

# 1. Default attention
image = pipe(prompt, guidance_scale=9, num_inference_steps=25).images[0]
image.save('1.png')

# 2. xformers memory-efficient attention
pipe.enable_xformers_memory_efficient_attention()
image = pipe(prompt, guidance_scale=9, num_inference_steps=25).images[0]
image.save('2.png')

# 3. Back to default attention
pipe.disable_xformers_memory_efficient_attention()
image = pipe(prompt, guidance_scale=9, num_inference_steps=25).images[0]
image.save('3.png')

@HuggingFaceDocBuilderDev commented on Feb 8, 2023

The documentation is not available anymore as the PR was closed or merged.

@patrickvonplaten (Contributor)

Great job @bddppq! This looks exactly like it should :-)

@patrickvonplaten (Contributor)

Could we maybe add one quick test that ensures the two implementations are identical? It should be pretty easy to add:

  1. Copy-paste the existing test def test_lora_on_off(self) and give it a new name with "xformers" in it.
  2. Verify that xformers on/off gives the same results.

That would be amazing!

Also happy to do it if you're too busy #1877
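
For illustration, a minimal sketch of what such a test could look like (the get_pipeline_with_lora helper, the prompt, and the tolerance are assumptions, not the test that was actually merged; numpy as np and torch are assumed to be imported):

def test_lora_xformers_on_off(self):
    # Hypothetical helper: builds a small pipeline with LoRA attention
    # processors loaded, mirroring the setup in test_lora_on_off.
    pipe = self.get_pipeline_with_lora()

    # Run once with the default attention processors.
    pipe.disable_xformers_memory_efficient_attention()
    generator = torch.manual_seed(0)
    image_default = pipe("a cute cat", num_inference_steps=2,
                         generator=generator, output_type="np").images

    # Run again with xformers memory-efficient attention enabled.
    pipe.enable_xformers_memory_efficient_attention()
    generator = torch.manual_seed(0)
    image_xformers = pipe("a cute cat", num_inference_steps=2,
                          generator=generator, output_type="np").images

    # The two attention backends should agree up to numerical noise.
    assert np.abs(image_default - image_xformers).max() < 1e-3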

@patrickvonplaten patrickvonplaten merged commit 5d4f59e into huggingface:main Feb 13, 2023
@patrickvonplaten (Contributor)

Thanks a lot!

@bddppq bddppq deleted the lora-xformers branch February 24, 2023 04:25
mengfei25 pushed a commit to mengfei25/diffusers that referenced this pull request Mar 27, 2023
* Fix running LoRA with xformers

* support disabling xformers

* reformat

* Add test
@juancopi81 (Contributor)

I found this issue while noticing that my inference times also increased after loading a LoRA. I am using a LoRA from civitai:

pipe.load_lora_weights("/path/to/lora/", weight_name="add_detail.safetensors")

Is it normal that inference now takes longer? Should I load the model differently? It is not clear to me from the example whether I should first run the model without xformers and then activate it.

Thank you very much!

@juancopi81 (Contributor)

Never mind! I found

pipe.fuse_lora()

and this worked!
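
For anyone hitting the same slowdown, a minimal sketch of that workflow (the LoRA path and weight file name are placeholders carried over from the comment above):

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
pipe.load_lora_weights("/path/to/lora/", weight_name="add_detail.safetensors")

# Merge the LoRA weights into the base model weights so the extra
# low-rank computation no longer runs at every denoising step.
pipe.fuse_lora()

pipe.enable_xformers_memory_efficient_attention()

fuse_lora folds the low-rank updates into the base weights, so the per-step cost returns to the no-LoRA baseline; pipe.unfuse_lora() restores the separate weights if needed.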

yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023 (same commit message as above)

AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request Apr 26, 2024 (same commit message as above)