Allow lora from pipeline #2129
Conversation
@pcuenca @patil-suraj @hysts could you take a look here? In order to use LoRA from the pipeline, we need to allow one to pass `cross_attention_kwargs` to the pipeline call function.
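A minimal sketch of the intended usage (the checkpoint ID and LoRA path below are placeholders, not taken from this PR):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base pipeline (checkpoint ID is illustrative).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load LoRA attention processors into the UNet (placeholder path).
pipe.unet.load_attn_procs("path/to/lora")

# With this PR, extra kwargs reach the attention processors at call time;
# the LoRA processors read `scale` to weight their contribution.
image = pipe(
    "a photo of an astronaut riding a horse",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```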
The documentation is not available anymore as the PR was closed or merged.
Looks good to me!
LGTM!
Thanks!
With xformers, this appears to return an error in the most recent version.
@patrickvonplaten seems similar to #2334 (comment), no?
Hmm, no, I think here the wrong …
@patrickvonplaten Does the LoRA stay in the pipe after an inference? We keep the pipe in memory and use it for multiple inferences. It seems that the previously loaded LoRA is kept in memory for the next inference when a new one is not loaded. Also, diffusers only supports one LoRA at a time, right?
Hey @thihamin, that's right: the previously loaded LoRA is kept if no new attn processor is loaded.
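Continuing the sketch above, roughly what that implies for long-lived pipelines (`set_default_attn_processor` is the UNet helper that restores the stock processors; paths remain placeholders):

```python
prompt = "a photo of an astronaut riding a horse"

# The LoRA processors stay attached between calls, so every inference on
# this pipeline keeps using them ...
image_a = pipe(prompt, cross_attention_kwargs={"scale": 1.0}).images[0]

# ... until they are replaced by loading a different set of processors
pipe.unet.load_attn_procs("path/to/another_lora")  # placeholder path

# or reset back to the original (non-LoRA) attention processors.
pipe.unet.set_default_attn_processor()
```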
Will there be support in Diffusers for two or more LoRAs anytime soon?
Hey @kirit93, could you maybe open a feature request for this? :-)
#2613
Hi, have you found a solution to use a LoRA safetensors file?
* [LoRA] Allow to use in inference with pipeline
* [LoRA] allow cross attention kwargs passed to pipeline
* finish
Should allow the following:
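A hedged sketch of the enabled usage, with placeholder checkpoint ID and LoRA path; `scale` weights the LoRA contribution:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("path/to/lora")  # placeholder path

prompt = "a photo of a pokemon"

# scale=0.0 disables the LoRA contribution; scale=1.0 applies it fully.
image_without_lora = pipe(prompt, cross_attention_kwargs={"scale": 0.0}).images[0]
image_with_lora = pipe(prompt, cross_attention_kwargs={"scale": 1.0}).images[0]
```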
Also cc @sayakpaul @apolinario