[2737]: Add DPMSolverMultistepScheduler to CLIP guided community pipeline #2779
Conversation
The documentation is not available anymore as the PR was closed or merged.
@nipunjindal thank you! Do you happen to know what those lines are meant to do?

```python
fac = torch.sqrt(beta_prod_t)
sample = pred_original_sample * (fac) + latents * (1 - fac)
```

And why does it use the cutouts + spherical loss rather than the loss from the original paper? Also, it is probably worth adding a few side-by-side images of w/o classifier guidance vs. w/ classifier guidance (for different classifier guidance scales).
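The interpolation in question can be sketched numerically. This is a minimal illustration with scalar stand-ins for the pipeline's tensors (the values are made up for the example):

```python
import math

# Hypothetical scalar stand-ins for the tensors in the pipeline.
alpha_prod_t = 0.36            # cumulative alpha product at the current timestep
beta_prod_t = 1 - alpha_prod_t

pred_original_sample = 2.0     # model's estimate of the clean latent (x0)
latents = 0.5                  # current noisy latent (x_t)

# fac -> 1 early in sampling (high noise), so the blend leans on the
# predicted clean sample; fac -> 0 near the end, leaning on the latent.
fac = math.sqrt(beta_prod_t)
sample = pred_original_sample * fac + latents * (1 - fac)
print(round(sample, 3))  # 0.8 * 2.0 + 0.2 * 0.5 = 1.7
```

The effect is a noise-level-dependent blend, so the CLIP loss is evaluated on something between the raw latent and the fully denoised estimate.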
What would be the process for including the ancestral samplers as well? I find these work best for CLIP guidance when using the SDK from stability.ai.
```python
if isinstance(self.scheduler, LMSDiscreteScheduler):
    sigma = self.scheduler.sigmas[index]
    # the model input needs to be scaled to match the continuous ODE formulation in K-LMS
    latent_model_input = latents / ((sigma**2 + 1) ** 0.5)
else:
    latent_model_input = latents
```
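The K-LMS input scaling above can be sketched numerically with plain Python scalars (illustrative values only; newer diffusers versions move this into the scheduler's `scale_model_input` method):

```python
import math

sigma = 3.0    # hypothetical noise level, as read from scheduler.sigmas[index]
latents = 2.0  # hypothetical latent value

# Divide by sqrt(sigma^2 + 1) so the variance of the model input matches
# the continuous ODE formulation used by K-LMS.
latent_model_input = latents / math.sqrt(sigma**2 + 1)
print(round(latent_model_input, 4))  # 2.0 / sqrt(10) ≈ 0.6325
```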
Do you mind elaborating on this change?
This looks correct! This should not be done in the pipeline :-)
LGTM, well done 🔥
Could you maybe also include a few lines about this support in the README as well since it improves the results (efficiency-wise at least). Do you think that would make sense?
@patrickvonplaten could you give it a quick review too?
The process would not differ much. You would need to add an ancestral sampler to the condition (such as this one). If it's not immediately compatible, some changes might be necessary. An example: diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py, line 344 in 92e1164
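A minimal sketch of how the scheduler dispatch might be extended to an ancestral sampler. Dummy stand-in classes are used so the pattern runs without diffusers installed; in the pipeline these would be the real `LMSDiscreteScheduler` / `EulerAncestralDiscreteScheduler` classes, and the exact scaling would depend on that scheduler's API:

```python
# Stand-in classes; in the real pipeline, import these from diffusers.
class LMSDiscreteScheduler: ...
class EulerAncestralDiscreteScheduler: ...
class DPMSolverMultistepScheduler: ...

def scale_input(scheduler, latents, sigma):
    # Sigma-based (continuous) schedulers rescale the model input;
    # ancestral Euler uses the same sqrt(sigma^2 + 1) convention.
    if isinstance(scheduler, (LMSDiscreteScheduler, EulerAncestralDiscreteScheduler)):
        return latents / ((sigma**2 + 1) ** 0.5)
    return latents

print(scale_input(DPMSolverMultistepScheduler(), 1.0, 3.0))  # 1.0 (no scaling)
```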
Ok to merge for me
[2737]: Add DPMSolverMultistepScheduler to CLIP guided community pipeline (huggingface#2779)
Co-authored-by: njindal <[email protected]>
Co-authored-by: Patrick von Platen <[email protected]>
Enabled DPMSolverMultistepScheduler in CLIP-guided pipeline.
Issue: #2737
Here is code to test the changes:
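The author's original test snippet is not preserved here; the following is a hedged sketch of how the community pipeline is typically exercised with the new scheduler (the model IDs, prompt, and guidance values are illustrative assumptions, and running it requires a GPU and model downloads):

```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from transformers import CLIPFeatureExtractor, CLIPModel

# Illustrative model choices, not necessarily the ones used in the PR.
clip_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
feature_extractor = CLIPFeatureExtractor.from_pretrained(clip_id)
clip_model = CLIPModel.from_pretrained(clip_id, torch_dtype=torch.float16)

guided_pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="clip_guided_stable_diffusion",
    clip_model=clip_model,
    feature_extractor=feature_extractor,
    torch_dtype=torch.float16,
)
# Swap in the scheduler enabled by this PR.
guided_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
    guided_pipeline.scheduler.config
)
guided_pipeline = guided_pipeline.to("cuda")

image = guided_pipeline(
    "fantasy book cover, full moon",
    num_inference_steps=10,  # DPM-Solver++ converges in very few steps
    clip_guidance_scale=100,
).images[0]
image.save("clip_guided.png")
```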
Within 10 steps, the new scheduler is able to produce good results.
