Support views batch for panorama #3632
Conversation
The documentation is not available anymore as the PR was closed or merged.
cc @sayakpaul can you take a look?
@@ -474,6 +474,7 @@ def __call__(
         width: Optional[int] = 2048,
         num_inference_steps: int = 50,
         guidance_scale: float = 7.5,
+        view_batch_size: int = 1,
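For context: the panorama pipeline denoises a wide latent by sliding a fixed-size window across it, and the new `view_batch_size` argument controls how many of those windows are denoised per UNet forward pass. A simplified sketch of the window computation follows; the window size, stride, and the 8x latent scaling are illustrative assumptions, not the pipeline's exact code.

```python
def get_views(panorama_height, panorama_width, window_size=64, stride=8):
    """Compute sliding-window coordinates over the latent grid.

    Heights/widths are given in pixels; latents are assumed 8x smaller.
    Returns a list of (h_start, h_end, w_start, w_end) tuples, one per view.
    """
    panorama_height //= 8
    panorama_width //= 8
    num_blocks_height = (panorama_height - window_size) // stride + 1 if panorama_height > window_size else 1
    num_blocks_width = (panorama_width - window_size) // stride + 1 if panorama_width > window_size else 1
    views = []
    for i in range(num_blocks_height * num_blocks_width):
        h_start = (i // num_blocks_width) * stride
        w_start = (i % num_blocks_width) * stride
        views.append((h_start, h_start + window_size, w_start, w_start + window_size))
    return views
```

With the defaults above, a 512x2048 panorama yields a single row of overlapping 64x64 latent windows; `view_batch_size` then decides how many of them share one denoising call.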
Let's also add an entry for this arg in the docstrings?
src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
@@ -508,6 +510,9 @@ def __call__(
                 Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
                 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
                 usually at the expense of lower image quality.
+            view_batch_size (`int`, *optional*, defaults to 1):
+                The batch size to denoise splited views. For some GPUs with high performance, higher view batch size
Suggested change:
-                The batch size to denoise splited views. For some GPUs with high performance, higher view batch size
+                The batch size to denoise split views. For some GPUs with high performance, higher view batch size
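Concretely, the split views can be grouped into chunks of `view_batch_size` so that each forward pass denoises several windows at once. This is a hypothetical helper for illustration, not the PR's actual code:

```python
def batch_views(views, view_batch_size=1):
    """Group per-window view entries into batches of size view_batch_size.

    The last batch may be smaller when the view count is not evenly
    divisible by view_batch_size (the boundary condition the PR's
    tests mention).
    """
    return [views[i : i + view_batch_size] for i in range(0, len(views), view_batch_size)]
```

For example, 25 views with `view_batch_size=8` produce four batches of sizes 8, 8, 8, and 1.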
LGTM!
I think there's just one thing pending -- mention this argument from the doc and comment about the potential speedups.
Good to merge for me whenever you want @sayakpaul
Thanks for the PR @Isotr0py, looks good!
Added a "tip" on
* support views batch for panorama
* add entry for the new argument
* format entry for the new argument
* add view_batch_size test
* fix batch test and a boundary condition
* add more docstrings
* fix a typos
* fix typos
* add: entry to the doc about view_batch_size.
* Revert "add: entry to the doc about view_batch_size." (reverts commit a36aeaa)
* add a tip on .

Co-authored-by: Sayak Paul <[email protected]>
On a P100 GPU, generation was faster with `view_batch_size = 8` than with `view_batch_size = 1`.

test code
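The speedup comes from issuing fewer (but larger) UNet forward passes per denoising step. A toy model of the per-step call count, using a hypothetical count of 25 views for a 512x2048 panorama:

```python
import math

def unet_calls_per_step(num_views, view_batch_size):
    # Each batch of views is stacked and denoised in a single forward
    # pass, so the number of model calls per denoising step shrinks as
    # the view batch size grows (up to the GPU's memory limit).
    return math.ceil(num_views / view_batch_size)

calls_unbatched = unet_calls_per_step(25, 1)  # one call per view
calls_batched = unet_calls_per_step(25, 8)    # several views per call
```

Whether the larger batched calls are actually faster end-to-end depends on the GPU; the P100 numbers above suggest they are on that hardware.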