
Conversation

hysts (Contributor) commented Aug 24, 2022

Close #172

Hi, @patrickvonplaten
I think fda6308 is what you initially had in mind, but I think it would be better if we could set other tqdm parameters as well: sometimes it is preferable to use leave=False rather than disabling tqdm entirely, especially when generating images with the DDPM scheduler and its 1000 steps.
So I suggest 3aa44e8. What do you think?

Usage:

from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('CompVis/ldm-celebahq-256')
pipeline.to('cuda:0')

# if one wants to set `leave=False`
pipeline.set_progress_bar_config(leave=False)

# if one wants to disable `tqdm`
pipeline.set_progress_bar_config(disable=True)
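
To illustrate why leave=False can be preferable to disabling the bar entirely, here is a minimal sketch under the same API (the checkpoint id and the loop are placeholders for illustration, not part of this PR): when several images are generated in a row, each finished 1000-step DDPM bar is cleared instead of piling up in the terminal.

from diffusers import DDPMPipeline

# Placeholder checkpoint id, purely for illustration.
pipeline = DDPMPipeline.from_pretrained("google/ddpm-celebahq-256")
pipeline.to("cuda:0")

# Each DDPM run draws a 1000-step bar; `leave=False` erases the bar once it
# finishes, so repeated generations don't stack completed bars in the terminal.
pipeline.set_progress_bar_config(leave=False)

for i in range(4):
    images = pipeline(output_type="numpy")["sample"]  # dict-style output, as in this PR's tests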

BTW, I'm not familiar with the design of diffusers yet and have a question: I added __init__ to the DiffusionPipeline class, but is this OK, or is there a better/intended way to do it?

hysts (Contributor, Author) commented Aug 24, 2022

Oh, I just noticed #172 (comment). But I guess this is already addressed in my PR?

extra_step_kwargs["eta"] = eta

- for i, t in tqdm(enumerate(self.scheduler.timesteps)):
+ for i, t in enumerate(self.progress_bar(self.scheduler.timesteps)):
hysts (Contributor, Author) commented:
As for enumerate and tqdm, if the order is enumerate(tqdm(...)), we don't have to pass total.

Contributor replied:
Agree, that's what it states in the tqdm docs: https://pypi.org/project/tqdm/#faq-and-known-issues
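
For completeness, a tiny standalone sketch of the point being made (plain tqdm, nothing pipeline-specific): enumerate() has no __len__, so tqdm wrapped around it cannot infer the total, whereas wrapping the sized iterable itself lets tqdm pick the total up automatically.

from tqdm.auto import tqdm

timesteps = list(range(1000))

# tqdm around enumerate: the enumerate object has no length,
# so `total` must be passed explicitly to get a proper bar.
for i, t in tqdm(enumerate(timesteps), total=len(timesteps)):
    pass

# enumerate around tqdm: tqdm sees the list itself and infers the total.
for i, t in enumerate(tqdm(timesteps)):
    pass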

HuggingFaceDocBuilderDev commented Aug 24, 2022

The documentation is not available anymore as the PR was closed or merged.

hysts (Contributor, Author) commented Aug 24, 2022

I think #226 and #236 are related to this PR and would conflict with it.


config_name = "model_index.json"

def __init__(self):
Contributor commented:
Could we remove the __init__ function here? It's usually much easier to read components when there is no __init__ function :-)

Contributor added:
The main reason here is that adding an __init__ to a class automatically makes the class harder to understand; e.g., if you inherit from this class and the parent doesn't have an __init__, you're given much more freedom with respect to the "parent" class.
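
A minimal sketch of what the __init__-free variant can look like (only set_progress_bar_config and progress_bar come from this PR's public API; the class name and the private attribute name are assumptions for illustration): the tqdm kwargs are created lazily on first use, so the base class and its subclasses stay constructor-free.

from tqdm.auto import tqdm

class PipelineProgressBarSketch:
    # No __init__: the config attribute is created lazily, so subclasses
    # are free to define (or omit) their own constructors.

    def set_progress_bar_config(self, **kwargs):
        # Store arbitrary tqdm kwargs, e.g. disable=True or leave=False.
        self._progress_bar_config = kwargs

    def progress_bar(self, iterable):
        if not hasattr(self, "_progress_bar_config"):
            self._progress_bar_config = {}
        return tqdm(iterable, **self._progress_bar_config)

With this shape, self.progress_bar(self.scheduler.timesteps) inside a pipeline's __call__ works whether or not set_progress_bar_config was ever called.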

captured = capsys.readouterr()
assert "10/10" in captured.err, "Progress bar has to be displayed"

ddpm.set_progress_bar_config(disable=True)
patrickvonplaten (Contributor) commented Aug 30, 2022:
simple way to turn off the progress bar

patrickvonplaten linked an issue Aug 30, 2022 that may be closed by this pull request
patil-suraj (Contributor) left a comment:

LGTM, thanks a lot for the PR @hysts ❤️

anton-l (Member) left a comment:

LGTM, thanks for working on this @hysts @patrickvonplaten


ddpm = DDPMPipeline(model, scheduler).to(torch_device)
ddpm(output_type="numpy")["sample"]
captured = capsys.readouterr()
Member commented:
Nice test, gonna remember this ;)
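
For readers who haven't seen the trick, here is a reduced sketch of the test technique (the ddpm fixture and its 10-step scheduler are assumed from the hunks above rather than spelled out): tqdm writes its bar to stderr, so pytest's capsys fixture can assert whether the bar appeared.

def test_progress_bar(capsys, ddpm):
    # With the default config, a 10-step run should print a bar to stderr.
    ddpm(output_type="numpy")["sample"]
    captured = capsys.readouterr()
    assert "10/10" in captured.err, "Progress bar has to be displayed"

    # After disabling it, stderr should stay empty.
    ddpm.set_progress_bar_config(disable=True)
    ddpm(output_type="numpy")["sample"]
    captured = capsys.readouterr()
    assert captured.err == "", "Progress bar should be disabled"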

patrickvonplaten (Contributor) commented:
Sorry for having fiddled with your PR quite a bit here @hysts - thanks a mille for the contribution ❤️

patrickvonplaten merged commit 5e84353 into huggingface:main Aug 30, 2022
natolambert pushed a commit that referenced this pull request Sep 7, 2022
* Refactor progress bar of pipeline __call__

* Make any tqdm configs available

* remove init

* add some tests

* remove file

* finish

* make style

* improve progress bar test

Co-authored-by: Patrick von Platen <[email protected]>
PhaneeshB pushed a commit to nod-ai/diffusers that referenced this pull request Mar 1, 2023
* Add debug log of torch_model_blacklist.txt

* Add make_fx for torch model

* Update torch_model_blacklists.txt

* Add some Xfails
PhaneeshB pushed a commit to nod-ai/diffusers that referenced this pull request Mar 1, 2023
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
* Refactor progress bar of pipeline __call__

* Make any tqdm configs available

* remove init

* add some tests

* remove file

* finish

* make style

* improve progress bar test

Co-authored-by: Patrick von Platen <[email protected]>


Development

Successfully merging this pull request may close these issues.

Fix progress bar in Stable Diffusion pipeline
Add options to disable/hide progress bar

5 participants