Add Differential Diffusion to HunyuanDiT. #9040
Conversation
Thank you for working on this, looks good to me! I think we can merge this once you add your name and contribution to the community README file. Also, it looks like the style guide is not followed in a few places in the code. Normally these could be fixed with make style if it were a pipeline in core diffusers. However, since this is a community pipeline, you can run styling with:
ruff check examples/community/pipeline_hunyuandit_differential_img2img.py --fix
@a-r-r-o-w I have fixed the style issues and added the details to the markdown file. If all looks good, you can go ahead and merge this request.
@MnCSSJ4x Could you revert all the other changes apart from adding your name and contribution to the community README? If you'd like to refactor, you can do it in a separate PR as it's out of scope for this one. Please keep the changes here limited.
Sure, I'll try to revert it in a fix. I think some tool might have auto-refactored it.
@a-r-r-o-w Can you please check and let me know if it's ok now? Apologies for bothering you with such trivial issues.
@a-r-r-o-w Thanks for the command. It should be resolved now.
@MnCSSJ4x Looking good implementation-wise. The quality tests seem to be failing. Could you run
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Upon running ruff check examples scripts src tests utils benchmarks setup.py --fix, I got the following errors:
src/diffusers/configuration_utils.py:679:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
677 | if field.name in self._flax_internal_args:
678 | continue
679 | if type(field.default) == dataclasses._MISSING_TYPE:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
680 | default_kwargs[field.name] = None
681 | else:
|
tests/models/test_modeling_common.py:338:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
337 | model.set_default_attn_processor()
338 | assert all(type(proc) == AttnProcessorNPU for proc in model.attn_processors.values())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
339 | with torch.no_grad():
340 | if self.forward_requires_fresh_args:
|
tests/models/test_modeling_common.py:346:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
345 | model.enable_npu_flash_attention()
346 | assert all(type(proc) == AttnProcessorNPU for proc in model.attn_processors.values())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
347 | with torch.no_grad():
348 | if self.forward_requires_fresh_args:
|
tests/models/test_modeling_common.py:354:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
353 | model.set_attn_processor(AttnProcessorNPU())
354 | assert all(type(proc) == AttnProcessorNPU for proc in model.attn_processors.values())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
355 | with torch.no_grad():
356 | if self.forward_requires_fresh_args:
|
tests/models/test_modeling_common.py:389:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
388 | model.set_default_attn_processor()
389 | assert all(type(proc) == AttnProcessor for proc in model.attn_processors.values())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
390 | with torch.no_grad():
391 | if self.forward_requires_fresh_args:
|
tests/models/test_modeling_common.py:397:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
396 | model.enable_xformers_memory_efficient_attention()
397 | assert all(type(proc) == XFormersAttnProcessor for proc in model.attn_processors.values())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
398 | with torch.no_grad():
399 | if self.forward_requires_fresh_args:
|
tests/models/test_modeling_common.py:405:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
404 | model.set_attn_processor(XFormersAttnProcessor())
405 | assert all(type(proc) == XFormersAttnProcessor for proc in model.attn_processors.values())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
406 | with torch.no_grad():
407 | if self.forward_requires_fresh_args:
|
tests/models/test_modeling_common.py:433:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
431 | return
432 |
433 | assert all(type(proc) == AttnProcessor2_0 for proc in model.attn_processors.values())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
434 | with torch.no_grad():
435 | if self.forward_requires_fresh_args:
|
tests/models/test_modeling_common.py:441:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
440 | model.set_default_attn_processor()
441 | assert all(type(proc) == AttnProcessor for proc in model.attn_processors.values())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
442 | with torch.no_grad():
443 | if self.forward_requires_fresh_args:
|
tests/models/test_modeling_common.py:449:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
448 | model.set_attn_processor(AttnProcessor2_0())
449 | assert all(type(proc) == AttnProcessor2_0 for proc in model.attn_processors.values())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
450 | with torch.no_grad():
451 | if self.forward_requires_fresh_args:
|
tests/models/test_modeling_common.py:457:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
456 | model.set_attn_processor(AttnProcessor())
457 | assert all(type(proc) == AttnProcessor for proc in model.attn_processors.values())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
458 | with torch.no_grad():
459 | if self.forward_requires_fresh_args:
|
tests/pipelines/controlnet/test_controlnet_sdxl.py:1022:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
1021 | controlnet = ControlNetModel.from_unet(unet, conditioning_channels=4)
1022 | assert type(controlnet.mid_block) == UNetMidBlock2D
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
1023 | assert controlnet.conditioning_channels == 4
|
tests/pipelines/test_pipelines_common.py:777:21: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
775 | if hasattr(component, "attn_processors"):
776 | assert all(
777 | type(proc) == AttnProcessor for proc in component.attn_processors.values()
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
778 | ), "`from_pipe` changed the attention processor in original pipeline."
|
tests/schedulers/test_schedulers.py:827:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
825 | scheduler_loaded = DDIMScheduler.from_pretrained(f"{USER}/{self.repo_id}")
826 |
827 | assert type(scheduler) == type(scheduler_loaded)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
828 |
829 | # Reset repo
|
tests/schedulers/test_schedulers.py:838:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
836 | scheduler_loaded = DDIMScheduler.from_pretrained(f"{USER}/{self.repo_id}")
837 |
838 | assert type(scheduler) == type(scheduler_loaded)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
839 |
840 | # Reset repo
|
tests/schedulers/test_schedulers.py:854:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
852 | scheduler_loaded = DDIMScheduler.from_pretrained(self.org_repo_id)
853 |
854 | assert type(scheduler) == type(scheduler_loaded)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
855 |
856 | # Reset repo
|
tests/schedulers/test_schedulers.py:865:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
863 | scheduler_loaded = DDIMScheduler.from_pretrained(self.org_repo_id)
864 |
865 | assert type(scheduler) == type(scheduler_loaded)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
866 |
867 | # Reset repo
|
Found 17 errors.
make: *** [style] Error 1
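For reference, E721 flags equality comparisons against types. Below is a minimal, self-contained sketch of the two fixes ruff suggests; the stub class is hypothetical, not one of the actual diffusers attention processors.

```python
# Minimal sketch of the E721 fixes (AttnProcessorStub is a hypothetical stand-in).
class AttnProcessorStub:
    pass

processors = {"blocks.0.attn.processor": AttnProcessorStub()}

# Flagged by E721: equality comparison against a type.
assert all(type(proc) == AttnProcessorStub for proc in processors.values())

# Fix 1: identity comparison, when an exact type match is intended.
assert all(type(proc) is AttnProcessorStub for proc in processors.values())

# Fix 2: isinstance(), when subclasses should also be accepted.
assert all(isinstance(proc, AttnProcessorStub) for proc in processors.values())
```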
Is your ruff version the same as the one in our setup.py? I remember seeing something like this in the past due to incompatible ruff versions.
Hi, yes, the version was different. I fixed it and ran the command; however, I got some error text.
Thank you for your contribution and for bearing with our reviews! This is a very strong good-first-issue finish 🎉
* Add Differential Pipeline.
* Fix Styling Issue using ruff -fix
* Add details to Contributing.md
* Revert "Fix Styling Issue using ruff -fix"
  This reverts commit d347de1.
* Revert "Revert "Fix Styling Issue using ruff -fix""
  This reverts commit ce7c3ff.
* Revert README changes
* Restore README.md
* Update README.md
* Resolved Comments:
* Fix Readme based on review
* Fix formatting after make style

---------

Co-authored-by: Aryan <[email protected]>
What does this PR do?
Adds Differential Diffusion to HunyuanDiT.
Partially fixes #8924 (HunyuanDiT only).
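Differential diffusion replaces the single img2img strength value with a per-pixel change map: at every denoising step, regions that are not yet allowed to change are overwritten with the original content noised to the current timestep, so each pixel starts changing at its own point in the trajectory. The sketch below only illustrates that idea; it is not this pipeline's actual code, and the convention used (higher map value = changes earlier) and the helper names are assumptions.

```python
import torch

def differential_denoise(scheduler, denoise_step, original_latents, change_map, timesteps):
    """Schematic differential-diffusion loop (illustration of the idea only).

    change_map   : tensor broadcastable to `original_latents`, values in [0, 1];
                   in this sketch, higher values are allowed to change earlier.
    denoise_step : callable running one model forward + scheduler step.
    """
    noise = torch.randn_like(original_latents)
    # Start from the fully noised original, as in standard img2img.
    latents = scheduler.add_noise(original_latents, noise, timesteps[:1])

    num_steps = len(timesteps)
    for i, t in enumerate(timesteps):
        # Fraction of the denoising trajectory still remaining.
        remaining = 1.0 - i / num_steps
        # Regions whose map value is below `remaining` are not allowed to
        # change yet: re-inject the original content, noised to level t.
        frozen = (change_map < remaining).to(latents.dtype)
        noised_original = scheduler.add_noise(original_latents, noise, t[None])
        latents = frozen * noised_original + (1.0 - frozen) * latents

        latents = denoise_step(latents, t)

    return latents
```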
Before submitting
How to test:
Gradient
A colab notebook demonstrating all results can be found here. Depth Maps have also been added in the same colab.
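For quick local testing outside the colab, a minimal usage sketch of loading the community pipeline is shown below. The checkpoint ID, the file names, and the map argument name are assumptions (the argument follows the other differential img2img community pipelines), so adjust them to whatever the merged pipeline actually exposes.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Load HunyuanDiT with the community differential img2img pipeline.
# The checkpoint ID below is an assumption; use the HunyuanDiT diffusers
# checkpoint you are testing against.
pipe = DiffusionPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-Diffusers",
    custom_pipeline="pipeline_hunyuandit_differential_img2img",
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("input.png")          # image to edit (hypothetical file)
change_map = load_image("gradient.png")  # per-pixel change map, e.g. a gradient (hypothetical file)

result = pipe(
    prompt="a painting of a mountain landscape at sunset",
    image=image,
    map=change_map,  # argument name assumed from the other differential img2img pipelines
    num_inference_steps=25,
).images[0]
result.save("output.png")
```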
Who can review?
@a-r-r-o-w @DN6