Add Differential Diffusion to HunyuanDiT. #9040


Merged
14 commits merged into huggingface:main on Aug 8, 2024

Conversation

@MnCSSJ4x (Contributor) commented on Aug 1, 2024

What does this PR do?

Adds Differential Diffusion to HunyuanDiT.

Partially fixes #8924 (HunyuanDiT only).

Before submitting

How to test:

Gradient

import torch
from diffusers.utils import load_image

# Community pipeline file added by this PR.
from pipeline_hunyuandit_differential_img2img import (
    HunyuanDiTDifferentialImg2ImgPipeline,
)


pipe = HunyuanDiTDifferentialImg2ImgPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-Diffusers", torch_dtype=torch.float16
).to("cuda")


source_image = load_image(
    "https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/differential/20240329211129_4024911930.png"
)
gradient_map = load_image(
    "https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/differential/gradient_mask_2.png"
)
prompt = "a green pear"
negative_prompt = "blurry"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=source_image,
    num_inference_steps=28,
    guidance_scale=4.5,
    strength=1.0,
    map=gradient_map,  # per-region change map for differential diffusion
).images[0]
(Result images: gradient map, input image, output image)
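
As an aside (not part of the PR), the map is a grayscale image whose per-pixel values control how much each region is allowed to change, so a simple left-to-right gradient map could also be built locally with numpy and PIL instead of downloading one. A sketch, assuming the source_image object from the example above:

import numpy as np
from PIL import Image

# Build a horizontal gradient the same size as the source image; the result
# can be passed to the pipeline as `map` in the same way as the downloaded mask.
width, height = source_image.size
gradient = np.tile(np.linspace(0, 255, width, dtype=np.uint8), (height, 1))
local_gradient_map = Image.fromarray(gradient, mode="L")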

A Colab notebook demonstrating all results can be found here. Depth-map examples have also been added in the same notebook.
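
For the depth-map variant mentioned above, here is a minimal sketch (not taken from the linked notebook; it assumes the transformers depth-estimation pipeline and reuses the pipe, source_image, prompt, and negative_prompt objects from the gradient example):

from transformers import pipeline as hf_pipeline

# Estimate a monocular depth map and reuse it as the change map.
depth_estimator = hf_pipeline("depth-estimation")
depth_map = depth_estimator(source_image)["depth"]  # returned as a PIL image

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=source_image,
    num_inference_steps=28,
    guidance_scale=4.5,
    strength=1.0,
    map=depth_map,
).images[0]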

Who can review?

@a-r-r-o-w @DN6

@MnCSSJ4x changed the title from "Add Differential Pipeline." to "Add Differential Diffusion to HunyuanDiT." on Aug 1, 2024
@MnCSSJ4x marked this pull request as ready for review on August 1, 2024, 13:21
@a-r-r-o-w (Member) left a comment

Thank you for working on this, looks good to me! I think we can merge this once you add your name and contribution to the community README file. Also, it seems the style guide is not followed in places in the code. Normally these could be fixed with make style if it were a pipeline in core diffusers. However, since this is a community pipeline, you can run styling with:
ruff check examples/community/pipeline_hunyuandit_differential_img2img.py --fix

@MnCSSJ4x (Contributor, Author) commented on Aug 3, 2024

@a-r-r-o-w I have fixed the style issues and added the details to the markdown file. If all looks good, you can go ahead and merge this request.

@a-r-r-o-w (Member)

@MnCSSJ4x Could you revert all the other changes apart from adding your name and contribution to the community README? If you'd like to refactor, you can do it in a separate PR, as it's out of scope for this one. Please keep the changes here limited.

@MnCSSJ4x (Contributor, Author) commented on Aug 3, 2024

> @MnCSSJ4x Could you revert all the other changes apart from adding your name and contribution to the community README? If you'd like to refactor, you can do it in a separate PR, as it's out of scope for this one. Please keep the changes here limited.

Sure, I'll revert those in a fix. I think some tool might have auto-refactored it.

@MnCSSJ4x (Contributor, Author) commented on Aug 3, 2024

@a-r-r-o-w Can you please check and let me know if it's ok now? Apologies for bothering you with such trivial issues.

@a-r-r-o-w (Member) commented on Aug 3, 2024

No problem, but there are still a large number of differences. Please run git restore -s main examples/community/README.md first, then add your name to the table and an example of how to run Hunyuan DiffDiff. You will know the PR is ready if it contains only additions here; currently it contains many deletions due to additional changes.
(screenshot of the PR diff showing the deletions)

@MnCSSJ4x (Contributor, Author) commented on Aug 3, 2024

@a-r-r-o-w Thanks for the command. It should be resolved now.

@a-r-r-o-w (Member)

@MnCSSJ4x Looking good implementation-wise. The quality tests seem to be failing. Could you run make style and ensure that the tests pass locally too?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@MnCSSJ4x (Contributor, Author) commented on Aug 3, 2024

> @MnCSSJ4x Looking good implementation-wise. The quality tests seem to be failing. Could you run make style and ensure that the tests pass locally too?

Upon running make style I get the following output and am unable to track down where exactly things are going wrong or whether I introduced these issues.

ruff check examples scripts src tests utils benchmarks setup.py --fix
src/diffusers/configuration_utils.py:679:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
677 |             if field.name in self._flax_internal_args:
678 |                 continue
679 |             if type(field.default) == dataclasses._MISSING_TYPE:
    |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
680 |                 default_kwargs[field.name] = None
681 |             else:
    |

tests/models/test_modeling_common.py:338:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
337 |         model.set_default_attn_processor()
338 |         assert all(type(proc) == AttnProcessorNPU for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
339 |         with torch.no_grad():
340 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:346:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
345 |         model.enable_npu_flash_attention()
346 |         assert all(type(proc) == AttnProcessorNPU for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
347 |         with torch.no_grad():
348 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:354:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
353 |         model.set_attn_processor(AttnProcessorNPU())
354 |         assert all(type(proc) == AttnProcessorNPU for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
355 |         with torch.no_grad():
356 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:389:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
388 |         model.set_default_attn_processor()
389 |         assert all(type(proc) == AttnProcessor for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
390 |         with torch.no_grad():
391 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:397:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
396 |         model.enable_xformers_memory_efficient_attention()
397 |         assert all(type(proc) == XFormersAttnProcessor for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
398 |         with torch.no_grad():
399 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:405:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
404 |         model.set_attn_processor(XFormersAttnProcessor())
405 |         assert all(type(proc) == XFormersAttnProcessor for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
406 |         with torch.no_grad():
407 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:433:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
431 |             return
432 | 
433 |         assert all(type(proc) == AttnProcessor2_0 for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
434 |         with torch.no_grad():
435 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:441:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
440 |         model.set_default_attn_processor()
441 |         assert all(type(proc) == AttnProcessor for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
442 |         with torch.no_grad():
443 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:449:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
448 |         model.set_attn_processor(AttnProcessor2_0())
449 |         assert all(type(proc) == AttnProcessor2_0 for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
450 |         with torch.no_grad():
451 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:457:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
456 |         model.set_attn_processor(AttnProcessor())
457 |         assert all(type(proc) == AttnProcessor for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
458 |         with torch.no_grad():
459 |             if self.forward_requires_fresh_args:
    |

tests/pipelines/controlnet/test_controlnet_sdxl.py:1022:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
     |
1021 |         controlnet = ControlNetModel.from_unet(unet, conditioning_channels=4)
1022 |         assert type(controlnet.mid_block) == UNetMidBlock2D
     |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
1023 |         assert controlnet.conditioning_channels == 4
     |

tests/pipelines/test_pipelines_common.py:777:21: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
775 |             if hasattr(component, "attn_processors"):
776 |                 assert all(
777 |                     type(proc) == AttnProcessor for proc in component.attn_processors.values()
    |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
778 |                 ), "`from_pipe` changed the attention processor in original pipeline."
    |

tests/schedulers/test_schedulers.py:827:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
825 |         scheduler_loaded = DDIMScheduler.from_pretrained(f"{USER}/{self.repo_id}")
826 | 
827 |         assert type(scheduler) == type(scheduler_loaded)
    |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
828 | 
829 |         # Reset repo
    |

tests/schedulers/test_schedulers.py:838:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
836 |         scheduler_loaded = DDIMScheduler.from_pretrained(f"{USER}/{self.repo_id}")
837 | 
838 |         assert type(scheduler) == type(scheduler_loaded)
    |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
839 | 
840 |         # Reset repo
    |

tests/schedulers/test_schedulers.py:854:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
852 |         scheduler_loaded = DDIMScheduler.from_pretrained(self.org_repo_id)
853 | 
854 |         assert type(scheduler) == type(scheduler_loaded)
    |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
855 | 
856 |         # Reset repo
    |

tests/schedulers/test_schedulers.py:865:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
863 |         scheduler_loaded = DDIMScheduler.from_pretrained(self.org_repo_id)
864 | 
865 |         assert type(scheduler) == type(scheduler_loaded)
    |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
866 | 
867 |         # Reset repo
    |

Found 17 errors.
make: *** [style] Error 1 
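
For reference, E721 flags exact type comparisons written with ==; the rule prefers an identity check or isinstance(). A minimal, self-contained illustration (using a hypothetical Processor class, not diffusers code):

# Minimal illustration of ruff's E721 rule (hypothetical Processor class).
class Processor:
    pass


proc = Processor()

flagged = type(proc) == Processor            # E721: exact type comparison with ==
ok_identity = type(proc) is Processor        # preferred: identity check for exact type
ok_isinstance = isinstance(proc, Processor)  # preferred: also accepts subclasses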

@a-r-r-o-w (Member)

Is your ruff version the same as the one in our setup.py? I remember seeing something like this in the past due to incompatible ruff versions.

@MnCSSJ4x (Contributor, Author) commented on Aug 5, 2024

> Is your ruff version the same as the one in our setup.py? I remember seeing something like this in the past due to incompatible ruff versions.

Hi, yes, the version was different. I fixed it and ran make style; the formatting issues were fixed and I pushed the changes.

However, I got some error output:

ruff check examples scripts src tests utils benchmarks setup.py --fix
ruff format examples scripts src tests utils benchmarks setup.py
1 file reformatted, 1036 files left unchanged
doc-builder style src/diffusers docs/source --max_len 119
make: doc-builder: No such file or directory
make: *** [style] Error 1

@a-r-r-o-w (Member) left a comment

Thank you for your contribution and bearing with our reviews! This is a very strong good-first-issue finish 🎉

@a-r-r-o-w merged commit 1fcb811 into huggingface:main on Aug 8, 2024
8 checks passed
sayakpaul pushed a commit that referenced this pull request Dec 23, 2024
* Add Differential Pipeline.

* Fix Styling Issue using ruff -fix

* Add details to Contributing.md

* Revert "Fix Styling Issue using ruff -fix"

This reverts commit d347de1.

* Revert "Revert "Fix Styling Issue using ruff -fix""

This reverts commit ce7c3ff.

* Revert README changes

* Restore README.md

* Update README.md

* Resolved Comments:

* Fix Readme based on review

* Fix formatting after make style

---------

Co-authored-by: Aryan <[email protected]>