forked from huggingface/diffusers
V0.17 #9
Merged
Conversation
Commits included in this merge:

Post release; fix more.
[docs] Only mention one stage; add a blurb on auto-accepting. Co-authored-by: William Berman <[email protected]>
Write the model card in the ControlNet training script.
[2064] Add stochastic sampler; address review comments (add `is_torchsde_available()`, tests, and docs). Co-authored-by: njindal <[email protected]>
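The sampler ships as `DPMSolverSDEScheduler` and needs the `torchsde` package (hence the new `is_torchsde_available()` check). A minimal sketch of swapping it into a pipeline; the model id is only an example:

```python
# pip install diffusers torchsde
import torch
from diffusers import StableDiffusionPipeline, DPMSolverSDEScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Swap in the stochastic (SDE) sampler added by this commit.
pipe.scheduler = DPMSolverSDEScheduler.from_config(pipe.scheduler.config)
image = pipe("an astronaut riding a horse", num_inference_steps=25).images[0]
```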
[Slow Test] CUDA test fixes. Co-authored-by: njindal <[email protected]>
Remove `required` from `tracker_project_name`. As observed by https://github.com/off99555 in huggingface#2695 (comment), it already has a default value.
…n_block (huggingface#3210): remove unnecessary parameters from the `get_up_block` and `get_down_block` functions; add `resnet_skip_time_act`, `resnet_out_scale_factor`, and `cross_attention_norm` to both. Co-authored-by: Sayak Paul <[email protected]>
Update repaint.mdx to accommodate huggingface#1701. Co-authored-by: multimodalart <[email protected]>
The pipeline is built as `pipe` but then used as `pipeline`; make the two names consistent.
…ingface#3114): config fixes; deprecate `get_input_dims`.
…eline"" (huggingface#3265) Revert "Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline" (huggingface#3201)" This reverts commit 91a2a80.
Co-authored-by: yiyixuxu <[email protected]>
[docs] Add notes for stateful model changes; update docs/source/en/optimization/fp16.mdx; link to the Accelerate docs for discarding hooks. Co-authored-by: Pedro Cuenca <[email protected]>
…huggingface#3180): QoL improvements for LoRA; fix LoRA weight loading with the new format; reword docs to encourage the use of `load_lora_weights()`. Co-authored-by: Patrick von Platen <[email protected]>
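For context, `load_lora_weights()` loads both the UNet and text-encoder LoRA layers in one call. A minimal sketch, assuming a placeholder repo id:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("user/lora-repo")  # placeholder repo id
image = pipe("a prompt in the style the LoRA was trained on").images[0]
```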
EDICT pipeline initial commit (starting point taken from https://github.com/Joqsan/edict-diffusion): refactor the `__init__()` method and the scheduler code (remove the scheduler and move its methods into the `EDICTPipeline` class); make CFG optional; refactor `encode_prompt()`; include an optional generator for sampling with the VAE; minor variable renaming; add an EDICT pipeline description to README.md; replace `preprocess()` with `VaeImageProcessor`; run `make style` and `make quality`. Co-authored-by: Patrick von Platen <[email protected]>
Update the zh-translated docs; update _toctree.
Fix typos
…uggingface#3125): multi-ControlNet inpainting.
…e#3272): update the DreamBooth README, adapt all docs, and automatically write the model card.
Add the Stable Diffusion DiffEdit pipeline: draft documentation and import code; bugfixes and refactoring; an option to not decode latents in the inversion process; harmonized preprocessing; rename `compute_mask` to `generate_mask`; change the shape of output latents to batch-first; add `cross_attention_kwargs` support for all pipeline methods; support PIL image latents and mask broadcasting; align the `mask` argument to `mask_image` and remove the height and width arguments; enable MPS tests; harmonize `prepare_image_latents` with `StableDiffusionPix2PixZeroPipeline`; register modules set to `None` in the config for `test_save_load_optional_components`; enforce determinism in the noise for `generate_mask`; widen the fp16 test tolerance based on `test_stable_diffusion_upscale_pipeline_fp16`; add `LoraLoaderMixin`; suppress a spurious warning by repeating the image before latent-to-mask generation; add a reference in the SD Pipeline Overview. (The initial "Update Pix2PixZero Auto-correlation Loss" change was reverted in commit b218062.)
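A sketch of the new pipeline's three-step flow (mask generation, inversion, guided editing); the model id, image URL, and prompts are illustrative:

```python
import torch
from diffusers import (
    StableDiffusionDiffEditPipeline,
    DDIMScheduler,
    DDIMInverseScheduler,
)
from diffusers.utils import load_image

pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

img = load_image("https://example.com/fruit_bowl.png").resize((768, 768))  # placeholder URL

# 1) Generate an edit mask (renamed from compute_mask in this PR).
mask = pipe.generate_mask(
    image=img, source_prompt="a bowl of fruits", target_prompt="a bowl of pears"
)
# 2) Invert the image into latents (decoding during inversion is now optional).
inv_latents = pipe.invert(prompt="a bowl of fruits", image=img).latents
# 3) Edit using the mask and the inverted latents.
image = pipe(
    prompt="a bowl of pears", mask_image=mask, image_latents=inv_latents
).images[0]
```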
Add a piecewise-constant LR scheduler: add constant LR with rules, register it in `TYPE_TO_SCHEDULER_FUNCTION`, fix code quality and doc style, and rename `constant_with_rules` to `piecewise_constant`.
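A sketch of the new schedule via `get_scheduler`; the `step_rules` string format shown here ("multiplier:step" pairs with a trailing default) follows my reading of the commit:

```python
import torch
from diffusers.optimization import get_scheduler

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# LR multiplier 1.0 for the first 10 steps, 0.1 until step 20,
# 0.01 until step 30, then 0.005 for the remainder of training.
lr_scheduler = get_scheduler(
    "piecewise_constant",
    optimizer=optimizer,
    step_rules="1:10,0.1:20,0.01:30,0.005",
)

for _ in range(40):
    optimizer.step()
    lr_scheduler.step()
```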
Allow disabling torch 2.0 attention; update src/diffusers/models/attention.py.
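A sketch of opting out of the PyTorch 2.0 scaled-dot-product attention path by installing the vanilla processor (assuming the standard classes in `diffusers.models.attention_processor`):

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# On torch >= 2.0, diffusers picks the SDPA-based AttnProcessor2_0 by
# default; setting the vanilla processor falls back to classic attention.
pipe.unet.set_attn_processor(AttnProcessor())
```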
Add a link to the training script. Co-authored-by: yiyixuxu <[email protected]>
The note-seq package throws an error on import because the default installed version of IPython is not compatible with Python 3.8, which we run in the CI. https://github.com/huggingface/diffusers/actions/runs/4830121056/jobs/8605954838#step:7:9
A pipeline object stores its results in `images`, not in `sample`; the current code blocks don't work.
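For clarity, the corrected access pattern (the model id is illustrative):

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
result = pipe("a photo of a cat")
image = result.images[0]   # correct: pipeline outputs expose `images`
# image = result.sample[0]  # wrong: there is no `sample` attribute
```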
Assorted fixes: apply code-review suggestions, run `make style` and `make fix-copies`, make sure torch compile works, clean up, and fix a test.
Load A1111 LoRA checkpoints; apply review feedback.
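With this change, `load_lora_weights()` also accepts A1111/Kohya-style `.safetensors` checkpoints; a sketch with a placeholder file name:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# "my_a1111_lora.safetensors" is a placeholder file in the current directory.
pipe.load_lora_weights(".", weight_name="my_a1111_lora.safetensors")
```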
Correct multi-GPU handling.
Update the Compel documentation for textual inversions; fix a typo.
…ggingface#3675): fix an assertion.
Part 2 of API fixes: move `randn_tensor`, add it to the toctree, and apply review feedback.
`VaeImageProcessor.preprocess` refactor: allow passing optional `height` and `width` arguments to `resize()`; add `convert_to_rgb`; refactor the `prepare_latents` method of the img2img pipelines so that latents passed directly as the image input are not encoded again; add a test in test_pipelines_common.py for latents as image inputs; refactor the img2img pipelines that accept latents as image (ControlNet img2img, Stable Diffusion img2img, InstructPix2Pix). Co-authored-by: yiyixuxu <yixu310@gmail,com> Co-authored-by: Patrick von Platen <[email protected]> Co-authored-by: Pedro Cuenca <[email protected]> Co-authored-by: Sayak Paul <[email protected]>
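A sketch of the latents-as-image path this refactor enables: keep a pipeline's output in latent space and feed it straight into an img2img pipeline, skipping the decode/re-encode round trip (model id illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

text2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
img2img = StableDiffusionImg2ImgPipeline(**text2img.components)

# Keep the output as latents instead of decoding to PIL.
latents = text2img("a fantasy landscape", output_type="latent").images

# After this refactor, latents passed as `image` are used directly
# rather than being re-encoded through the VAE.
image = img2img("a fantasy landscape, autumn", image=latents).images[0]
```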
Move activation dispatches into a helper function; add tests.
Fix the link to `load_lora_weights`.
…e#3649): merge undoable-monkeypatch; remove `TEXT_ENCODER_TARGET_MODULES` while refactoring; move `create_lora_weight_file`.
Add a LoRA attention processor for PyTorch 2.0: use an explicit context manager for SDPA and explicitly specify the flash-attention kernel, then fall back to the efficient-attention context manager and remove the explicit dispatch; make shapes compatible to work optimally with SDPA; fix a circular-import problem; remove an `Optional` from a type annotation; make changes regarding `LoRAAttnProcessor2_0`; remove a confusing warning; fix the loading message; remove unnecessary logging; relax test tolerances for PT 2.0; add the `network_alpha` argument; add an entry to the docs.
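After loading LoRA weights on torch >= 2.0, the UNet's attention modules should dispatch through the new SDPA-backed processor; a small sketch to verify (the repo id is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import LoRAAttnProcessor2_0

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("user/lora-repo")  # placeholder repo id

# Expect LoRAAttnProcessor2_0 everywhere when torch >= 2.0 is installed.
print(all(
    isinstance(p, LoRAAttnProcessor2_0)
    for p in pipe.unet.attn_processors.values()
))
```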
Refactor the x4 upscaler. Co-authored-by: yiyixuxu <yixu310@gmail,com>
Use the correct Prior hub model id; use the new names in `KandinskyPriorPipelineOutput`.
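For reference, a sketch of the prior pipeline with the renamed output fields; the hub id and field names reflect my reading of the commit:

```python
import torch
from diffusers import KandinskyPriorPipeline

pipe_prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
).to("cuda")

out = pipe_prior("a portrait of a red fox, digital art")
image_embeds = out.image_embeds                    # new field name
negative_image_embeds = out.negative_image_embeds  # new field name
```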
Support batched views for the panorama pipeline: add the `view_batch_size` argument with docstrings; add a `view_batch_size` test; fix a batch test and a boundary condition; fix typos; replace the doc entry for `view_batch_size` with a tip. Co-authored-by: Sayak Paul <[email protected]>
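A sketch of the new argument, which denoises several sliding-window views per UNet forward pass (larger values trade memory for speed); the model id is illustrative:

```python
import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

model_id = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

# view_batch_size > 1 batches the panorama views per denoising step.
image = pipe("a photo of the dolomites", view_batch_size=4).images[0]
```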
Add a draft for LoRA text-encoder scale: improve naming; fix the DreamBooth LoRA training script; apply review suggestions to examples/dreambooth/train_dreambooth_lora.py; add the LoRA mixin where it fits. Co-authored-by: Sayak Paul <[email protected]>
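If I read the change correctly, the LoRA `scale` passed via `cross_attention_kwargs` is now also forwarded to the text-encoder LoRA layers; a hedged sketch with a placeholder repo id:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("user/lora-repo")  # placeholder repo id

image = pipe(
    "a prompt in the LoRA style",
    # `scale` weights the LoRA layers; per this change it should apply
    # to the text encoder's LoRA layers as well (my reading of the PR).
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```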
…face#3664): implement an option for rescaling betas to zero terminal SNR and rescaled classifier-free guidance in pipeline_stable_diffusion.py, focusing on DDIM; apply suggestions from Peter Lin and from code review. Co-authored-by: MaxWe00 <[email protected]> Co-authored-by: Patrick von Platen <[email protected]>
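These map to the new `rescale_betas_zero_snr` scheduler flag and the `guidance_rescale` call argument; a sketch (model id illustrative, 0.7 is just a plausible rescale value):

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Rescale the beta schedule so the final timestep reaches zero SNR;
# "trailing" timestep spacing is recommended alongside it.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    rescale_betas_zero_snr=True,
    timestep_spacing="trailing",
)
# guidance_rescale applies the rescaled classifier-free guidance.
image = pipe("a starry night sky", guidance_rescale=0.7).images[0]
```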
[tests] Run slow matrix sequentially.
Fix custom releases; make style.
Fix loading; make style.
howsmyanimeprofilepicture approved these changes on Jun 10, 2023:
Approved.