⚡️ Speed up method `AutoencoderKLWan.clear_cache` by 886% #11665
Conversation
**Key optimizations:**

- Compute the number of `WanCausalConv3d` modules in each model (`encoder`/`decoder`) **only once during initialization** and store the counts in `self._cached_conv_counts`. This removes the repeated tree traversals at every `clear_cache` call, which profiling identified as the main bottleneck.
- The internal helper `_count_conv3d_fast` is optimized via a generator expression with `sum` for efficiency.

All comments from the original code are preserved, except for local docstrings/comments relevant to changed lines, which were updated or removed. **Function signatures and outputs remain unchanged.**
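A minimal sketch of the pattern described above: `WanCausalConv3d`, `_count_conv3d_fast`, and `_cached_conv_counts` come from the PR, while the class skeleton and the cache-state fields reset inside `clear_cache` are illustrative assumptions, not diffusers' exact internals.

```python
import torch.nn as nn


class WanCausalConv3d(nn.Conv3d):
    # Stand-in for the causal 3D convolution used by the Wan VAE;
    # only its type matters for the counting logic below.
    pass


def _count_conv3d_fast(model: nn.Module) -> int:
    # Single traversal of the module tree; the generator expression
    # feeds sum() without materializing an intermediate list.
    return sum(1 for m in model.modules() if isinstance(m, WanCausalConv3d))


class AutoencoderKLWanSketch(nn.Module):
    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        # Counted once here: the module tree is fixed after construction,
        # so every later clear_cache() call can reuse these numbers.
        self._cached_conv_counts = {
            "encoder": _count_conv3d_fast(self.encoder),
            "decoder": _count_conv3d_fast(self.decoder),
        }

    def clear_cache(self):
        # Dictionary lookups replace the per-call tree traversal.
        # The fields below are assumed cache state, not the exact names
        # used in autoencoder_kl_wan.py.
        self._conv_num = self._cached_conv_counts["decoder"]
        self._conv_idx = [0]
        self._enc_conv_num = self._cached_conv_counts["encoder"]
        self._enc_conv_idx = [0]
```

Because `clear_cache` runs on every `encode`/`decode` call while the module tree never changes after construction, moving the count to `__init__` amortizes the traversal cost over the model's lifetime.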
@bot /style
Style fixes have been applied. View the workflow run here.
Thanks, nice changes!
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
for some reason, my actual review comments did not go through 🤔
Co-authored-by: Aryan <[email protected]>
Accepted your code review suggestions. This should be ready to merge.
Saurabh's comments: This is used in every encode and decode call, so speeding it up would be very helpful.
Let me know what feedback you have for me for the next set of optimization PRs. I want to ensure an easy merge experience for you.
📄 886% (8.86x) speedup for `AutoencoderKLWan.clear_cache` in `src/diffusers/models/autoencoders/autoencoder_kl_wan.py`
⏱️ Runtime: 1.60 milliseconds → 162 microseconds (best of 5 runs)
📝 Explanation and details
**Key optimizations:**

- Compute the number of `WanCausalConv3d` modules in each model (`encoder`/`decoder`) only once during initialization and store the counts in `self._cached_conv_counts`. This removes unnecessary repeated tree traversals at every `clear_cache` call, which was the main bottleneck (from profiling).
- The internal helper `_count_conv3d_fast` is optimized via a generator expression with `sum` for efficiency.

All comments from the original code are preserved, except for updated or removed local docstrings/comments relevant to changed lines. Function signatures and outputs remain unchanged.
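A rough way to reproduce this kind of measurement with `timeit`; the toy `nn.Sequential` stacks below stand in for the real encoder/decoder and are assumptions, not the benchmark the report above was produced with.

```python
import timeit

import torch.nn as nn

# Deep toy stacks so the traversal cost is visible; 1x1x1 kernels keep
# construction cheap, since only the module count matters here.
encoder = nn.Sequential(*[nn.Conv3d(1, 1, 1) for _ in range(200)])
decoder = nn.Sequential(*[nn.Conv3d(1, 1, 1) for _ in range(200)])


def traverse():
    # Old behavior: walk both module trees on every clear_cache call.
    return (
        sum(1 for m in encoder.modules() if isinstance(m, nn.Conv3d)),
        sum(1 for m in decoder.modules() if isinstance(m, nn.Conv3d)),
    )


cached = traverse()  # new behavior: computed once, then reused


def lookup():
    return cached


n = 1000
print("per-call traversal:", timeit.timeit(traverse, number=n) / n, "s")
print("cached lookup:     ", timeit.timeit(lookup, number=n) / n, "s")
```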
✅ Correctness verification report:
🌀 Generated Regression Tests Details
To edit these changes, run `git checkout codeflash/optimize-AutoencoderKLWan.clear_cache-mb6bxvte` and push.