Update SAM/SAM HQ attention implementation + fix Cuda sync issues #39386
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
-        if sparse_prompt_embeddings.sum().item() != 0:
+        if sparse_prompt_embeddings is not None:
This was causing a CUDA sync issue and does not seem to be necessary if we set sparse_prompt_embeddings to None when it's not provided in the prompt encoder.
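For context, a minimal sketch (made-up tensor shapes, not the transformers code) of why the old check forces a sync while the new one does not:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
sparse_prompt_embeddings = torch.randn(1, 2, 256, device=device)

# Old pattern: .item() copies a scalar back to the CPU, so the host blocks until
# every queued GPU kernel feeding the sum has finished (the sync seen in the profiler).
if sparse_prompt_embeddings.sum().item() != 0:
    pass

# New pattern: the branch only inspects Python-side metadata, so no GPU readback occurs;
# the prompt encoder now passes None when no sparse prompt is given.
if sparse_prompt_embeddings is not None:
    pass
```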
run-slow: sam, sam_hq
vasqu left a comment
Some initial thoughts, since I worked quite a lot on similar attention refactors myself.
        attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
        if attention_mask is not None:
            attn_weights = attn_weights + attention_mask
            attn_weights = torch.softmax(attn_weights, dim=-1)
The second softmax confuses me tbh (and based on that, I have some doubts that sdpa will match eager well 👀).
Is it really necessary, or could we just use one softmax after the mask has been applied? There is also a dtype difference between the two (fp32 vs. the dtype of the weights). Maybe we could inherit the eager attention from somewhere else, like llama or bart?
You're right, it is weird. I reproduced the eager attention implementation that was there before the refactor (see below) without paying too much attention to it, but it might just be that the attention_mask is never used here, so this path is never taken and didn't cause any issue. I'll check and remove it if that's the case, thanks for pointing it out!
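For reference, a rough sketch of the single-softmax eager path being discussed (llama/bart-style, not the exact SAM code): the mask is added to the raw scores first, and a single fp32 softmax follows.

```python
import torch
from torch import nn

def eager_attention_sketch(query, key, value, attention_mask=None, scaling=None):
    # query/key/value: (batch, num_heads, seq_len, head_dim)
    scaling = query.shape[-1] ** -0.5 if scaling is None else scaling
    attn_weights = torch.matmul(query, key.transpose(-2, -1)) * scaling
    if attention_mask is not None:
        attn_weights = attn_weights + attention_mask  # mask first...
    # ...then a single softmax, computed in fp32 and cast back to the input dtype
    attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
    attn_output = torch.matmul(attn_weights, value)
    return attn_output, attn_weights
```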
        {
            (None, None): [-13.1695, -14.6201, -14.8989],
-           ("cuda", 8): [-13.1668, -14.6182, -14.8970],
+           ("cuda", 8): [-7.6769, -9.6935, -9.8773],
Those differences seem quite high tbh, same as below - was it broken before? Otherwise, I would assume that the refactor didn't work as expected.
Yes, I meant to add a comment for this but forgot. I'm getting the same result on main, so the refactor shouldn't be the issue here; it's probably a previous PR. I'll investigate :)
Thank you @vasqu! Made the changes. Should be ready for a final review @Cyrilvallez :)
vasqu left a comment
LGTM, just one last comment, and ig the tests were broken lol
        attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
        if attention_mask is not None:
            attn_weights = attn_weights + attention_mask
Should be mask then softmax :D
Of course 🤦 Thanks!
Cyrilvallez left a comment
Nice! Always happy to refactor more models! And thanks @vasqu for double-checking! Just unsure whether we should also return the attention_weights in the Attention modules, can you confirm it's not an oversight?
        attn_output, _ = attention_interface(
We never want the attention outputs here?
            vision_hidden_states=vision_outputs.hidden_states if pixel_values is not None else None,
            vision_attentions=vision_outputs.attentions if pixel_values is not None else None,
Looks like we do need them sometimes here, so they should be returned explicitly from the Attention module no?
Added!
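Illustratively (hypothetical toy module and interface, not the actual SAM code), the change amounts to keeping the second value returned by the attention interface instead of discarding it with `_`, so the layer can surface it as attention weights:

```python
import torch
from torch import nn

def toy_attention_interface(module, query, key, value, attention_mask=None):
    # Stand-in for the dispatched attention backend; returns output AND weights.
    scores = torch.matmul(query, key.transpose(-2, -1)) * query.shape[-1] ** -0.5
    if attention_mask is not None:
        scores = scores + attention_mask
    weights = torch.softmax(scores, dim=-1)
    return torch.matmul(weights, value), weights

class ToyAttentionLayer(nn.Module):
    def forward(self, query, key, value, output_attentions=False):
        attn_output, attn_weights = toy_attention_interface(self, query, key, value)
        if output_attentions:
            return attn_output, attn_weights  # surfaced instead of being dropped
        return attn_output, None
```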
[For maintainers] Suggested jobs to run (before merge): run-slow: sam, sam_hq
…ggingface#39386)
* update attention implementation and improve inference speed
* modular sam_hq + fix integration tests on A10
* fixup
* fix after review
* softmax in correct place
* return attn_weights in sam/sam_hq
What does this PR do?
Update the SAM and SAM HQ attention implementation, and fix two lines that were causing unnecessary CUDA syncs (visible when profiling with the torch profiler).
Mostly split from #32317 as that PR is becoming huge.
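As an aside, a hedged example (invented workload, not code from this PR) of how such a sync shows up under torch.profiler: ops like aten::item appear in the trace because the host has to wait for the GPU before reading the scalar back.

```python
import torch
from torch.profiler import profile, ProfilerActivity

device = "cuda" if torch.cuda.is_available() else "cpu"
activities = [ProfilerActivity.CPU] + ([ProfilerActivity.CUDA] if device == "cuda" else [])

x = torch.randn(1024, 1024, device=device)

with profile(activities=activities) as prof:
    _ = x.sum().item()  # device-to-host copy -> synchronization point
    _ = x is not None   # pure Python check, no GPU work

# Look for aten::item / synchronization entries in the profile table.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```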