Conversation

@yonigozlan
Member

@yonigozlan yonigozlan commented Jul 13, 2025

What does this PR do?

Update the SAM and SAM HQ attention implementations, and fix two lines that were causing unnecessary CUDA syncs when profiling with the torch profiler.
Mostly split from #32317, as that PR is becoming huge.
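A minimal, standalone sketch of how such syncs show up with the torch profiler; the tensor work below is a toy stand-in for the model, not code from this PR:

import torch
from torch.profiler import ProfilerActivity, profile

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1024, 1024, device=device)
activities = [ProfilerActivity.CPU] + ([ProfilerActivity.CUDA] if torch.cuda.is_available() else [])

with profile(activities=activities) as prof:
    y = x @ x
    # Host-side reads like .item() block until the GPU catches up; they typically
    # appear as Memcpy DtoH / stream synchronization events in the trace.
    _ = y.sum().item()

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=15))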

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@yonigozlan yonigozlan changed the title from "Update SAM attention implementation + fix Cuda sync issues" to "Update SAM/SAM HQ attention implementation + fix Cuda sync issues" Jul 13, 2025
Comment on lines -509 to +500
if sparse_prompt_embeddings.sum().item() != 0:
if sparse_prompt_embeddings is not None:
Member Author

This was causing a CUDA sync issue, and it does not seem to be necessary if we set sparse_prompt_embeddings to None when it's not provided to the prompt encoder.
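As a rough, standalone illustration of why the old check syncs and the new one does not (shapes and the tensor itself are made up, not taken from the model):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
# Hypothetical stand-in for the embeddings; the real ones come from the prompt encoder.
sparse_prompt_embeddings = torch.randn(1, 2, 256, device=device)

# Old check: .item() copies a scalar back to the CPU, which blocks the host
# until all queued CUDA work has finished (the unwanted sync).
if sparse_prompt_embeddings.sum().item() != 0:
    pass

# New check: a None comparison stays on the host and queues no GPU work,
# assuming the prompt encoder now passes None when no sparse prompts are given.
if sparse_prompt_embeddings is not None:
    pass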

@yonigozlan
Member Author

run-slow: sam, sam_hq

Contributor

@vasqu vasqu left a comment

Some initial thoughts, since I have worked quite a lot on similar attention refactors myself.

Comment on lines 192 to 195
attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
if attention_mask is not None:
attn_weights = attn_weights + attention_mask
attn_weights = torch.softmax(attn_weights, dim=-1)
Contributor

The second softmax confuses me, tbh (and because of it I have some doubts that sdpa will match eager well 👀).

Is it really necessary, or could we just use a single softmax after the mask has been applied? There is also a dtype difference between the two (fp32 vs the dtype of the weights). Maybe we could inherit the eager attention from somewhere else, like llama or bart?

Member Author

You're right, it is weird. I reproduced the eager attention implementation that was there before the refactor (see below) without paying too much attention to it, but it might just be that attention_mask is never used here, so this path is never taken and never caused any issue. I'll check and remove it if that's the case, thanks for pointing it out!
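To illustrate the point with a toy example (tensors are made up; this is not the module code): with attention_mask left as None, the questioned branch is never executed, so the double softmax goes unnoticed.

import torch
from torch import nn

query = torch.randn(1, 8, 16, 32)         # (batch, heads, seq, head_dim)
attn_weights = torch.randn(1, 8, 16, 16)  # raw attention scores
attention_mask = None                     # apparently never set on this path

# Reproduced (questioned) order: softmax first, then mask + a second softmax.
probs = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
if attention_mask is not None:
    probs = probs + attention_mask
    probs = torch.softmax(probs, dim=-1)

# Single-softmax alternative suggested above: mask the raw scores, softmax once.
scores = attn_weights if attention_mask is None else attn_weights + attention_mask
probs_single = nn.functional.softmax(scores, dim=-1, dtype=torch.float32).to(query.dtype)

# With attention_mask=None both variants match exactly, which is why the extra
# branch never caused a visible issue.
torch.testing.assert_close(probs, probs_single)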

{
(None, None): [-13.1695, -14.6201, -14.8989],
("cuda", 8): [-13.1668, -14.6182, -14.8970],
("cuda", 8): [-7.6769, -9.6935, -9.8773],
Contributor

Those differences seem quite high, tbh (same as below). Was it broken before? Otherwise, I would assume the refactor didn't work as expected.

Member Author

Yes, I meant to add a comment about this but forgot. I'm getting the same result on main, so the refactor shouldn't be the issue here; it was probably introduced by a previous PR. I'll investigate :)

@yonigozlan
Member Author

Thank you @vasqu! I made the changes. It should be ready for a final review, @Cyrilvallez :)

@yonigozlan yonigozlan requested a review from Cyrilvallez July 14, 2025 15:26
Contributor

@vasqu vasqu left a comment

LGTM, just one last comment, and I guess the tests were indeed broken lol

Comment on lines 192 to 194
attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
if attention_mask is not None:
attn_weights = attn_weights + attention_mask
Contributor

Should be mask then softmax :D

Member Author

Of course 🤦 Thanks!
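For the record, a minimal sketch of the corrected ordering as a standalone toy function (scaling, dropout, and the exact signature are assumptions, not copied from the merged code):

import torch
from torch import nn

def eager_attention_sketch(query, key, value, attention_mask=None):
    # Toy eager path: add the mask to the raw scores first, then a single fp32 softmax.
    scaling = query.shape[-1] ** -0.5
    attn_weights = torch.matmul(query, key.transpose(-2, -1)) * scaling
    if attention_mask is not None:
        attn_weights = attn_weights + attention_mask
    attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
    attn_output = torch.matmul(attn_weights, value)
    return attn_output, attn_weights

q = k = v = torch.randn(1, 8, 16, 32)
out, weights = eager_attention_sketch(q, k, v)
print(out.shape, weights.shape)  # torch.Size([1, 8, 16, 32]) torch.Size([1, 8, 16, 16])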

Member

@Cyrilvallez Cyrilvallez left a comment

Nice! Always happy to refactor more models! And thanks @vasqu for double-checking! I'm just unsure whether we should return the attention_weights as well in the Attention modules; can you confirm it's not an oversight?

Comment on lines 262 to 263

attn_output, _ = attention_interface(
Member

We never want the attention outputs here?

Comment on lines +1501 to +1502
vision_hidden_states=vision_outputs.hidden_states if pixel_values is not None else None,
vision_attentions=vision_outputs.attentions if pixel_values is not None else None,
Member

Looks like we do need them sometimes here, so they should be returned explicitly from the Attention module, no?

Member Author

Added!
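For illustration, a self-contained toy module (hypothetical names, not the SAM code) showing the resulting convention: the attention forward returns the weights together with the output instead of discarding them with `attn_output, _ = ...`, so a wrapper model can surface them, e.g. as vision_attentions.

import torch
from torch import nn

class TinyAttentionSketch(nn.Module):
    """Hypothetical toy attention; only the return convention mirrors the PR."""

    def __init__(self, hidden_size=32, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        self.qkv = nn.Linear(hidden_size, 3 * hidden_size)
        self.out_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states):
        batch, seq, _ = hidden_states.shape
        q, k, v = self.qkv(hidden_states).chunk(3, dim=-1)
        q = q.view(batch, seq, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(batch, seq, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(batch, seq, self.num_heads, self.head_dim).transpose(1, 2)
        attn_weights = torch.softmax(q @ k.transpose(-2, -1) * self.head_dim**-0.5, dim=-1)
        attn_output = (attn_weights @ v).transpose(1, 2).reshape(batch, seq, -1)
        # Return the weights too, instead of dropping them, so callers that need
        # attentions in their outputs can pass them through.
        return self.out_proj(attn_output), attn_weights

attn = TinyAttentionSketch()
out, weights = attn(torch.randn(2, 16, 32))
print(out.shape, weights.shape)  # torch.Size([2, 16, 32]) torch.Size([2, 4, 16, 16])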

@github-actions
Contributor

[For maintainers] Suggested jobs to run (before merge)

run-slow: sam, sam_hq

@yonigozlan yonigozlan merged commit 433d2a2 into huggingface:main Jul 18, 2025
19 checks passed
zucchini-nlp pushed a commit to zucchini-nlp/transformers that referenced this pull request Jul 22, 2025
…ggingface#39386)

* update attention implementation and improve inference speed

* modular sam_hq + fix integration tests on A10

* fixup

* fix after review

* softmax in correct place

* return attn_weights in sam/sam_hq
zaristei pushed a commit to zaristei/transformers that referenced this pull request Sep 9, 2025
…ggingface#39386)

* update attention implementation and improve inference speed

* modular sam_hq + fix integration tests on A10

* fixup

* fix after review

* softmax in correct place

* return attn_weights in sam/sam_hq