Support gqa in aten spda #2408
base: main
Conversation
Signed-off-by: Justin Chu <[email protected]>
❌ 6 Tests Failed:
To view more test analytics, go to the Test Analytics Dashboard.
return _aten_scaled_dot_product_attention_bool_mask_onnx(
    query, key, value, attn_mask, scale, dropout_p, enable_gqa=enable_gqa
)
Check failure (Code scanning / CodeQL): Wrong name for an argument in a call

Copilot Autofix (AI, 5 days ago)
To fix the issue, the keyword argument enable_gqa should be removed from the call to _aten_scaled_dot_product_attention_bool_mask_onnx on line 1994. This ensures that the function is called with only the parameters it supports. Removing enable_gqa will not affect the functionality of _aten_scaled_dot_product_attention_bool_mask_onnx, as it does not use this argument.
Suggested change (modified line R1995):

@@ -1994,3 +1994,3 @@
 return _aten_scaled_dot_product_attention_bool_mask_onnx(
-    query, key, value, attn_mask, scale, dropout_p, enable_gqa=enable_gqa
+    query, key, value, attn_mask, scale, dropout_p
 )
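For context, GQA (grouped-query attention) means the query has more heads than key/value, with each key/value head shared by a group of query heads. A common way to lower this to ordinary attention is to repeat the key/value heads until the counts match, after which the mask-based attention math needs no GQA-specific handling. The sketch below illustrates that expansion in plain PyTorch; the function name and the [batch, heads, seq, head_dim] layout are assumptions for illustration, not the torchlib helper's actual implementation.

```python
import torch


def _expand_kv_heads_for_gqa(
    query: torch.Tensor, key: torch.Tensor, value: torch.Tensor
) -> tuple[torch.Tensor, torch.Tensor]:
    """Repeat key/value heads so their head count matches the query's.

    Assumes a [batch, num_heads, seq_len, head_dim] layout. Illustrative
    sketch only, not the torchlib implementation.
    """
    q_heads, kv_heads = query.shape[1], key.shape[1]
    if q_heads == kv_heads:
        return key, value
    assert q_heads % kv_heads == 0, "query heads must be a multiple of kv heads"
    repeats = q_heads // kv_heads
    # repeat_interleave keeps each group of query heads aligned with the
    # key/value head it shares.
    return (
        key.repeat_interleave(repeats, dim=1),
        value.repeat_interleave(repeats, dim=1),
    )
```

Once the key and value tensors are expanded this way, they can be passed to the existing mask-based attention helper unchanged, which is one reason the helper itself may not need an enable_gqa parameter at all.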
Fix pytorch/pytorch#151762
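For reference, the PR title points at GQA support for aten scaled_dot_product_attention. Below is a minimal repro-style sketch of the eager-mode call shape involved, assuming PyTorch 2.5 or newer, where torch.nn.functional.scaled_dot_product_attention accepts enable_gqa; the tensor shapes are illustrative only.

```python
import torch
import torch.nn.functional as F

# Illustrative GQA shapes: 8 query heads sharing 2 key/value heads.
query = torch.randn(1, 8, 16, 64)  # [batch, q_heads, seq_len, head_dim]
key = torch.randn(1, 2, 16, 64)    # [batch, kv_heads, seq_len, head_dim]
value = torch.randn(1, 2, 16, 64)

# enable_gqa allows the differing head counts; exporting this pattern
# to ONNX is what this PR targets.
out = F.scaled_dot_product_attention(query, key, value, enable_gqa=True)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```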