Commit 7768afb
Update flash_attention_patch.py
To be compatible with the new change in the Transformers library, where a new argument 'padding_mask' was added to the forward function of the attention layer.
huggingface/transformers#25598
1 parent 611a5a8 commit 7768afb
File tree
applications/Colossal-LLaMA-2/colossal_llama2/utils/flash_attention_patch.py

1 file changed: 1 addition & 0 deletions
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| 65 | 65 | context (unchanged) |
| 66 | 66 | context (unchanged) |
| 67 | 67 | context (unchanged) |
|    | 68 | + added line (content not shown in this extract) |
| 68 | 69 | context (unchanged) |
| 69 | 70 | context (unchanged) |
| 70 | 71 | context (unchanged) |
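For context, here is a minimal sketch of the kind of one-line change the diff records: the patched attention forward gains a `padding_mask` keyword argument so its signature matches what newer Transformers versions pass to the attention layer. The function name `attention_forward` and the surrounding parameters below are illustrative assumptions, not the exact contents of `flash_attention_patch.py`.

```python
# Illustrative sketch only: the signature below is an assumption about how a
# monkey-patched LLaMA attention forward might look, not the actual patch code.
from typing import Optional, Tuple

import torch


def attention_forward(
    self,
    hidden_states: torch.Tensor,
    attention_mask: Optional[torch.Tensor] = None,
    position_ids: Optional[torch.LongTensor] = None,
    past_key_value: Optional[Tuple[torch.Tensor]] = None,
    output_attentions: bool = False,
    use_cache: bool = False,
    # The single added line: accept the new keyword so the patched function
    # matches the argument list newer Transformers versions pass in.
    padding_mask: Optional[torch.LongTensor] = None,
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
    # A flash-attention patch typically derives padding information from
    # attention_mask, so padding_mask may only need to be accepted, not used.
    ...
```

If the patch already handles padding via `attention_mask`, simply accepting the extra keyword is enough to keep the monkey-patched forward call-compatible with the updated Transformers attention layer.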