Pull Request resolved: #10623
Pull attention creation out of Transformer/TransformerBlock. Instead, pass the layers into Transformer.
The motivation is to customize the linear layers in attention for LoRA (e.g. make wq a LoraLinear instead of a regular linear). In the next diff (D73517350), we pull wq, wk, wv, wo out of the attention and pass those in as well.
This allows us to customize attention parameters without passing in ModelArgs and doing the customization deep inside attention.py.
I think this modularizes our attention/transformer components, though it also means users have to do a bit more work to construct the attention layers and pass them to the Transformer.
It follows the torchtune structure more closely, e.g. https://github.com/pytorch/torchtune/blob/main/torchtune/models/llama3_2/_component_builders.py#L221
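A minimal sketch of the construction pattern this enables, using simplified stand-ins: the class names (AttentionMHA, TransformerBlock, Transformer) mirror the model code this PR touches, but the constructor signatures here are assumptions for illustration, not the exact ExecuTorch API. The point is that attention is built by the caller and passed in, so a LoRA variant can swap in custom linears without touching attention.py.

```python
import torch
from torch import nn


class AttentionMHA(nn.Module):
    """Simplified multi-head attention stand-in."""

    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        # These are the projections a LoRA builder would replace (next diff
        # passes wq/wk/wv/wo in directly instead of constructing them here).
        self.wq = nn.Linear(dim, dim, bias=False)
        self.wk = nn.Linear(dim, dim, bias=False)
        self.wv = nn.Linear(dim, dim, bias=False)
        self.wo = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bsz, seqlen, dim = x.shape
        q = self.wq(x).view(bsz, seqlen, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.wk(x).view(bsz, seqlen, self.n_heads, self.head_dim).transpose(1, 2)
        v = self.wv(x).view(bsz, seqlen, self.n_heads, self.head_dim).transpose(1, 2)
        out = nn.functional.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.wo(out.transpose(1, 2).reshape(bsz, seqlen, dim))


class TransformerBlock(nn.Module):
    """The block no longer constructs its own attention; it receives one."""

    def __init__(self, attention: nn.Module, dim: int, hidden_dim: int):
        super().__init__()
        self.attention = attention
        self.ffn = nn.Sequential(
            nn.Linear(dim, hidden_dim, bias=False),
            nn.SiLU(),
            nn.Linear(hidden_dim, dim, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.attention(x)
        return x + self.ffn(x)


class Transformer(nn.Module):
    """Likewise, Transformer takes pre-built layers instead of ModelArgs."""

    def __init__(self, layers: nn.ModuleList):
        super().__init__()
        self.layers = layers

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return x


# Caller-side construction, in the spirit of torchtune's component builders:
# attention is created outside the Transformer and passed in.
dim, n_heads, hidden_dim, n_layers = 64, 4, 256, 2
layers = nn.ModuleList(
    TransformerBlock(AttentionMHA(dim, n_heads), dim, hidden_dim)
    for _ in range(n_layers)
)
model = Transformer(layers)
out = model(torch.randn(1, 8, dim))
```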
Previously here: D73474110
ghstack-source-id: 282118266
@exported-using-ghexport
Differential Revision: [D73538697](https://our.internmc.facebook.com/intern/diff/D73538697/)