Open
Labels
Compile / AOTI, enhancement, triaged
Description
🚀 The feature, motivation and pitch
Work is needed to enable the following export command:
```
python3 torchchat.py export llama3.2-11B --output-aoti-package-path exportedModels/llama3_2_artifacts.pt2
```
Features to be added:
- Run AOTI export on the preprocess, vision encoder, and text decoder separately (see the sketch after this list).
- This requires depending on torchtune's exportable modules and performing the module transformations they need.
- Package the three compiled components into a single .pt2 file.
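To make the ask concrete, here is a minimal sketch of what the per-component flow could look like, assuming torch.export plus the AOT Inductor packaging APIs (torch._inductor.aot_compile with the aot_inductor.package option, and torch._inductor.package.package_aoti for multi-model archives). The toy modules, input shapes, and entry names are placeholders rather than torchchat or torchtune code, and the exact packaging signatures differ between recent PyTorch releases.

```python
# Hypothetical sketch only: the toy modules below stand in for torchtune's
# exportable preprocess / vision-encoder / text-decoder modules, and the AOTI
# packaging calls are illustrative; their signatures have shifted across
# recent PyTorch releases.
import torch
from torch._inductor import aot_compile
from torch._inductor.package import package_aoti


class ToyPreprocess(torch.nn.Module):
    """Stand-in for image preprocessing: normalize pixel values."""

    def forward(self, images):
        return (images - 0.5) / 0.5


class ToyVisionEncoder(torch.nn.Module):
    """Stand-in for the vision encoder: patchify images into embeddings."""

    def __init__(self):
        super().__init__()
        self.patch = torch.nn.Conv2d(3, 64, kernel_size=16, stride=16)

    def forward(self, images):
        return self.patch(images).flatten(2).transpose(1, 2)


class ToyTextDecoder(torch.nn.Module):
    """Stand-in for the text decoder: embed tokens and project to logits."""

    def __init__(self, vocab_size=128, dim=64):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, dim)
        self.out = torch.nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        return self.out(self.embed(tokens))


def aoti_export(module, example_inputs):
    """torch.export one component, then compile it with AOT Inductor so its
    generated files can later be bundled into a .pt2 archive."""
    ep = torch.export.export(module, example_inputs)
    return aot_compile(
        ep.module(),
        example_inputs,
        options={"aot_inductor.package": True},  # emit packageable artifacts
    )


# Compile each component separately, then bundle everything into one .pt2 file
# keyed by component name (matching --output-aoti-package-path above).
artifacts = {
    "preprocess": aoti_export(ToyPreprocess(), (torch.randn(1, 3, 224, 224),)),
    "vision_encoder": aoti_export(ToyVisionEncoder(), (torch.randn(1, 3, 224, 224),)),
    "text_decoder": aoti_export(ToyTextDecoder(), (torch.randint(0, 128, (1, 16)),)),
}
package_aoti("exportedModels/llama3_2_artifacts.pt2", artifacts)
```

Keying the archive entries by component name would let a runner pull the preprocess, vision encoder, and text decoder back out of the single file requested via --output-aoti-package-path.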
Alternatives
No response
Additional context
No response
RFC (Optional)
No response