Extend optimize_for_ort to cover passes #2274


Open
titaiwangms wants to merge 4 commits into main

Conversation

@titaiwangms (Contributor) commented on May 5, 2025

Fixes #2261.

This is a draft for discussion. We should cover all of the post-processing passes that shipping a model needs.
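
For reference, a rough sketch of how the extended entry point might be used once the post-processing passes are folded in. The import path follows onnxscript/rewriter/ort_fusions/_core.py from the coverage report below, but treat the exact names and the in-place-versus-copy behavior as assumptions, not a settled API:

# Sketch only: names and paths are assumptions based on this draft, not a final API.
import onnxscript.ir as ir
from onnxscript.rewriter.ort_fusions import optimize_for_ort

model = ir.load("model.onnx")   # exported model, loaded into the onnxscript IR
optimize_for_ort(model)         # ORT fusions plus the post-processing passes added here
# Whether the call above mutates `model` in place or should instead be written as
# `model = optimize_for_ort(model)` is exactly what the review thread below discusses.
ir.save(model, "model_optimized.onnx")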

codecov bot commented on May 5, 2025

Codecov Report

Attention: Patch coverage is 33.33333% with 4 lines in your changes missing coverage. Please review.

Project coverage is 73.77%. Comparing base (ac87a1c) to head (aa1e4a3).

Files with missing lines                    Patch %   Lines
onnxscript/rewriter/ort_fusions/_core.py    33.33%    4 Missing ⚠️

Additional details and impacted files:
@@            Coverage Diff             @@
##             main    #2274      +/-   ##
==========================================
- Coverage   73.78%   73.77%   -0.01%     
==========================================
  Files         235      235              
  Lines       30936    30939       +3     
  Branches     3494     3494              
==========================================
  Hits        22825    22825              
- Misses       6911     6914       +3     
  Partials     1200     1200              

☔ View full report in Codecov by Sentry.

@gramalingam (Collaborator) commented:

Please also consider whether this method should optimize in place or not. I think we can make it in place now that shape inference itself is in place.

Comment on lines 131 to 137
# Apply the ORT pattern rewrite rules.
rewrite(model, ORT_PATTERN_REWRITE_RULES)

# TODO(exporter team): Fold transpose into initializers
# Apply the ORT optimization passes.
# https://github.com/microsoft/onnxruntime/blob/74dcf7e296639095dfa55d31336998b6f719ed76/onnxruntime/python/tools/transformers/dynamo_onnx_helper.py#L172
common_passes.ClearMetadataAndDocStringPass()(model)

Collaborator:

You may put all the passes into a pass manager like we do in optimize()
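
Something roughly like the sketch below, assuming the ir.passes.PassManager and the common passes already referenced in this PR (the import paths and return value are assumptions, not the exact code in _core.py):

# Sketch of the suggestion; import paths and the PassResult return value are assumptions.
from onnxscript import ir
from onnxscript.ir import passes as ir_passes
from onnxscript.ir.passes import common as common_passes

model = ir.load("model.onnx")  # any exported model to post-process

post_processing = ir_passes.PassManager(
    [
        # Strip node/graph metadata and doc strings before shipping the model.
        common_passes.ClearMetadataAndDocStringPass(),
        # Lift Constant nodes into initializers (parameters as used later in this PR).
        common_passes.LiftConstantsToInitializersPass(
            lift_all_constants=False, size_limit=1
        ),
    ]
)
result = post_processing(model)  # assumed to return a PassResult; result.model is the processed model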

@titaiwangms (Author):
Done

@justinchuby (Collaborator) commented:

> Please also consider whether this method should optimize in place or not. I think we can make it in place now that shape inference itself is in place.

I think making it out-of-place is safer, in case we have passes in the future that need to be functional?
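
To make the trade-off concrete, a small sketch contrasting the two contracts. The pass used here is only a representative example, and the deep copy stands in for whatever model cloning the IR provides; none of these names are a proposal for this PR:

# Sketch only: contrasts the in-place and out-of-place (functional) calling conventions.
import copy

from onnxscript import ir
from onnxscript.ir.passes import common as common_passes

def optimize_in_place(model: ir.Model) -> None:
    # Mutates the caller's model: cheap, but the original graph is gone afterwards.
    common_passes.ClearMetadataAndDocStringPass()(model)

def optimize_out_of_place(model: ir.Model) -> ir.Model:
    # Leaves the input untouched and returns a new model: safer if some future pass
    # has to be functional (i.e. must not modify its input).
    new_model = copy.deepcopy(model)  # stand-in for whatever cloning the IR provides
    common_passes.ClearMetadataAndDocStringPass()(new_model)
    return new_model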

# https://github.com/microsoft/onnxruntime/blob/74dcf7e296639095dfa55d31336998b6f719ed76/onnxruntime/python/tools/transformers/dynamo_onnx_helper.py#L172
common_passes.ClearMetadataAndDocStringPass(),
# https://github.com/microsoft/onnxruntime/blob/74dcf7e296639095dfa55d31336998b6f719ed76/onnxruntime/python/tools/transformers/dynamo_onnx_helper.py#L139
common_passes.LiftConstantsToInitializersPass(lift_all_constants=False, size_limit=1),

@titaiwangms (Author):
We have another pass called LiftSubgraphInitializersToMainGraphPass. Do we know if it's needed in genAI? @kunal-vaishnavi

Reply:

If the pass logic is in DynamoOnnxHelper, then it is used for ONNX Runtime GenAI.

Collaborator:

We don't really produce graphs with subgraph initializers. I think we are ok either way

Development

Successfully merging this pull request may close this issue:

Higher level API for post-processing/optimization
4 participants