Conversation

@xadupre (Member) commented Aug 6, 2025

Similar to #2464. It does not support all cases yet; we can add them in follow-up PRs.

codecov bot commented Aug 6, 2025

Codecov Report

❌ Patch coverage is 8.88889% with 41 lines in your changes missing coverage. Please review.
✅ Project coverage is 69.90%. Comparing base (b2d94fe) to head (7043030).
⚠️ Report is 2 commits behind head on main.

Files with missing lines | Patch % | Lines
onnxscript/function_libs/torch_lib/ops/core.py | 8.88% | 41 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2477      +/-   ##
==========================================
- Coverage   70.00%   69.90%   -0.11%     
==========================================
  Files         215      215              
  Lines       25992    26035      +43     
  Branches     2606     2614       +8     
==========================================
+ Hits        18196    18199       +3     
- Misses       6896     6936      +40     
  Partials      900      900              

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

Copilot AI (Contributor) left a comment
Pull Request Overview

This PR implements the repeat_interleave operation for the torch library, adding support for both scalar and tensor variants. The implementation handles different cases where the repeat count is either an integer or a tensor, with optional dimension specification.

  • Adds two new functions: aten_repeat_interleave_int for scalar repeats and aten_repeat_interleave_Tensor for tensor repeats
  • Includes comprehensive end-to-end tests covering integer repeats, tensor repeats, and tensor repeats with no dimension specified
  • Comments out the TorchLibOpInfo entries temporarily with a note about splitting into two cases
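As context for the two variants described above, the semantics being implemented can be sketched in plain Python. This is a hedged illustration of torch.repeat_interleave's behavior on 1-D inputs, not the PR's ONNX decomposition; the function name is hypothetical:

```python
def repeat_interleave(values, repeats):
    """Sketch of torch.repeat_interleave semantics on a 1-D list.

    `repeats` may be an int (the scalar variant: every element is
    repeated the same number of times) or a per-element list (the
    tensor variant), mirroring the two overloads the PR adds.
    """
    if isinstance(repeats, int):
        repeats = [repeats] * len(values)
    out = []
    for value, count in zip(values, repeats):
        out.extend([value] * count)
    return out
```

For example, `repeat_interleave([1, 2, 3], 2)` yields `[1, 1, 2, 2, 3, 3]`, while the tensor variant `repeat_interleave([10, 20], [1, 3])` yields `[10, 20, 20, 20]`.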

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.

File | Description
onnxscript/function_libs/torch_lib/ops/core.py | Implements the core repeat_interleave functionality with two variants for scalar and tensor inputs
tests/function_libs/torch_lib/e2e_ops_tests.py | Adds three end-to-end test cases covering different repeat_interleave scenarios
tests/function_libs/torch_lib/ops_test_data.py | Comments out TorchLibOpInfo entries with an explanation about splitting into separate cases

@justinchuby (Collaborator)

Also: Quantco/ndonnx#161 (comment)

Signed-off-by: xadupre <[email protected]>
torch.arange(4, dtype=torch.float32).reshape((2, 2)),
torch.tensor([1, 2, 3, 2], dtype=torch.int64),
)
onnx_program = torch.onnx.export(

Check warning (Code scanning / CodeQL): Variable defined multiple times (Warning, test). This assignment to 'onnx_program' is unnecessary as it is redefined before this value is used.
@justinchuby (Collaborator)

I think we can simplify the index computation logic, as previous comments suggested.

@justinchuby (Collaborator) commented Aug 26, 2025

And then Expand can be leveraged to simplify the graph

final_shape = op.Concat(
op.Shape(self, start=0, end=dim),
op.Constant(value_ints=[-1]),
op.Shape(self, start=dim + 1),
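The Expand-then-Reshape idea can be sketched outside ONNX. Below is a hedged plain-Python illustration, not the PR's code: the flat-list tensor representation and the function name are assumptions for this example. Repeating each contiguous slice below the axis `repeats` times is what Unsqueeze followed by Expand achieves, and `final_shape` mirrors the Concat of Shape slices in the quoted snippet:

```python
def expand_repeat_interleave(flat, shape, dim, repeats):
    """Expand-based repeat_interleave with a scalar repeat count.

    `flat` is the tensor's data in row-major order and `shape` its
    shape (an illustrative representation, not ONNX tensors).
    """
    pos_dim = dim % len(shape)  # normalize so dim == -1 is safe
    # Size of one slice below the repeat axis; emitting each slice
    # `repeats` times is the Unsqueeze + Expand step.
    inner = 1
    for size in shape[pos_dim + 1:]:
        inner *= size
    out = []
    for start in range(0, len(flat), inner):
        out.extend(flat[start:start + inner] * repeats)
    # Mirrors Concat(Shape[:pos_dim], [-1], Shape[pos_dim + 1:]);
    # the -1 resolves to shape[pos_dim] * repeats.
    final_shape = shape[:pos_dim] + (shape[pos_dim] * repeats,) + shape[pos_dim + 1:]
    return out, final_shape
```

For a 2x2 input `[0, 1, 2, 3]`, repeating twice along dim 0 gives `[0, 1, 0, 1, 2, 3, 2, 3]` with shape `(4, 2)`, and along dim -1 gives `[0, 0, 1, 1, 2, 2, 3, 3]` with shape `(2, 4)`.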
Collaborator left a comment:

I suggest using pos_dim instead of dim; otherwise, dim + 1 can cause problems when dim == -1.
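The normalization this comment asks for is just a modulo by the rank. A minimal sketch, with the function name assumed for illustration:

```python
def normalize_dim(dim, rank):
    # Map a possibly negative axis into [0, rank), so that
    # pos_dim + 1 is always valid: dim == -1 maps to rank - 1.
    return dim % rank
```

With rank 2, `normalize_dim(-1, 2)` is 1 and `normalize_dim(-2, 2)` is 0, so `pos_dim + 1` never walks off the end of the shape.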

Collaborator left a comment:

It would be good to add test cases for negative dim, including -1.

@xadupre (Member, Author) replied:

done

@xadupre xadupre enabled auto-merge (squash) August 30, 2025 08:47
if dim is None:
# flatten
self = op.Reshape(self, [-1])
rk = 1
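For the `dim is None` branch quoted above, the flatten-first semantics can be sketched in plain Python (a hedged illustration for 2-D inputs; the function name is hypothetical):

```python
def repeat_interleave_no_dim(nested, repeats):
    """When dim is None, torch.repeat_interleave first flattens the
    input (the op.Reshape(self, [-1]) above), then repeats each
    element of the 1-D result `repeats` times."""
    flat = [value for row in nested for value in row]
    out = []
    for value in flat:
        out.extend([value] * repeats)
    return out
```

For example, `repeat_interleave_no_dim([[1, 2], [3, 4]], 2)` gives `[1, 1, 2, 2, 3, 3, 4, 4]`: the rank-1 flattened view is what makes a single repeat axis sufficient.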
Collaborator left a comment:

Suggested change:
- rk = 1
+ rank = 1

@xadupre (Member, Author) replied:

done

@xadupre xadupre merged commit 8974f5e into microsoft:main Sep 2, 2025
32 checks passed
@github-project-automation github-project-automation bot moved this from Todo to Done in ONNX Script Review Board Sep 2, 2025