
Add Op (_upsample_bilinear2d_aa, _upsample_bicubic2d_aa) | feat(torchlib) #1259


Closed
wants to merge 5 commits

Conversation

xiaowuhu
Copy link
Contributor

@xiaowuhu xiaowuhu commented Jan 26, 2024

It seems that the antialias method differs between ONNX and PyTorch, so we compare only the output shape instead of the values.

Below is the difference between ONNX and PyTorch:

# ONNX
import numpy as np
from onnxscript.function_libs.torch_lib.ops.nn import aten__upsample_bicubic2d_aa

self = np.array([[[[2,1,1,1],
                   [1,1,1,1],
                   [1,1,1,1],
                   [1,1,1,1]]]]).astype(np.float32)
print(self.shape)
output_size = np.array([1,1]).astype(np.int64)
align_corners = True
r = aten__upsample_bicubic2d_aa(self, output_size, align_corners)
print(r)

ONNX output = [[[[1.390625]]]]

# PyTorch
import torch as t
r = t.ops.aten._upsample_bicubic2d_aa(t.tensor(self), t.tensor(output_size), align_corners)
print(r)

Torch output = tensor([[[[2.2656]]]])

I also tried some other parameter combinations, but none of them matched the Torch output.
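Given the mismatch above, the shape-only comparison this PR adopts can be sketched as follows, using the two outputs shown above (the values are the ones from the ONNX and PyTorch runs; the check itself is a simplified illustration, not the actual test-harness code):

```python
import numpy as np

# Outputs from the two runs above: same shape, different values.
onnx_out = np.array([[[[1.390625]]]], dtype=np.float32)
torch_out = np.array([[[[2.2656]]]], dtype=np.float32)

# Shape-only comparison passes...
shapes_match = onnx_out.shape == torch_out.shape

# ...while an exact-value comparison would fail.
values_match = np.allclose(onnx_out, torch_out)

print(shapes_match, values_match)
```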

@xiaowuhu xiaowuhu changed the title AddOp(_upsample_bilinear2d_aa, _upsample_bicubic2d_aa) | feat(TorchLib) Add Op (_upsample_bilinear2d_aa, _upsample_bicubic2d_aa) | feat(torchlib) Jan 26, 2024
Copy link

codecov bot commented Jan 26, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (ce3eb4a) 78.68% compared to head (778b799) 78.85%.
Report is 8 commits behind head on main.

❗ Current head 778b799 differs from the pull request's most recent head cf4f4af. Consider uploading reports for commit cf4f4af to get more accurate results.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1259      +/-   ##
==========================================
+ Coverage   78.68%   78.85%   +0.17%     
==========================================
  Files         119      119              
  Lines       15762    15700      -62     
  Branches     2486     2481       -5     
==========================================
- Hits        12403    12381      -22     
+ Misses       2950     2911      -39     
+ Partials      409      408       -1     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

Copy link

github-actions bot commented Jan 26, 2024

Test Results

     24 files  ±     0      24 suites  ±0   1h 41m 53s ⏱️ + 11m 9s
 11 405 tests +     6   8 439 ✅ +    4    2 952 💤 ±     0   14 ❌ +2 
274 768 runs  +16 882  63 102 ✅ +4 302  211 460 💤 +12 578  206 ❌ +2 

For more details on these failures, see this check.

Results for commit cf4f4af. ± Comparison against base commit 457e52e.

This pull request removes 29 and adds 35 tests. Note that renamed tests count towards both.
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_327_aten_upsample_bilinear2d_vec
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_328_aten_upsample_bicubic2d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_329_aten_upsample_bicubic2d_vec
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_330_aten_upsample_linear1d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_331_aten_upsample_nearest1d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_332_aten_upsample_nearest2d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_333_aten_upsample_nearest3d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_334_aten_upsample_trilinear3d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_335_aten_ones_like
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_336_aten_roll
…
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_327_aten__upsample_bilinear2d_aa
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_328_aten_upsample_bilinear2d_vec
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_329_aten_upsample_bicubic2d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_330_aten_upsample_bicubic2d_vec
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_331_aten__upsample_bicubic2d_aa
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_332_aten_upsample_linear1d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_333_aten_upsample_nearest1d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_334_aten_upsample_nearest2d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_335_aten_upsample_nearest3d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_336_aten_upsample_trilinear3d
…

♻️ This comment has been updated with latest results.

@justinchuby
Copy link
Collaborator

It would be better if we match values because the values should be deterministic. Do we know how PyTorch does it?

@xiaowuhu
Copy link
Contributor Author

> It would be better if we match values because the values should be deterministic. Do we know how PyTorch does it?

Please see the description of this PR. I added a comparison between ONNX and PyTorch.

@justinchuby
Copy link
Collaborator

Would it be helpful to consult the PyTorch implementation? I suspect we need additional processing to implement antialiasing.

@justinchuby
Copy link
Collaborator

From our discussion: understanding the PyTorch implementation proved to be harder than anticipated (https://github.com/pytorch/pytorch/blob/bcf35c6ae62bb6560befa3550e37a8283944e5f4/aten/src/ATen/native/cpu/UpSampleKernel.cpp#L2009). We will seek additional help for this.
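For background, the core idea behind antialiased resampling (common to both implementations, even though their kernels differ) is to widen the reconstruction filter's support by the downscale factor, so each output pixel averages over all the input pixels it covers. The following is a minimal 1-D sketch of that idea only; it is not PyTorch's kernel from the link above, and its outputs are not expected to match either ONNX or PyTorch:

```python
import numpy as np

def downsample_linear_aa_1d(x, out_size):
    """Antialiased linear downsample along the last axis (illustrative sketch).

    The triangle filter's support is widened by the scale factor when
    downscaling, which is the essence of antialiasing.
    """
    in_size = x.shape[-1]
    scale = in_size / out_size
    support = max(1.0, scale)  # filter support grows when downscaling
    out = np.empty(x.shape[:-1] + (out_size,), dtype=np.float64)
    for i in range(out_size):
        # Map the output pixel center back into input coordinates.
        center = (i + 0.5) * scale - 0.5
        lo = max(0, int(np.floor(center - support)))
        hi = min(in_size - 1, int(np.ceil(center + support)))
        j = np.arange(lo, hi + 1)
        # Triangle weights over the widened support, clipped at zero.
        w = np.clip(1.0 - np.abs(j - center) / support, 0.0, None)
        w /= w.sum()  # normalize so the weights sum to 1
        out[..., i] = (x[..., j] * w).sum(axis=-1)
    return out

# Downsample the first row of the 4x4 example above to a single value.
r = downsample_linear_aa_1d(np.array([2.0, 1.0, 1.0, 1.0]), 1)
print(r)
```

With antialiasing, all four inputs contribute (with weights 0.625, 0.875, 0.875, 0.625 before normalization), whereas a plain bilinear filter would only read the two pixels nearest the sample point.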

gramalingam pushed a commit that referenced this pull request Jun 18, 2025
…ilinear2d_aa functions (#2383)

This PR implements the missing anti-aliasing (AA) variants of upsample
functions that were requested in issue #1159:

- `aten__upsample_bicubic2d_aa` - bicubic 2D upsampling with
anti-aliasing
- `aten__upsample_bilinear2d_aa` - bilinear 2D upsampling with
anti-aliasing

## Changes Made

### Core Implementation
- **Modified helper functions** to support anti-aliasing:
  - Added an `antialias` parameter (default `0`) to `_aten_upsample_output_size()`
  - Added an `antialias` parameter (default `0`) to `_aten_upsample_scales()`
  - Maintains backward compatibility with existing code

- **Implemented AA functions** with the same signatures as the regular variants:
  ```python
  def aten__upsample_bicubic2d_aa(self, output_size, align_corners, scales_h=None, scales_w=None)
  def aten__upsample_bilinear2d_aa(self, output_size, align_corners, scales_h=None, scales_w=None)
  ```
  Both functions pass `antialias=1` to enable anti-aliasing in the ONNX Resize operator.

### Test Configuration
- **Added OpInfo entries** in `extra_opinfo.py` for both AA functions
- **Added TorchLibOpInfo entries** in `ops_test_data.py` with
`compare_shape_only_for_output=(0,)` since ONNX and PyTorch use
different anti-aliasing algorithms

## Technical Details

The AA variants use the same underlying logic as regular upsample
functions but enable anti-aliasing in the ONNX Resize operation. As
noted in the original issue discussion, ONNX and PyTorch implement
different anti-aliasing methods, so tests compare shapes rather than
exact values.

Example usage:
```python
import numpy as np
from onnxscript.function_libs.torch_lib.ops.nn import aten__upsample_bicubic2d_aa

# Create test input
input_tensor = np.array([[[[2,1,1,1], [1,1,1,1], [1,1,1,1], [1,1,1,1]]]]).astype(np.float32)
output_size = np.array([1,1]).astype(np.int64)

# Use AA upsampling
result = aten__upsample_bicubic2d_aa(input_tensor, output_size, align_corners=True)
print(result)  # Output: [[[[1.390625]]]]
```

## Testing Results
- ✅ All new AA function tests pass (2 passed, 1 skipped as expected for
trace-only functions)
- ✅ All existing upsample function tests continue to pass - no
regressions
- ✅ Functions produce expected different output when AA is enabled vs
disabled
- ✅ Helper functions work correctly with both `antialias=0` and
`antialias=1`

This implementation matches the approach from the previous PR #1259 and
completes the upsample function suite requested in the issue.

Fixes #1159. Fixes pytorch/pytorch#128818


---------

Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: justinchuby <[email protected]>
Co-authored-by: titaiwangms <[email protected]>
Co-authored-by: Justin Chu <[email protected]>