Add Op (_upsample_bilinear2d_aa, _upsample_bicubic2d_aa) | feat(torchlib) #1259
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files (coverage diff of main vs. #1259):

|           | main   | #1259  | +/-    |
|-----------|--------|--------|--------|
| Coverage  | 78.68% | 78.85% | +0.17% |
| Files     | 119    | 119    |        |
| Lines     | 15762  | 15700  | -62    |
| Branches  | 2486   | 2481   | -5     |
| Hits      | 12403  | 12381  | -22    |
| Misses    | 2950   | 2911   | -39    |
| Partials  | 409    | 408    | -1     |

☔ View full report in Codecov by Sentry.
Test Results: 24 files ±0, 24 suites ±0, 1h 41m 53s ⏱️ +11m 9s. For more details on these failures, see this check. Results for commit cf4f4af; comparison against base commit 457e52e. This pull request removes 29 and adds 35 tests (renamed tests count towards both).
♻️ This comment has been updated with latest results.
It would be better if we match values because the values should be deterministic. Do we know how PyTorch does it?
Please see the description of this PR; I added a comparison between ONNX and PyTorch.
Would it be helpful to consult the PyTorch implementation? I suspect we need additional processing to implement antialiasing.
From our discussion: understanding the PyTorch implementation proved to be harder than anticipated (https://github.com/pytorch/pytorch/blob/bcf35c6ae62bb6560befa3550e37a8283944e5f4/aten/src/ATen/native/cpu/UpSampleKernel.cpp#L2009). We will seek additional help for this.
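For orientation, the general antialiased-resize recipe (the approach behind PIL-style resampling and, as far as I can tell, PyTorch's UpSampleKernel.cpp) is to stretch the interpolation kernel by the downscale factor and renormalize the weights, so that every input pixel covered by an output pixel contributes. Below is a minimal 1-D NumPy sketch of that idea; it is an illustration only, not the actual PyTorch code, and the helper names are made up:

```python
import numpy as np

def triangle(x):
    # Bilinear (tent) kernel: 1 - |x| on [-1, 1], zero outside.
    return np.clip(1.0 - np.abs(x), 0.0, None)

def resize_1d_bilinear_aa(values, out_size):
    """Downsample a 1-D signal with an antialiased bilinear filter (sketch)."""
    in_size = values.shape[-1]
    scale = in_size / out_size
    # With antialiasing, the kernel is stretched by the scale factor when
    # downsampling, so its support covers all contributing input samples.
    filter_scale = max(scale, 1.0)
    support = 1.0 * filter_scale  # bilinear kernel support is 1
    out = np.empty(out_size, dtype=np.float64)
    for i in range(out_size):
        center = (i + 0.5) * scale
        lo = max(int(np.floor(center - support)), 0)
        hi = min(int(np.ceil(center + support)), in_size)
        idx = np.arange(lo, hi)
        weights = triangle((idx + 0.5 - center) / filter_scale)
        out[i] = np.dot(values[idx], weights / weights.sum())
    return out

print(resize_1d_bilinear_aa(np.array([2.0, 1.0, 1.0, 1.0]), 2))
```

Without the `filter_scale` stretch this degenerates to ordinary bilinear interpolation, which only looks at the two nearest input samples and therefore aliases when downsampling.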
…ilinear2d_aa functions (#2383)

This PR implements the missing anti-aliasing (AA) variants of the upsample functions that were requested in issue #1159:

- `aten__upsample_bicubic2d_aa` - bicubic 2D upsampling with anti-aliasing
- `aten__upsample_bilinear2d_aa` - bilinear 2D upsampling with anti-aliasing

## Changes Made

### Core Implementation

- **Modified helper functions** to support anti-aliasing:
  - Added an `antialias` parameter (default=0) to `_aten_upsample_output_size()`
  - Added an `antialias` parameter (default=0) to `_aten_upsample_scales()`
  - Maintains backward compatibility with existing code
- **Implemented AA functions** with the same signatures as the regular variants:

  ```python
  def aten__upsample_bicubic2d_aa(self, output_size, align_corners, scales_h=None, scales_w=None)
  def aten__upsample_bilinear2d_aa(self, output_size, align_corners, scales_h=None, scales_w=None)
  ```

  Both functions pass `antialias=1` to enable ONNX Resize anti-aliasing.

### Test Configuration

- **Added OpInfo entries** in `extra_opinfo.py` for both AA functions
- **Added TorchLibOpInfo entries** in `ops_test_data.py` with `compare_shape_only_for_output=(0,)`, since ONNX and PyTorch use different anti-aliasing algorithms

## Technical Details

The AA variants use the same underlying logic as the regular upsample functions but enable anti-aliasing in the ONNX Resize operation. As noted in the original issue discussion, ONNX and PyTorch implement different anti-aliasing methods, so the tests compare shapes rather than exact values.

Example usage:

```python
import numpy as np
from onnxscript.function_libs.torch_lib.ops.nn import aten__upsample_bicubic2d_aa

# Create test input
input_tensor = np.array([[[[2, 1, 1, 1],
                           [1, 1, 1, 1],
                           [1, 1, 1, 1],
                           [1, 1, 1, 1]]]]).astype(np.float32)
output_size = np.array([1, 1]).astype(np.int64)

# Use AA upsampling
result = aten__upsample_bicubic2d_aa(input_tensor, output_size, align_corners=True)
print(result)  # Output: [[[[1.390625]]]]
```

## Testing Results

- ✅ All new AA function tests pass (2 passed, 1 skipped as expected for trace-only functions)
- ✅ All existing upsample function tests continue to pass, with no regressions
- ✅ Functions produce the expected different output when AA is enabled vs. disabled
- ✅ Helper functions work correctly with both `antialias=0` and `antialias=1`

This implementation matches the approach from the previous PR #1259 and completes the upsample function suite requested in the issue.

Fixes #1159.
Fixes pytorch/pytorch#128818

---------

Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: justinchuby <[email protected]>
Co-authored-by: titaiwangms <[email protected]>
Co-authored-by: Justin Chu <[email protected]>
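Since the AA variants ultimately just toggle the `antialias` attribute on ONNX `Resize` (available from opset 18), here is a small standalone sketch that exercises that attribute directly with onnx and onnxruntime, independent of torchlib. This assumes a reasonably recent onnx/onnxruntime with opset-18 Resize support; the graph and tensor names are arbitrary:

```python
import numpy as np
import onnx
import onnxruntime as ort
from onnx import TensorProto, helper

# A single Resize node with antialias=1; the empty strings skip the optional
# roi and scales inputs, so the target shape comes from the "sizes" input.
node = helper.make_node(
    "Resize",
    inputs=["X", "", "", "sizes"],
    outputs=["Y"],
    mode="cubic",
    coordinate_transformation_mode="align_corners",
    antialias=1,
)
graph = helper.make_graph(
    [node],
    "resize_aa_demo",
    [
        helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 1, 4, 4]),
        helper.make_tensor_value_info("sizes", TensorProto.INT64, [4]),
    ],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 1, 1, 1])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 18)])
onnx.checker.check_model(model)

x = np.array([[[[2, 1, 1, 1],
                [1, 1, 1, 1],
                [1, 1, 1, 1],
                [1, 1, 1, 1]]]], dtype=np.float32)
sizes = np.array([1, 1, 1, 1], dtype=np.int64)
session = ort.InferenceSession(model.SerializeToString())
print(session.run(None, {"X": x, "sizes": sizes})[0])
```

Flipping `antialias` back to 0 in the node above reproduces the behaviour of the regular (non-AA) upsample functions.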
It seems that the antialias method differs between ONNX and PyTorch, so we can just compare the shape instead of the values.
Below is the difference between ONNX and PyTorch:
ONNX output = [[[[1.390625]]]]
Torch output = tensor([[[[2.2656]]]])
I also tried some other parameter combinations, but none of them matched torch.
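To reproduce the PyTorch side of the comparison, something along these lines should work (assuming a PyTorch version where `F.interpolate` supports `antialias=True` for bicubic mode):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[[[2.0, 1.0, 1.0, 1.0],
                    [1.0, 1.0, 1.0, 1.0],
                    [1.0, 1.0, 1.0, 1.0],
                    [1.0, 1.0, 1.0, 1.0]]]])

# Bicubic downsample to 1x1 with antialiasing enabled; this dispatches to
# aten::_upsample_bicubic2d_aa under the hood.
torch_out = F.interpolate(x, size=(1, 1), mode="bicubic",
                          align_corners=True, antialias=True)
print(torch_out)
```

Because the two backends use different antialiasing filters, the values are expected to differ, which is why the tests only compare output shapes.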