
Issue 2350 - support of all padding modes with tensors #2368


Merged: 4 commits into pytorch:master from vfdev-5/issue-2350 on Jun 30, 2020

Conversation


@vfdev-5 vfdev-5 commented Jun 30, 2020

Fixes #2350

Description:

  • Added support for the "edge" and "reflect" padding modes in functional_tensor.pad
  • Added tests
  • Updated docs
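The approach described above can be sketched roughly as follows. This is an illustrative sketch, not the exact torchvision code: the function name is hypothetical, and it assumes PIL-style mode names are mapped onto the names used by torch.nn.functional.pad ("edge" corresponds to torch's "replicate").

```python
import torch
import torch.nn.functional as F

def pad_image_tensor(img, padding, mode="constant", fill=0):
    # Hypothetical sketch of the technique in this PR: map PIL-style
    # padding mode names to the mode names torch.nn.functional.pad uses.
    mode_map = {"constant": "constant", "edge": "replicate", "reflect": "reflect"}
    torch_mode = mode_map[mode]
    left, top, right, bottom = padding
    out_dtype = img.dtype
    # reflect/replicate padding required floating point input at the time,
    # so integer images are temporarily cast to float32.
    need_cast = img.dtype not in (torch.float32, torch.float64)
    if need_cast:
        img = img.to(torch.float32)
    # 2D reflect/replicate padding expects a 4D (N, C, H, W) input.
    img = img.unsqueeze(0)
    if torch_mode == "constant":
        img = F.pad(img, (left, right, top, bottom), mode="constant", value=fill)
    else:
        img = F.pad(img, (left, right, top, bottom), mode=torch_mode)
    img = img.squeeze(0)
    if need_cast:
        img = img.to(out_dtype)
    return img
```

For a (C, H, W) uint8 image, "edge" padding replicates border pixels and "reflect" mirrors the interior, with the result cast back to the input dtype.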

@vfdev-5 vfdev-5 marked this pull request as ready for review June 30, 2020 09:53
codecov bot commented Jun 30, 2020

Codecov Report

Merging #2368 into master will increase coverage by 0.03%.
The diff coverage is 77.77%.


@@            Coverage Diff             @@
##           master    #2368      +/-   ##
==========================================
+ Coverage   68.49%   68.52%   +0.03%     
==========================================
  Files          93       93              
  Lines        7655     7670      +15     
  Branches     1177     1182       +5     
==========================================
+ Hits         5243     5256      +13     
  Misses       2075     2075              
- Partials      337      339       +2     
Impacted Files                                  Coverage Δ
torchvision/transforms/functional_tensor.py    65.45% <77.77%> (+2.12%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 446eac6...009c6a5.

@fmassa fmassa (Member) left a comment

Thanks for the PR Victor!

I have a couple of comments, let me know what you think

Comment on lines 23 to 24
if pil_tensor.dtype != tensor.dtype:
    pil_tensor = pil_tensor.to(tensor.dtype)
fmassa (Member):
Which case led you to perform a cast here? Is it for the extra float32 and float64 types?

I think it might be preferable to avoid doing implicit casts in this function, and instead perform the cast directly in the caller -- it's more explicit.

vfdev-5 (Collaborator, Author):

Right, I can cast explicitly before calling this function instead 👍
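The suggestion above can be illustrated with a small sketch. The helper name and call site are hypothetical, not the PR's actual test code; the point is that the comparison helper does no implicit casting, and the caller aligns dtypes explicitly first.

```python
import torch

def assert_equal_tensors(a, b):
    # Hypothetical comparison helper: deliberately no implicit casting
    # here; callers are expected to align dtypes explicitly beforehand.
    assert a.dtype == b.dtype, f"dtype mismatch: {a.dtype} vs {b.dtype}"
    assert torch.equal(a, b)

# Hypothetical call site: the PIL-derived tensor is cast explicitly
# to the dtype of the tensor result before comparing.
pil_tensor = torch.tensor([[0, 1], [2, 3]], dtype=torch.uint8)
tensor = torch.tensor([[0, 1], [2, 3]], dtype=torch.float64)
assert_equal_tensors(pil_tensor.to(tensor.dtype), tensor)
```

Keeping the cast at the call site makes every dtype conversion visible where it happens, which is the explicitness the review asks for.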


out_dtype = img.dtype
need_cast = False
if img.dtype not in (torch.float32, torch.float64):
fmassa (Member):
This could probably be optimized because constant padding supports uint8 types as well.

vfdev-5 (Collaborator, Author):
I agree, will fix!
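The optimization being agreed on here could look like the following sketch. The function name is hypothetical: since constant padding handles integer dtypes such as uint8 natively, only the reflect/replicate paths need the temporary float cast.

```python
import torch

def needs_float_cast(dtype, mode):
    # Hypothetical helper sketching the suggested optimization:
    # constant padding supports uint8 (and other integer dtypes)
    # directly, so only non-constant modes require a float cast.
    if mode == "constant":
        return False
    return dtype not in (torch.float32, torch.float64)
```

With this check, a uint8 image padded in "constant" mode avoids the round-trip cast entirely.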

@vfdev-5 vfdev-5 (Collaborator, Author) left a comment
Thanks for the review! I'll update it accordingly.


@fmassa fmassa (Member) left a comment

Looks great, thanks a lot!

@fmassa fmassa merged commit 6fe11d5 into pytorch:master Jun 30, 2020
@vfdev-5 vfdev-5 deleted the vfdev-5/issue-2350 branch June 30, 2020 13:34
de-vri-es pushed a commit to fizyr-forks/torchvision that referenced this pull request Aug 4, 2020
* [WIP] functional_tensor supports more padding modes

* [WIP] Support all padding modes

* Removed wip symmetric mode

* Improvements according to the review
Successfully merging this pull request may close these issues.

Support other modes of padding for torch Tensors