
Commit a56db8f

docs for resize
1 parent 897ac9c commit a56db8f

2 files changed: +10, -0 lines


torchvision/transforms/functional.py

Lines changed: 5 additions & 0 deletions
@@ -343,6 +343,11 @@ def resize(img: Tensor, size: List[int], interpolation: InterpolationMode = Inte
     If the image is torch Tensor, it is expected
     to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions
 
+    .. warning::
+        The output image might be different depending on its type: the interpolation of PIL images
+        and tensors is slightly different, which may lead to significant differences in the performance
+        of a network. Therefore, it is preferable to train and serve a model with the same input types.
+
     Args:
         img (PIL Image or Tensor): Image to be resized.
         size (sequence or int): Desired output size. If size is a sequence like

torchvision/transforms/transforms.py

Lines changed: 5 additions & 0 deletions
@@ -229,6 +229,11 @@ class Resize(torch.nn.Module):
     If the image is torch Tensor, it is expected
     to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions
 
+    .. warning::
+        The output image might be different depending on its type: the interpolation of PIL images
+        and tensors is slightly different, which may lead to significant differences in the performance
+        of a network. Therefore, it is preferable to train and serve a model with the same input types.
+
     Args:
         size (sequence or int): Desired output size. If size is a sequence like
            (h, w), output size will be matched to this. If size is an int,
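The warning added in both files comes down to backend details: PIL and the tensor path both perform bilinear resampling, but they differ in particulars such as antialiasing and rounding, so resized pixels rarely match bit-for-bit. As a rough illustration of what a bilinear resize computes — a pure-Python sketch using the half-pixel-center convention, not torchvision's actual implementation, with `bilinear_resize` as a hypothetical helper name:

```python
def bilinear_resize(img, out_h, out_w):
    """Bilinear resize of a 2-D grid (list of lists of numbers).

    Uses the half-pixel-center convention: the source coordinate for
    destination index d is (d + 0.5) * scale - 0.5, clamped to the
    valid range. Illustration only; real backends add antialiasing
    and other details that make their outputs slightly different.
    """
    in_h, in_w = len(img), len(img[0])
    sy, sx = in_h / out_h, in_w / out_w
    out = []
    for j in range(out_h):
        y = max(0.0, min(in_h - 1.0, (j + 0.5) * sy - 0.5))
        y0 = int(y)
        y1 = min(y0 + 1, in_h - 1)
        wy = y - y0
        row = []
        for i in range(out_w):
            x = max(0.0, min(in_w - 1.0, (i + 0.5) * sx - 0.5))
            x0 = int(x)
            x1 = min(x0 + 1, in_w - 1)
            wx = x - x0
            # Interpolate horizontally on the two bracketing rows,
            # then vertically between those two results.
            top = img[y0][x0] * (1 - wx) + img[y0][x1] * wx
            bot = img[y1][x0] * (1 - wx) + img[y1][x1] * wx
            row.append(top * (1 - wy) + bot * wy)
        out.append(row)
    return out
```

Even small implementation choices here (coordinate convention, clamping, whether a low-pass filter is applied when downscaling) change the output values, which is exactly why the docstring recommends using the same input type at training and serving time.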
