
Commit 6928b0d

address comment: describe antialiasing
1 parent a56db8f commit 6928b0d

2 files changed: +8 −6 lines

torchvision/transforms/functional.py

Lines changed: 4 additions & 3 deletions
@@ -344,9 +344,10 @@ def resize(img: Tensor, size: List[int], interpolation: InterpolationMode = Inte
     to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions

     .. warning::
-        The output image might be different depending on its type: the interpolation of PIL images
-        and tensors is slightly different, which may lead to significant differences in the performance
-        of a network. Therefore, it is preferable to train and serve a model with the same input types.
+        The output image might be different depending on its type: when downsampling, the interpolation of PIL images
+        and tensors is slightly different, because PIL applies antialiasing. This may lead to significant differences
+        in the performance of a network. Therefore, it is preferable to train and serve a model with the same input
+        types.

     Args:
         img (PIL Image or Tensor): Image to be resized.
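The added sentence describes an observable behavior: when downsampling, PIL applies antialiasing before resampling while the tensor kernel does not, so the two input types can give different pixels for the same call. A minimal sketch of how one could see this with the functional API (the random test image and the 256 -> 64 sizes are arbitrary choices for illustration, not part of the commit):

    import numpy as np
    import torch
    from PIL import Image
    import torchvision.transforms.functional as F
    from torchvision.transforms import InterpolationMode

    # Same image once as a PIL image and once as a CxHxW uint8 tensor.
    arr = np.random.default_rng(0).integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
    pil_img = Image.fromarray(arr)
    tensor_img = torch.from_numpy(arr).permute(2, 0, 1)

    # Downsample both through the same functional call.
    pil_out = F.resize(pil_img, [64, 64], interpolation=InterpolationMode.BILINEAR)
    tensor_out = F.resize(tensor_img, [64, 64], interpolation=InterpolationMode.BILINEAR)

    # PIL antialiases before resampling, the tensor path does not, so the
    # downsampled results generally differ.
    pil_as_tensor = torch.from_numpy(np.asarray(pil_out)).permute(2, 0, 1)
    print((pil_as_tensor.float() - tensor_out.float()).abs().mean())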

torchvision/transforms/transforms.py

Lines changed: 4 additions & 3 deletions
@@ -230,9 +230,10 @@ class Resize(torch.nn.Module):
     to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions

     .. warning::
-        The output image might be different depending on its type: the interpolation of PIL images
-        and tensors is slightly different, which may lead to significant differences in the performance
-        of a network. Therefore, it is preferable to train and serve a model with the same input types.
+        The output image might be different depending on its type: when downsampling, the interpolation of PIL images
+        and tensors is slightly different, because PIL applies antialiasing. This may lead to significant differences
+        in the performance of a network. Therefore, it is preferable to train and serve a model with the same input
+        types.

     Args:
         size (sequence or int): Desired output size. If size is a sequence like
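The same caveat carries over to the Resize module, which dispatches to the functional resize shown above. A short sketch under the same assumptions, reusing pil_img and tensor_img from the previous snippet:

    from torchvision import transforms

    resize = transforms.Resize([64, 64], interpolation=transforms.InterpolationMode.BILINEAR)
    pil_small = resize(pil_img)        # PIL path: antialiased downsampling
    tensor_small = resize(tensor_img)  # tensor path: no antialiasing at this commit

Training with one input type and serving with the other therefore feeds the network slightly different images, which is why the warning recommends keeping the input types consistent.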
