diff --git a/README.md b/README.md
index 90e5e2f..c44848e 100644
--- a/README.md
+++ b/README.md
@@ -639,7 +639,7 @@ We will need three inputs.
 
 Since we're using the SSD300 variant, the images would need to be sized at `300, 300` pixels and in the RGB format.
 
-Remember, we're using a VGG-16 base pretrained on ImageNet that is already available in PyTorch's `torchvision` module. [This page](https://pytorch.org/docs/master/torchvision/models.html) details the preprocessing or transformation we would need to perform in order to use this model – pixel values must be in the range [0,1] and we must then normalize the image by the mean and standard deviation of the ImageNet images' RGB channels.
+Remember, we're using a VGG-16 base pretrained on ImageNet that is already available in PyTorch's `torchvision` module. [This page](https://pytorch.org/docs/stable/torchvision/models.html) details the preprocessing or transformation we would need to perform in order to use this model – pixel values must be in the range [0,1] and we must then normalize the image by the mean and standard deviation of the ImageNet images' RGB channels.
 
 ```python
 mean = [0.485, 0.456, 0.406]
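
For context (not part of the diff itself), a minimal sketch of the preprocessing the changed paragraph describes, assuming torchvision's standard transforms API: resize to `300, 300`, scale pixel values to [0,1], then normalize by the ImageNet channel statistics. The image path and variable names here are hypothetical.

```python
from PIL import Image
import torchvision.transforms as transforms

# ImageNet channel statistics used to normalize the RGB inputs
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

preprocess = transforms.Compose([
    transforms.Resize((300, 300)),            # SSD300 expects 300x300 inputs
    transforms.ToTensor(),                    # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=mean, std=std), # normalize each RGB channel
])

image = Image.open('some_image.jpg').convert('RGB')  # hypothetical path
image_tensor = preprocess(image).unsqueeze(0)        # add a batch dimension
```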