Commit 34c8a9d

twakchsasank authored and committed
typos (pytorch#178)
1 parent 77fd1f5 commit 34c8a9d

File tree

1 file changed (+9, −9 lines)


intermediate_source/spatial_transformer_tutorial.py

Lines changed: 9 additions & 9 deletions
@@ -9,20 +9,20 @@
 In this tutorial, you will learn how to augment your network using
 a visual attention mechanism called spatial transformer
 networks. You can read more about the spatial transformer
-networks in `DeepMind paper <https://arxiv.org/abs/1506.02025>`__
+networks in the `DeepMind paper <https://arxiv.org/abs/1506.02025>`__
 
 Spatial transformer networks are a generalization of differentiable
 attention to any spatial transformation. Spatial transformer networks
-(STN for short) allows a neural network to learn how to do spatial
-transformations to the input image in order to enhance the geometric
+(STN for short) allow a neural network to learn how to perform spatial
+transformations on the input image in order to enhance the geometric
 invariance of the model.
-For example it can crop a region of interest, scale and correct
-the orientation of an image. It can be a useful mechanism because CNN
-are not invariant to rotation and scale and more generally : affine
+For example, it can crop a region of interest, scale and correct
+the orientation of an image. It can be a useful mechanism because CNNs
+are not invariant to rotation and scale and more general affine
 transformations.
 
 One of the best things about STN is the ability to simply plug it into
-any existing CNN with very little modifications.
+any existing CNN with very little modification.
 """
 # License: BSD
 # Author: Ghassen Hamrouni
@@ -76,7 +76,7 @@
 # the spatial transformations that enhances the global accuracy.
 # - The grid generator generates a grid of coordinates in the input
 #   image corresponding to each pixel from the output image.
-# - The sampler uses the parameters of the transformation and apply
+# - The sampler uses the parameters of the transformation and applies
 #   it to the input image.
 #
 # .. figure:: /_static/img/stn/stn-arch.png
@@ -133,7 +133,7 @@ def forward(self, x):
         # transform the input
         x = self.stn(x)

-        # Perform the usual froward pass
+        # Perform the usual forward pass
         x = F.relu(F.max_pool2d(self.conv1(x), 2))
         x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
         x = x.view(-1, 320)
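The comments touched by this diff describe the three-stage STN pipeline: a localization network regresses transformation parameters, a grid generator builds sampling coordinates, and a sampler applies them to the input. A minimal sketch of that pipeline, not the tutorial's exact code: `MiniSTN` and its tiny linear localization network are hypothetical stand-ins (the tutorial uses a convolutional localization network), while `F.affine_grid` plays the grid generator and `F.grid_sample` the sampler.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MiniSTN(nn.Module):
    """Hypothetical minimal spatial transformer for 1x28x28 inputs."""

    def __init__(self):
        super().__init__()
        # Toy localization network: regress the 6 affine parameters theta.
        self.loc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 32),
            nn.ReLU(),
            nn.Linear(32, 6),
        )
        # Initialize to the identity transform so training starts stably.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float)
        )

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)  # one 2x3 affine matrix per sample
        # Grid generator: coordinates in the input for each output pixel.
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        # Sampler: apply the transformation to the input image.
        return F.grid_sample(x, grid, align_corners=False)


x = torch.randn(4, 1, 28, 28)
out = MiniSTN()(x)
print(out.shape)  # same shape as the input; identity init leaves x unchanged
```

Because the module maps an image batch to an image batch of the same shape, it can be inserted in front of an existing CNN's `forward` with no other changes, which is the "plug it into any existing CNN" property the diff's prose mentions.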
