Commit 6ab3a37

Corrected model.resnet50() spelling (#1139)
Spelling mistake led to errors for beginners.
1 parent de2571f commit 6ab3a37

File tree

1 file changed: +2 −2 lines changed

intermediate_source/model_parallel_tutorial.py

+2 −2
@@ -86,7 +86,7 @@ def forward(self, x):
 #
 # It is also possible to run an existing single-GPU module on multiple GPUs
 # with just a few lines of changes. The code below shows how to decompose
-# ``torchvision.models.reset50()`` to two GPUs. The idea is to inherit from
+# ``torchvision.models.resnet50()`` to two GPUs. The idea is to inherit from
 # the existing ``ResNet`` module, and split the layers to two GPUs during
 # construction. Then, override the ``forward`` method to stitch two
 # sub-networks by moving the intermediate outputs accordingly.
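For context, the hunk above is the tutorial prose describing the technique: inherit from the existing ``ResNet`` module, place the two halves of ``torchvision.models.resnet50()`` on different GPUs during construction, and override ``forward`` to move the intermediate activation between them. Below is a minimal sketch of that idea, assuming two visible CUDA devices ('cuda:0' and 'cuda:1'); the class name and the exact point at which the layers are split are illustrative choices, not necessarily the tutorial's own code.

import torch
import torch.nn as nn
from torchvision.models.resnet import ResNet, Bottleneck


class ModelParallelResNet50(ResNet):
    def __init__(self, num_classes=1000):
        # Bottleneck blocks in a [3, 4, 6, 3] layout give the standard ResNet-50.
        super().__init__(Bottleneck, [3, 4, 6, 3], num_classes=num_classes)

        # First half of the network lives on GPU 0.
        self.seq1 = nn.Sequential(
            self.conv1, self.bn1, self.relu, self.maxpool,
            self.layer1, self.layer2,
        ).to('cuda:0')

        # Second half, plus the classifier, lives on GPU 1.
        self.seq2 = nn.Sequential(
            self.layer3, self.layer4, self.avgpool,
        ).to('cuda:1')
        self.fc.to('cuda:1')

    def forward(self, x):
        # Stitch the two sub-networks by moving the intermediate output
        # from GPU 0 to GPU 1 before the second half runs.
        x = self.seq2(self.seq1(x).to('cuda:1'))
        return self.fc(torch.flatten(x, 1))

With this split, a forward pass takes input on 'cuda:0' and returns logits on 'cuda:1', e.g. ModelParallelResNet50()(torch.randn(8, 3, 224, 224).to('cuda:0')).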
@@ -136,7 +136,7 @@ def forward(self, x):
 #
 # Let us run an experiment to get a more quantitative view of the execution
 # time. In this experiment, we train ``ModelParallelResNet50`` and the existing
-# ``torchvision.models.reset50()`` by running random inputs and labels through
+# ``torchvision.models.resnet50()`` by running random inputs and labels through
 # them. After the training, the models will not produce any useful predictions,
 # but we can get a reasonable understanding of the execution times.
 
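The hunk above describes a timing experiment: both models are trained on random inputs and labels purely to compare execution time, not to learn anything useful. A rough sketch of such an experiment follows, reusing the ``ModelParallelResNet50`` sketch above; the batch size, image size, iteration count, and optimizer settings are assumptions for illustration, not values taken from the tutorial.

import timeit

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

num_batches = 3
batch_size = 120
num_classes = 1000


def train(model, input_device, output_device):
    # Train on random data; only the wall-clock time is of interest.
    model.train()
    loss_fn = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=0.001)

    for _ in range(num_batches):
        inputs = torch.randn(batch_size, 3, 224, 224).to(input_device)
        labels = torch.randint(0, num_classes, (batch_size,)).to(output_device)

        optimizer.zero_grad()
        outputs = model(inputs)
        loss_fn(outputs, labels).backward()
        optimizer.step()


# Single-GPU baseline: inputs and labels both on 'cuda:0'.
single_gpu = models.resnet50(num_classes=num_classes).to('cuda:0')
print('single GPU:', timeit.timeit(lambda: train(single_gpu, 'cuda:0', 'cuda:0'), number=1))

# Model-parallel version: inputs enter on 'cuda:0', outputs (and labels) live on 'cuda:1'.
model_parallel = ModelParallelResNet50(num_classes=num_classes)
print('model parallel:', timeit.timeit(lambda: train(model_parallel, 'cuda:0', 'cuda:1'), number=1))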
