1 file changed, +2 −2 lines changed

@@ -86,7 +86,7 @@ def forward(self, x):
 #
 # It is also possible to run an existing single-GPU module on multiple GPUs
 # with just a few lines of changes. The code below shows how to decompose
-# ``torchvision.models.reset50()`` to two GPUs. The idea is to inherit from
+# ``torchvision.models.resnet50()`` to two GPUs. The idea is to inherit from
 # the existing ``ResNet`` module, and split the layers to two GPUs during
 # construction. Then, override the ``forward`` method to stitch two
 # sub-networks by moving the intermediate outputs accordingly.
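The "code below" that this comment refers to is not part of the diff. A minimal sketch of the approach the comment describes, assuming two CUDA devices and one possible split point between ``layer2`` and ``layer3`` (the exact split is an illustration, not taken from this change), might look like:

```python
import torch.nn as nn
from torchvision.models.resnet import ResNet, Bottleneck

num_classes = 1000  # assumed; standard ImageNet setting

class ModelParallelResNet50(ResNet):
    """ResNet50 split across two GPUs, as described in the comment above."""

    def __init__(self):
        # Build the standard ResNet50 architecture first.
        super().__init__(Bottleneck, [3, 4, 6, 3], num_classes=num_classes)

        # First half of the network lives on GPU 0 (split point is an assumption).
        self.seq1 = nn.Sequential(
            self.conv1, self.bn1, self.relu, self.maxpool,
            self.layer1, self.layer2,
        ).to('cuda:0')

        # Second half and the classifier live on GPU 1.
        self.seq2 = nn.Sequential(
            self.layer3, self.layer4, self.avgpool,
        ).to('cuda:1')
        self.fc.to('cuda:1')

    def forward(self, x):
        # Stitch the sub-networks by moving the intermediate output to GPU 1.
        x = self.seq2(self.seq1(x).to('cuda:1'))
        return self.fc(x.view(x.size(0), -1))
```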
@@ -136,7 +136,7 @@ def forward(self, x):
 #
 # Let us run an experiment to get a more quantitative view of the execution
 # time. In this experiment, we train ``ModelParallelResNet50`` and the existing
-# ``torchvision.models.reset50()`` by running random inputs and labels through
+# ``torchvision.models.resnet50()`` by running random inputs and labels through
 # them. After the training, the models will not produce any useful predictions,
 # but we can get a reasonable understanding of the execution times.
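The experiment itself is outside this diff. A minimal sketch of the kind of training loop the comment describes, with hypothetical batch-size and image-size constants chosen for illustration, could be:

```python
import torch
import torch.nn as nn
import torch.optim as optim

num_batches = 3
batch_size = 120
image_w, image_h = 128, 128
num_classes = 1000

def train(model):
    """Run a few batches of random inputs/labels through ``model`` so its
    execution time can be measured; ``model`` is assumed to be either a
    ModelParallelResNet50 or torchvision.models.resnet50().to('cuda:0')."""
    model.train(True)
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(model.parameters(), lr=0.001)

    # Random class indices, turned into one-hot targets below.
    one_hot_indices = torch.LongTensor(batch_size) \
                           .random_(0, num_classes) \
                           .view(batch_size, 1)

    for _ in range(num_batches):
        # Random inputs and one-hot labels; no real data is needed for timing.
        inputs = torch.randn(batch_size, 3, image_w, image_h)
        labels = torch.zeros(batch_size, num_classes) \
                      .scatter_(1, one_hot_indices, 1)

        # Feed inputs to the device holding the first sub-network.
        optimizer.zero_grad()
        outputs = model(inputs.to('cuda:0'))

        # Labels must live on the device that produced the outputs.
        loss_fn(outputs, labels.to(outputs.device)).backward()
        optimizer.step()
```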