Commit 67a654b
fix some reST syntax warnings (#393)
1 parent: b38343e

File tree

3 files changed: +12 -4 lines


advanced_source/torch_script_custom_ops.rst
Lines changed: 3 additions & 3 deletions

@@ -23,7 +23,7 @@ Python and in their serialized form directly in C++.
 The following paragraphs give an example of writing a TorchScript custom op to
 call into `OpenCV <https://www.opencv.org>`_, a computer vision library written
 in C++. We will discuss how to work with tensors in C++, how to efficiently
-convert them to third party tensor formats (in this case, OpenCV ``Mat``s), how
+convert them to third party tensor formats (in this case, OpenCV ``Mat`` s), how
 to register your operator with the TorchScript runtime and finally how to
 compile the operator and use it in Python and C++.

@@ -1018,7 +1018,7 @@ expects from a module), this route can be slightly quirky. That said, all you
 need is a ``setup.py`` file in place of the ``CMakeLists.txt`` which looks like
 this:

-.. code-block::
+.. code-block:: python

     from setuptools import setup
     from torch.utils.cpp_extension import BuildExtension, CppExtension

@@ -1081,7 +1081,7 @@ This will produce a shared library called ``warp_perspective.so``, which we can
 pass to ``torch.ops.load_library`` as we did earlier to make our operator
 visible to TorchScript:

-.. code-block::
+.. code-block:: python

     >>> import torch
     >>> torch.ops.load_library("warp_perspective.so")
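For context, the ``setup.py`` that the second hunk re-tags as Python typically looks like the sketch below. The ``warp_perspective`` name comes from the tutorial text shown in the diff; the ``op.cpp`` source file name is an assumption, not part of this commit:

```python
# Hypothetical setup.py sketch for building the custom op with setuptools.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="warp_perspective",
    ext_modules=[
        # "op.cpp" is an assumed file name; point this at the real C++ source.
        CppExtension("warp_perspective", ["op.cpp"]),
    ],
    # BuildExtension supplies the compiler/linker flags PyTorch extensions need.
    cmdclass={"build_ext": BuildExtension},
)
```

Building this produces the shared library that the third hunk then loads via ``torch.ops.load_library``.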

beginner_source/blitz/cifar10_tutorial.py
Lines changed: 1 addition & 1 deletion

@@ -108,7 +108,7 @@ def imshow(img):

 ########################################################################
 # 2. Define a Convolutional Neural Network
-# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 # Copy the neural network from the Neural Networks section before and modify it to
 # take 3-channel images (instead of 1-channel images as it was defined).
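The warning behind this hunk is easy to reproduce arithmetically: Sphinx emits "Title underline too short" when a section underline has fewer characters than its title. A quick pure-Python check, with the caret counts approximated from the diff as rendered here:

```python
# The section title and the two underlines from the hunk above.
title = "2. Define a Convolutional Neural Network"
old_underline = "^" * 38  # approximate length of the removed line
new_underline = "^" * 41  # approximate length of the added line

# Sphinx warns when the underline is shorter than the title text.
assert len(title) == 40
assert len(old_underline) < len(title)    # old line triggered the warning
assert len(new_underline) >= len(title)   # new line is long enough
```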

beginner_source/nn_tutorial.py
Lines changed: 8 additions & 0 deletions

@@ -322,6 +322,7 @@ def forward(self, xb):
 # Previously for our training loop we had to update the values for each parameter
 # by name, and manually zero out the grads for each parameter separately, like this:
 # ::
+#
 #   with torch.no_grad():
 #       weights -= weights.grad * lr
 #       bias -= bias.grad * lr

@@ -334,6 +335,7 @@ def forward(self, xb):
 # and less prone to the error of forgetting some of our parameters, particularly
 # if we had a more complicated model:
 # ::
+#
 #   with torch.no_grad():
 #       for p in model.parameters(): p -= p.grad * lr
 #       model.zero_grad()

@@ -408,12 +410,14 @@ def forward(self, xb):
 #
 # This will let us replace our previous manually coded optimization step:
 # ::
+#
 #   with torch.no_grad():
 #       for p in model.parameters(): p -= p.grad * lr
 #       model.zero_grad()
 #
 # and instead use just:
 # ::
+#
 #   opt.step()
 #   opt.zero_grad()
 #

@@ -476,12 +480,14 @@ def get_model():
 ###############################################################################
 # Previously, we had to iterate through minibatches of x and y values separately:
 # ::
+#
 #   xb = x_train[start_i:end_i]
 #   yb = y_train[start_i:end_i]
 #
 #
 # Now, we can do these two steps together:
 # ::
+#
 #   xb,yb = train_ds[i*bs : i*bs+bs]
 #

@@ -516,12 +522,14 @@ def get_model():
 ###############################################################################
 # Previously, our loop iterated over batches (xb, yb) like this:
 # ::
+#
 #   for i in range((n-1)//bs + 1):
 #       xb,yb = train_ds[i*bs : i*bs+bs]
 #       pred = model(xb)
 #
 # Now, our loop is much cleaner, as (xb, yb) are loaded automatically from the data loader:
 # ::
+#
 #   for xb,yb in train_dl:
 #       pred = model(xb)
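The markup fixes above do not change the minibatch logic they document, but the equivalence those comments describe (slicing ``x`` and ``y`` separately versus taking one slice of the dataset per batch) can be sketched with plain lists, no ``torch`` required. Here ``train_ds`` is just a list of ``(x, y)`` pairs standing in for the tutorial's ``TensorDataset``:

```python
# Compare the two batching styles from the diff using plain Python lists.
bs = 2
x_train = [0, 1, 2, 3, 4, 5]
y_train = [10, 11, 12, 13, 14, 15]
train_ds = list(zip(x_train, y_train))  # stand-in for TensorDataset(x, y)
n = len(x_train)

# Old style: slice the inputs and the targets separately.
old_batches = []
for i in range((n - 1) // bs + 1):
    xb = x_train[i * bs : i * bs + bs]
    yb = y_train[i * bs : i * bs + bs]
    old_batches.append((xb, yb))

# New style: take one slice of the dataset and unzip it into (xb, yb).
# (A real TensorDataset returns the (xb, yb) pair directly from one index.)
new_batches = []
for i in range((n - 1) // bs + 1):
    batch = train_ds[i * bs : i * bs + bs]
    xb = [x for x, _ in batch]
    yb = [y for _, y in batch]
    new_batches.append((xb, yb))

assert old_batches == new_batches
print(new_batches[0])  # ([0, 1], [10, 11])
```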
