fix some reST syntax warnings (#393) #394

Closed · wants to merge 1 commit
6 changes: 3 additions & 3 deletions advanced_source/torch_script_custom_ops.rst
@@ -23,7 +23,7 @@ Python and in their serialized form directly in C++.
 The following paragraphs give an example of writing a TorchScript custom op to
 call into `OpenCV <https://www.opencv.org>`_, a computer vision library written
 in C++. We will discuss how to work with tensors in C++, how to efficiently
-convert them to third party tensor formats (in this case, OpenCV ``Mat``s), how
+convert them to third party tensor formats (in this case, OpenCV ``Mat`` s), how
 to register your operator with the TorchScript runtime and finally how to
 compile the operator and use it in Python and C++.

@@ -1018,7 +1018,7 @@ expects from a module), this route can be slightly quirky. That said, all you
 need is a ``setup.py`` file in place of the ``CMakeLists.txt`` which looks like
 this:

-.. code-block::
+.. code-block:: python

    from setuptools import setup
    from torch.utils.cpp_extension import BuildExtension, CppExtension
@@ -1081,7 +1081,7 @@ This will produce a shared library called ``warp_perspective.so``, which we can
 pass to ``torch.ops.load_library`` as we did earlier to make our operator
 visible to TorchScript:

-.. code-block::
+.. code-block:: python

    >>> import torch
    >>> torch.ops.load_library("warp_perspective.so")
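The hunk above tags the ``setup.py`` snippet as Python; the diff only shows its first two import lines. For context, a full ``setup.py`` built on ``torch.utils.cpp_extension`` typically looks like the sketch below. The extension name ``warp_perspective`` is taken from the shared-library name in this diff, but the source file name ``op.cpp`` is an illustrative placeholder, not something this PR specifies:

```python
# Sketch of a setup.py for building a C++ TorchScript custom op with
# torch.utils.cpp_extension. The source file "op.cpp" is a placeholder.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="warp_perspective",
    ext_modules=[
        CppExtension(
            name="warp_perspective",
            sources=["op.cpp"],  # placeholder path to the C++ operator source
        )
    ],
    # BuildExtension supplies the compiler/linker flags PyTorch extensions need.
    cmdclass={"build_ext": BuildExtension},
)
```

Running ``python setup.py build`` would then produce the ``warp_perspective.so`` that the second hunk loads with ``torch.ops.load_library``.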
2 changes: 1 addition & 1 deletion beginner_source/blitz/cifar10_tutorial.py
@@ -108,7 +108,7 @@ def imshow(img):

 ########################################################################
 # 2. Define a Convolutional Neural Network
-# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 # Copy the neural network from the Neural Networks section before and modify it to
 # take 3-channel images (instead of 1-channel images as it was defined).

8 changes: 8 additions & 0 deletions beginner_source/nn_tutorial.py
@@ -322,6 +322,7 @@ def forward(self, xb):
 # Previously for our training loop we had to update the values for each parameter
 # by name, and manually zero out the grads for each parameter separately, like this:
 # ::
+#
 #     with torch.no_grad():
 #         weights -= weights.grad * lr
 #         bias -= bias.grad * lr
@@ -334,6 +335,7 @@ def forward(self, xb):
 # and less prone to the error of forgetting some of our parameters, particularly
 # if we had a more complicated model:
 # ::
+#
 #     with torch.no_grad():
 #         for p in model.parameters(): p -= p.grad * lr
 #         model.zero_grad()
@@ -408,12 +410,14 @@ def forward(self, xb):
 #
 # This will let us replace our previous manually coded optimization step:
 # ::
+#
 #     with torch.no_grad():
 #         for p in model.parameters(): p -= p.grad * lr
 #         model.zero_grad()
 #
 # and instead use just:
 # ::
+#
 #     opt.step()
 #     opt.zero_grad()
 #
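The hunk above contrasts the tutorial's manual ``torch.no_grad()`` parameter update with ``opt.step()`` / ``opt.zero_grad()``. What the optimizer encapsulates can be sketched in plain Python with a toy stand-in (illustrative only; the tutorial itself uses ``torch.optim.SGD``, and the dict-based "parameters" below are not PyTorch's representation):

```python
# Toy stand-in for an SGD optimizer, for illustration only.
# Each "parameter" is a dict holding a value and its accumulated gradient.
class ToySGD:
    def __init__(self, params, lr):
        self.params = params
        self.lr = lr

    def step(self):
        # Plays the role of: with torch.no_grad(): p -= p.grad * lr
        for p in self.params:
            p["value"] -= self.lr * p["grad"]

    def zero_grad(self):
        # Plays the role of: model.zero_grad()
        for p in self.params:
            p["grad"] = 0.0

params = [{"value": 1.0, "grad": 0.5}, {"value": -2.0, "grad": -1.0}]
opt = ToySGD(params, lr=0.1)
opt.step()       # each value moves opposite its gradient, scaled by lr
opt.zero_grad()  # gradients reset so the next backward pass starts clean
assert abs(params[0]["value"] - 0.95) < 1e-9
assert abs(params[1]["value"] - (-1.9)) < 1e-9
assert all(p["grad"] == 0.0 for p in params)
```

The point of the diff's extra blank comment line is unrelated to this logic: reST requires a blank line between the ``::`` marker and the literal block, which is exactly what the ``+#`` lines add.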
@@ -476,12 +480,14 @@ def get_model():
 ###############################################################################
 # Previously, we had to iterate through minibatches of x and y values separately:
 # ::
+#
 #     xb = x_train[start_i:end_i]
 #     yb = y_train[start_i:end_i]
 #
 #
 # Now, we can do these two steps together:
 # ::
+#
 #     xb,yb = train_ds[i*bs : i*bs+bs]
 #

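The hunk above contrasts slicing ``x_train`` and ``y_train`` separately with one indexing step on a combined dataset. The mechanics can be shown without PyTorch using a minimal pure-Python stand-in for ``torch.utils.data.TensorDataset`` (the class below is illustrative, not the tutorial's actual code):

```python
# Minimal stand-in for torch.utils.data.TensorDataset, for illustration only.
class PairDataset:
    def __init__(self, x, y):
        assert len(x) == len(y)
        self.x, self.y = x, y

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        # Works for integer indices and slices alike, so a single indexing
        # expression returns the x-batch and y-batch together.
        return self.x[i], self.y[i]

x_train = list(range(10))
y_train = [2 * v for v in x_train]
train_ds = PairDataset(x_train, y_train)

bs, i = 3, 1
# Old style: slice x and y separately.
xb_old = x_train[i * bs : i * bs + bs]
yb_old = y_train[i * bs : i * bs + bs]
# New style: one combined indexing step, as in the diff above.
xb, yb = train_ds[i * bs : i * bs + bs]
assert (xb, yb) == (xb_old, yb_old) == ([3, 4, 5], [6, 8, 10])
```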
@@ -516,12 +522,14 @@ def get_model():
 ###############################################################################
 # Previously, our loop iterated over batches (xb, yb) like this:
 # ::
+#
 #     for i in range((n-1)//bs + 1):
 #         xb,yb = train_ds[i*bs : i*bs+bs]
 #         pred = model(xb)
 #
 # Now, our loop is much cleaner, as (xb, yb) are loaded automatically from the data loader:
 # ::
+#
 #     for xb,yb in train_dl:
 #         pred = model(xb)

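The final hunk's cleaner loop relies on the data loader yielding ready-made ``(xb, yb)`` batches. A minimal pure-Python batching generator (an illustrative stand-in for ``torch.utils.data.DataLoader``, using the same ``(n-1)//bs + 1`` batch count as the old loop above) shows what it replaces:

```python
# Illustrative stand-in for torch.utils.data.DataLoader: yields (xb, yb)
# batches so a training loop can read `for xb, yb in batches(...): ...`
# instead of doing index arithmetic in its body.
def batches(x, y, bs):
    n = len(x)
    for i in range((n - 1) // bs + 1):
        yield x[i * bs : i * bs + bs], y[i * bs : i * bs + bs]

x_train = list(range(7))
y_train = [v + 10 for v in x_train]

seen = [(xb, yb) for xb, yb in batches(x_train, y_train, bs=3)]
assert seen[0] == ([0, 1, 2], [10, 11, 12])
assert seen[-1] == ([6], [16])  # the last batch may be short
```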