Commit edceb74

Merge branch 'main' into add-new-theme
2 parents 44c4ac9 + 5f17335 commit edceb74

4 files changed: +115, -40 lines


README.md

Lines changed: 10 additions & 0 deletions
@@ -57,6 +57,16 @@ GALLERY_PATTERN="neural_style_transfer_tutorial.py" sphinx-build . _build
 
 The `GALLERY_PATTERN` variable respects regular expressions.
 
+## Spell Check
+
+You can run `pyspelling` to check for spelling errors in the tutorials. To check only Python files, run `pyspelling -n python`. To check only `.rst` files, use `pyspelling -n reST`. Currently, `.rst` spell checking is limited to the `beginner/` directory. Contributions to enable spell checking in other directories are welcome!
+
+```
+pyspelling              # full check (~3 mins)
+pyspelling -n python    # Python files only
+pyspelling -n reST      # reST files (only beginner/ dir currently included)
+```
+
 
 ## About contributing to PyTorch Documentation and Tutorials
 * You can find information about contributing to PyTorch documentation in the

docathon-leaderboard.md

Lines changed: 47 additions & 0 deletions
@@ -1,3 +1,50 @@
+# 🎉 PyTorch Docathon Leaderboard 2025 🎉
+
+This is the list of docathon contributors who participated in and contributed to the PyTorch H1 2025 docathon. A big shout out to everyone who participated!
+We have awarded points for each merged PR as follows:
+
+* easy - 2 points
+* medium - 5 points
+* advanced - 10 points
+
+We have granted half points (1, 2, and 5 respectively) for all additional PRs merged against the same issue.
+In some cases, we have awarded credit for PRs that were not merged or for issues that were closed without a merged PR.
+
+| Rank | Author | Points | PRs |
+|:---:|:------------|------:|:----|
+| 🥇 | [j-silv](https://github.com/j-silv) | 31 | [#155753](https://github.com/pytorch/pytorch/pull/155753), [#155659](https://github.com/pytorch/pytorch/pull/155659), [#155567](https://github.com/pytorch/pytorch/pull/155567), [#155540](https://github.com/pytorch/pytorch/pull/155540), [#155528](https://github.com/pytorch/pytorch/pull/155528), [#155198](https://github.com/pytorch/pytorch/pull/155198), [#155093](https://github.com/pytorch/pytorch/pull/155093), [#3389](https://github.com/pytorch/tutorials/pull/3389) |
+| 🥇 | [windsonsea](https://github.com/windsonsea) | 19 | [#155789](https://github.com/pytorch/pytorch/pull/155789), [#155520](https://github.com/pytorch/pytorch/pull/155520), [#156039](https://github.com/pytorch/pytorch/pull/156039), [#156009](https://github.com/pytorch/pytorch/pull/156009), [#155653](https://github.com/pytorch/pytorch/pull/155653) |
+| 🥇 | [kiszk](https://github.com/kiszk) | 16 | [#155762](https://github.com/pytorch/pytorch/pull/155762), [#155514](https://github.com/pytorch/pytorch/pull/155514), [#155351](https://github.com/pytorch/pytorch/pull/155351), [#155348](https://github.com/pytorch/pytorch/pull/155348), [#155347](https://github.com/pytorch/pytorch/pull/155347) |
+| 🥈 | [Rachel0619](https://github.com/Rachel0619) | 14 | [#155764](https://github.com/pytorch/pytorch/pull/155764), [#155482](https://github.com/pytorch/pytorch/pull/155482), [#3385](https://github.com/pytorch/tutorials/pull/3385), [#3381](https://github.com/pytorch/tutorials/pull/3381) |
+| 🥈 | [jafraustro](https://github.com/jafraustro) | 14 | [#155523](https://github.com/pytorch/pytorch/pull/155523), [#155369](https://github.com/pytorch/pytorch/pull/155369), [#133563](https://github.com/pytorch/pytorch/issues/133563), [#129446](https://github.com/pytorch/pytorch/issues/129446) |
+| 🥈 | [Dhia-naouali](https://github.com/Dhia-naouali) | 12 | [#155911](https://github.com/pytorch/pytorch/pull/155911), [#155840](https://github.com/pytorch/pytorch/pull/155840), [#155505](https://github.com/pytorch/pytorch/pull/155505) |
+| 🥈 | [loganthomas](https://github.com/loganthomas) | 12 | [#155702](https://github.com/pytorch/pytorch/pull/155702), [#155088](https://github.com/pytorch/pytorch/pull/155088), [#155649](https://github.com/pytorch/pytorch/pull/155649) |
+| 🥈 | [nirajkamal](https://github.com/nirajkamal) | 12 | [#155430](https://github.com/pytorch/pytorch/pull/155430), [#155228](https://github.com/pytorch/pytorch/pull/155228), [#3376](https://github.com/pytorch/tutorials/pull/3376) |
+| 🥉 | [Juliandlb](https://github.com/Juliandlb) | 10 | [#155987](https://github.com/pytorch/pytorch/pull/155987), [#155618](https://github.com/pytorch/pytorch/pull/155618) |
+| 🥉 | [ggsmith842](https://github.com/ggsmith842) | 7 | [#155767](https://github.com/pytorch/pytorch/pull/155767), [#155297](https://github.com/pytorch/pytorch/pull/155297) |
+| 🥉 | [ParagEkbote](https://github.com/ParagEkbote) | 7 | [#155683](https://github.com/pytorch/pytorch/pull/155683), [#155341](https://github.com/pytorch/pytorch/pull/155341) |
+| | [GdoongMathew](https://github.com/GdoongMathew) | 5 | [#155813](https://github.com/pytorch/pytorch/pull/155813) |
+| | [eromomon](https://github.com/eromomon) | 5 | [#155696](https://github.com/pytorch/pytorch/pull/155696) |
+| | [dggaytan](https://github.com/dggaytan) | 5 | [#155377](https://github.com/pytorch/pytorch/pull/155377) |
+| | [spzala](https://github.com/spzala) | 5 | [#155335](https://github.com/pytorch/pytorch/pull/155335) |
+| | [framoncg](https://github.com/framoncg) | 5 | [#155298](https://github.com/pytorch/pytorch/pull/155298) |
+| | [abhinav-TB](https://github.com/abhinav-TB) | 5 | [#155252](https://github.com/pytorch/pytorch/pull/155252) |
+| | [aagalleg](https://github.com/aagalleg) | 5 | [#155137](https://github.com/pytorch/pytorch/pull/155137) |
+| | [kiersten-stokes](https://github.com/kiersten-stokes) | 5 | [#155067](https://github.com/pytorch/pytorch/pull/155067) |
+| | [krishnakalyan3](https://github.com/krishnakalyan3) | 5 | [#3387](https://github.com/pytorch/tutorials/pull/3387) |
+| | [splion-360](https://github.com/splion-360) | 5 | [#3384](https://github.com/pytorch/tutorials/pull/3384) |
+| | [harshaljanjani](https://github.com/harshaljanjani) | 5 | [#3377](https://github.com/pytorch/tutorials/pull/3377) |
+| | [b-koopman](https://github.com/b-koopman) | 4 | [#155100](https://github.com/pytorch/pytorch/pull/155100), [#155889](https://github.com/pytorch/pytorch/pull/155889) |
+| | [thatgeeman](https://github.com/thatgeeman) | 4 | [#155404](https://github.com/pytorch/pytorch/pull/155404), [#156094](https://github.com/pytorch/pytorch/pull/156094) |
+| | [frost-intel](https://github.com/frost-intel) | 2 | [#3393](https://github.com/pytorch/tutorials/pull/3393) |
+| | [ANotFox](https://github.com/ANotFox) | 2 | [#155148](https://github.com/pytorch/pytorch/pull/155148) |
+| | [QasimKhan5x](https://github.com/QasimKhan5x) | 2 | [#155074](https://github.com/pytorch/pytorch/pull/155074) |
+| | [Ashish-Soni08](https://github.com/Ashish-Soni08) | 2 | [#3379](https://github.com/pytorch/tutorials/pull/3379) |
+| | [FORTFANOP](https://github.com/FORTFANOP) | 2 | [#3378](https://github.com/pytorch/tutorials/pull/3378) |
+| | [newtdms](https://github.com/newtdms) | 2 | [#155497](https://github.com/pytorch/pytorch/pull/155497) |
+| | [srini047](https://github.com/srini047) | 2 | [#155554](https://github.com/pytorch/pytorch/pull/155554) |
+
+
 # 🎉 Docathon H1 2024 Leaderboard 🎉
 
 This is the list of the docathon contributors that have participated and contributed to the PyTorch H1 2024 docathon.

intermediate_source/reinforcement_q_learning.py

Lines changed: 23 additions & 3 deletions
@@ -92,6 +92,24 @@
 )
 
 
+# To ensure reproducibility during training, you can fix the random seeds
+# by uncommenting the lines below. This makes the results consistent across
+# runs, which is helpful for debugging or comparing different approaches.
+#
+# That said, allowing randomness can be beneficial in practice, as it lets
+# the model explore different training trajectories.
+
+
+# seed = 42
+# random.seed(seed)
+# torch.manual_seed(seed)
+# env.reset(seed=seed)
+# env.action_space.seed(seed)
+# env.observation_space.seed(seed)
+# if torch.cuda.is_available():
+#     torch.cuda.manual_seed(seed)
+
+
 ######################################################################
 # Replay Memory
 # -------------
@@ -253,13 +271,15 @@ def forward(self, x):
 # EPS_DECAY controls the rate of exponential decay of epsilon, higher means a slower decay
 # TAU is the update rate of the target network
 # LR is the learning rate of the ``AdamW`` optimizer
+
 BATCH_SIZE = 128
 GAMMA = 0.99
 EPS_START = 0.9
-EPS_END = 0.05
-EPS_DECAY = 1000
+EPS_END = 0.01
+EPS_DECAY = 2500
 TAU = 0.005
-LR = 1e-4
+LR = 3e-4
+
 
 # Get number of actions from gym action space
 n_actions = env.action_space.n
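For reference, a minimal sketch of how the constants in this hunk typically drive exploration and target updates. The `eps_threshold` helper and the printed values below are illustrative only; the tutorial's actual `select_action` and soft-update code live elsewhere in the file and are not part of this diff.

```python
import math

# Values as set in this commit (standalone illustrative sketch).
EPS_START, EPS_END, EPS_DECAY, TAU = 0.9, 0.01, 2500, 0.005

def eps_threshold(steps_done):
    # Exponential decay from EPS_START toward EPS_END; a larger
    # EPS_DECAY makes the threshold fall more slowly.
    return EPS_END + (EPS_START - EPS_END) * math.exp(-steps_done / EPS_DECAY)

print(round(eps_threshold(0), 3))      # ~0.9   (mostly random actions at first)
print(round(eps_threshold(2500), 3))   # ~0.337 (after EPS_DECAY steps)
print(round(eps_threshold(10000), 3))  # ~0.026 (close to EPS_END)

# TAU drives the soft target-network update, roughly:
#   target_param <- TAU * policy_param + (1 - TAU) * target_param
```

Lowering EPS_END and raising EPS_DECAY, as this commit does, keeps exploration alive for longer while ending with a more greedy policy.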

recipes_source/recipes/profiler_recipe.py

Lines changed: 35 additions & 37 deletions
@@ -105,22 +105,24 @@
 
 ######################################################################
 # The output will look like (omitting some columns):
-
-# ---------------------------------  ------------  ------------  ------------  ------------
-#                              Name      Self CPU     CPU total  CPU time avg    # of Calls
-# ---------------------------------  ------------  ------------  ------------  ------------
-#                   model_inference       5.509ms      57.503ms      57.503ms             1
-#                      aten::conv2d     231.000us      31.931ms       1.597ms            20
-#                 aten::convolution     250.000us      31.700ms       1.585ms            20
-#                aten::_convolution     336.000us      31.450ms       1.573ms            20
-#          aten::mkldnn_convolution      30.838ms      31.114ms       1.556ms            20
-#                  aten::batch_norm     211.000us      14.693ms     734.650us            20
-#      aten::_batch_norm_impl_index     319.000us      14.482ms     724.100us            20
-#           aten::native_batch_norm       9.229ms      14.109ms     705.450us            20
-#                        aten::mean     332.000us       2.631ms     125.286us            21
-#                      aten::select       1.668ms       2.292ms       8.988us           255
-# ---------------------------------  ------------  ------------  ------------  ------------
-# Self CPU time total: 57.549m
+#
+# .. code-block:: sh
+#
+#    ---------------------------------  ------------  ------------  ------------  ------------
+#                                 Name      Self CPU     CPU total  CPU time avg    # of Calls
+#    ---------------------------------  ------------  ------------  ------------  ------------
+#                      model_inference       5.509ms      57.503ms      57.503ms             1
+#                         aten::conv2d     231.000us      31.931ms       1.597ms            20
+#                    aten::convolution     250.000us      31.700ms       1.585ms            20
+#                   aten::_convolution     336.000us      31.450ms       1.573ms            20
+#             aten::mkldnn_convolution      30.838ms      31.114ms       1.556ms            20
+#                     aten::batch_norm     211.000us      14.693ms     734.650us            20
+#         aten::_batch_norm_impl_index     319.000us      14.482ms     724.100us            20
+#              aten::native_batch_norm       9.229ms      14.109ms     705.450us            20
+#                           aten::mean     332.000us       2.631ms     125.286us            21
+#                         aten::select       1.668ms       2.292ms       8.988us           255
+#    ---------------------------------  ------------  ------------  ------------  ------------
+#    Self CPU time total: 57.549m
 #
 
 ######################################################################
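A table like the one in this hunk is typically produced by the recipe's basic CPU profiling pattern. The sketch below is illustrative only; the model, input shape, and variable names are assumptions and not part of this diff.

```python
import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity

model = models.resnet18()
inputs = torch.randn(5, 3, 224, 224)

# Label the region so it appears as "model_inference" in the report.
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    with record_function("model_inference"):
        model(inputs)

# Sort operators by total CPU time, as in the table above.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```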
@@ -209,8 +211,6 @@
 # Self CPU time total: 23.015ms
 # Self CUDA time total: 11.666ms
 #
-######################################################################
-
 
 ######################################################################
 # (Note: the first use of XPU profiling may bring an extra overhead.)
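The "Self CPU time total" and "Self CUDA time total" lines in the context above come from enabling both CPU and CUDA activities. A minimal sketch, assuming the standard `torch.profiler` API and an illustrative resnet18 model:

```python
import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity

model = models.resnet18().cuda()
inputs = torch.randn(5, 3, 224, 224).cuda()

# Collect CPU and CUDA activity so both time totals are reported.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    with record_function("model_inference"):
        model(inputs)

print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```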
@@ -220,28 +220,26 @@
 #
 # .. code-block:: sh
 #
-#-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------
-#                                                    Name      Self XPU    Self XPU %     XPU total  XPU time avg    # of Calls
-# -------------------------------------------------------  ------------  ------------  ------------  ------------  ------------
-#                                         model_inference       0.000us         0.00%       2.567ms       2.567ms             1
-#                                            aten::conv2d       0.000us         0.00%       1.871ms      93.560us            20
-#                                       aten::convolution       0.000us         0.00%       1.871ms      93.560us            20
-#                                      aten::_convolution       0.000us         0.00%       1.871ms      93.560us            20
-#                          aten::convolution_overrideable       1.871ms        72.89%       1.871ms      93.560us            20
-#                                                gen_conv       1.484ms        57.82%       1.484ms      74.216us            20
-#                                        aten::batch_norm       0.000us         0.00%     432.640us      21.632us            20
-#                            aten::_batch_norm_impl_index       0.000us         0.00%     432.640us      21.632us            20
-#                                 aten::native_batch_norm     432.640us        16.85%     432.640us      21.632us            20
-#                                            conv_reorder     386.880us        15.07%     386.880us       6.448us            60
-# -------------------------------------------------------  ------------  ------------  ------------  ------------  ------------
-# Self CPU time total: 712.486ms
-# Self XPU time total: 2.567ms
-
+#    ------------------------------  ------------  ------------  ------------  ------------  ------------
+#                              Name      Self XPU    Self XPU %     XPU total  XPU time avg    # of Calls
+#    ------------------------------  ------------  ------------  ------------  ------------  ------------
+#                   model_inference       0.000us         0.00%       2.567ms       2.567ms             1
+#                      aten::conv2d       0.000us         0.00%       1.871ms      93.560us            20
+#                 aten::convolution       0.000us         0.00%       1.871ms      93.560us            20
+#                aten::_convolution       0.000us         0.00%       1.871ms      93.560us            20
+#    aten::convolution_overrideable       1.871ms        72.89%       1.871ms      93.560us            20
+#                          gen_conv       1.484ms        57.82%       1.484ms      74.216us            20
+#                  aten::batch_norm       0.000us         0.00%     432.640us      21.632us            20
+#      aten::_batch_norm_impl_index       0.000us         0.00%     432.640us      21.632us            20
+#           aten::native_batch_norm     432.640us        16.85%     432.640us      21.632us            20
+#                      conv_reorder     386.880us        15.07%     386.880us       6.448us            60
+#    ------------------------------  ------------  ------------  ------------  ------------  ------------
+#    Self CPU time total: 712.486ms
+#    Self XPU time total: 2.567ms
 #
 
-
 ######################################################################
-# Note the occurrence of on-device kernels in the output (e.g. ``sgemm_32x32x32_NN``).
+# Note the occurrence of on-device kernels in the output (e.g. ``sgemm_32x32x32_NN`` for CUDA or ``gen_conv`` for XPU).
 
 ######################################################################
 # 4. Using profiler to analyze memory consumption
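The trailing context points at the recipe's section 4 on memory analysis. A minimal sketch of that usage, assuming `profile_memory=True` with the standard `torch.profiler` API (model and input are illustrative):

```python
import torch
import torchvision.models as models
from torch.profiler import profile, ProfilerActivity

model = models.resnet18()
inputs = torch.randn(5, 3, 224, 224)

# profile_memory=True reports tensor memory allocated and released
# by each operator alongside the timing columns.
with profile(activities=[ProfilerActivity.CPU],
             profile_memory=True, record_shapes=True) as prof:
    model(inputs)

print(prof.key_averages().table(sort_by="self_cpu_memory_usage", row_limit=10))
```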
