
LSTM's -> LSTMs in sequence_models_tutorial.py docs #1136


Merged: 3 commits into pytorch:master on Aug 28, 2020

Conversation

adelevie (Contributor)

No description provided.

netlify bot commented Aug 25, 2020

Deploy preview for pytorch-tutorials-preview ready!

Built with commit 17a85b6

https://deploy-preview-1136--pytorch-tutorials-preview.netlify.app

@brianjo brianjo merged commit 3191c0c into pytorch:master Aug 28, 2020
brianjo added a commit that referenced this pull request Sep 24, 2020
* Fix typo (#1118)

In the PyTorch tutorial, `torch` should be installed rather than `torchaudio`

* Recover the attributes of torch in memory_format_tutorial (#1112)

Co-authored-by: Brian Johnson <[email protected]>

* fix bugs for data_loading_tutorial and dcgan_faces_tutorial (#1092)

* Update autocast in dispatcher tutorial (#1128)

* draft

* fixes

* dont overrun the line

* Corrected model.resnet50() spelling (#1139)

Spelling mistake led to errors for beginners.

* Fix typo & Minor changes (#1138)

Thanks for the fixes @codingbowoo!

* Run win_test_worker manually (#1142)

Merging to clean up a build issue.

* Disable `pytorch_windows_builder_worker` config (#1143)

See #1141

* Update index.rst (#1140)

Fixed incorrect link.

* Update index.rst

Fix to broken link.

* LSTM's -> LSTMs in sequence_models_tutorial.py docs (#1136)

Co-authored-by: Brian Johnson <[email protected]>

* Added Ray Tune Hyperparameter Tuning Tutorial (#1066)

* Added Ray Tune Hyperparameter Tuning Tutorial

* Use nightly ray release

* Fix checkpoint API

Co-authored-by: Brian Johnson <[email protected]>

* Fix typo in "Introduction to Pytorch" tutorial (in NLP tutorial) (#1145)

* Fix typo in "Introduction to Pytorch" tutorial (in Pytorch for NLP tutorials)

* Dummy commit, to restart CI

* Revert dummy commit, to restart CI

* Revert whitespace changes

* Install torch not torch vision (#1153)

Small update to the recipe, instructing users to install `torch`, not `torchaudio`

* Python recipe for automatic mixed precision (#1137)

* fdsa

* Tutorial runs

* clarify one scaler per convergence run

* adjust sizes, dont run illustrative sections

* satisfying ocd

* MORE

* fdsa

* details

* rephrase

* fix formatting

* move script to recipes

* hopefully moved to recipes

* fdsa

* add amp_tutorial to toctree

* amp_tutorial -> amp_recipe

* looks like backtick highlights dont render in card_description

* correct path for amp_recipe.html

* arch notes and saving/restoring

* formatting

* fdsa

* Clarify autograd-autocast interaction for custom ops

* touchups

Co-authored-by: Brian Johnson <[email protected]>

* Fix model to be properly exported to ONNX (#1144)

Co-authored-by: Brian Johnson <[email protected]>

* Dist rpc merge (#1158)

* Create distributed_rpc_profiling.rst

* Update recipes_index.rst

* Add files via upload

* Update recipes_index.rst

* Fix typo "asynchronizely" -> "asynchronously" (#1154)

* Update dist_overview with additional information. (#1155)

Summary: 1) Added DDP + RPC tutorial.
2) Added a pointer to PT Distributed CONTRIBUTING.md.

Test Plan: Verified by loading the page locally.

Reviewers: sentinel

Subscribers:

Tasks:

Tags:

Co-authored-by: pritam <[email protected]>

* Add Performance Tuning guide recipe (#1161)

* Performance Tuning Guide - initial commit

* Minor tweaks

* Switched profiling guide thumbnail to pytorch logo

* Converted Tuning Guide to 80 chars/line

* Split tuning guide into general, GPU-specific and distributed optimizations.

* WAR to fix generation of header for 1st section

* Minor fixes

* Implemented changes suggested during initial review

* Changed 'addition assignment' to 'addition'

* Removed sentences about 1 CPU core for DataParallel training

* Reordering of layers is recommended only for DDP(find_unused_parameters=True)

* Fixed formatting

* s/constructors/model constructors and s/match/roughly match

* Fixed typos

* A fix for one line comment when removing runnable code. (#1165)

Co-authored-by: v-jizhang <[email protected]>

Co-authored-by: Nikita Shulga <[email protected]>
Co-authored-by: guyang3532 <[email protected]>
Co-authored-by: mcarilli <[email protected]>
Co-authored-by: Sayantan Das <[email protected]>
Co-authored-by: 장보우 Bowoo Jang <[email protected]>
Co-authored-by: Alan deLevie <[email protected]>
Co-authored-by: krfricke <[email protected]>
Co-authored-by: Vijay Viswanathan <[email protected]>
Co-authored-by: J. Randall Hunt <[email protected]>
Co-authored-by: Thiago Crepaldi <[email protected]>
Co-authored-by: Peter Whidden <[email protected]>
Co-authored-by: Pritam Damania <[email protected]>
Co-authored-by: pritam <[email protected]>
Co-authored-by: Szymon Migacz <[email protected]>
Co-authored-by: Jinlin Zhang <[email protected]>
Co-authored-by: v-jizhang <[email protected]>
rodrigo-techera pushed a commit to Experience-Monks/tutorials that referenced this pull request Nov 29, 2021
2 participants