Commit 6b0ec10

docs: include external pages (#17826)
* pull docs
* local
* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci

* ...
* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci

* replace
* strategies
* 1.0.0
* skip
* links
* more

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent b2d5fdf commit 6b0ec10

File tree

16 files changed (+97, -252 lines)


.actions/assistant.py

Lines changed: 36 additions & 0 deletions
@@ -431,6 +431,42 @@ def copy_replace_imports(
             source_dir, source_imports, target_imports, target_dir=target_dir, lightning_by=lightning_by
         )

+    @staticmethod
+    def pull_docs_files(
+        gh_user_repo: str,
+        target_dir: str = "docs/source-pytorch/XXX",
+        checkout: str = "tags/1.0.0",
+        source_dir: str = "docs/source",
+    ) -> None:
+        """Pull docs pages from external source and append to local docs."""
+        import zipfile
+
+        zip_url = f"https://github.com/{gh_user_repo}/archive/refs/{checkout}.zip"
+
+        with tempfile.TemporaryDirectory() as tmp:
+            zip_file = os.path.join(tmp, "repo.zip")
+            urllib.request.urlretrieve(zip_url, zip_file)
+
+            with zipfile.ZipFile(zip_file, "r") as zip_ref:
+                zip_ref.extractall(tmp)
+
+            zip_dirs = [d for d in glob.glob(os.path.join(tmp, "*")) if os.path.isdir(d)]
+            # check that the extracted archive has only repo folder
+            assert len(zip_dirs) == 1
+            repo_dir = zip_dirs[0]
+
+            ls_pages = glob.glob(os.path.join(repo_dir, source_dir, "*.rst"))
+            ls_pages += glob.glob(os.path.join(repo_dir, source_dir, "**", "*.rst"))
+            for rst in ls_pages:
+                rel_rst = rst.replace(os.path.join(repo_dir, source_dir) + os.path.sep, "")
+                rel_dir = os.path.dirname(rel_rst)
+                os.makedirs(os.path.join(_PROJECT_ROOT, target_dir, rel_dir), exist_ok=True)
+                new_rst = os.path.join(_PROJECT_ROOT, target_dir, rel_rst)
+                if os.path.isfile(new_rst):
+                    logging.warning(f"Page {new_rst} already exists in the local tree so it will be skipped.")
+                    continue
+                shutil.copy(rst, new_rst)
+

 if __name__ == "__main__":
     import jsonargparse
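
For orientation, here is a minimal sketch of how the new helper might be invoked from a docs build step (for example from docs/source-pytorch/conf.py or a CI script). The enclosing class name AssistantCLI, the Lightning-AI/lightning-Habana repository, and the integrations/hpu target directory are illustrative assumptions, not details fixed by this diff:

# Hypothetical usage sketch -- class name, external repo, and target directory are
# assumptions for illustration; only pull_docs_files itself comes from this diff.
from assistant import AssistantCLI  # .actions/assistant.py as patched above

AssistantCLI.pull_docs_files(
    gh_user_repo="Lightning-AI/lightning-Habana",        # external repo holding docs/source/*.rst
    target_dir="docs/source-pytorch/integrations/hpu",   # where the pulled pages land locally
    checkout="tags/1.0.0",                                # git ref baked into the archive URL
)

Because the archive URL is assembled as archive/refs/{checkout}.zip, checkout must be a ref path such as tags/1.0.0 (the default) or heads/master; pages that already exist in the local tree are skipped with a warning rather than overwritten.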

docs/source-pytorch/accelerators/hpu_basic.rst

Lines changed: 0 additions & 109 deletions
This file was deleted.

docs/source-pytorch/accelerators/hpu_intermediate.rst

Lines changed: 0 additions & 101 deletions
This file was deleted.

docs/source-pytorch/advanced/model_parallel.rst

Lines changed: 3 additions & 3 deletions
@@ -58,11 +58,11 @@ Cutting-edge and third-party Strategies

 Cutting-edge Lightning strategies are being developed by third-parties outside of Lightning.

-If you want to try some of the latest and greatest features for model-parallel training, check out the :doc:`Colossal-AI Strategy <./third_party/colossalai>` integration.
+If you want to try some of the latest and greatest features for model-parallel training, check out the :doc:`Colossal-AI Strategy <../integrations/strategies/colossalai>` integration.

-Another integration is :doc:`Bagua Strategy <./third_party/bagua>`, deep learning training acceleration framework for PyTorch, with advanced distributed training algorithms and system optimizations.
+Another integration is :doc:`Bagua Strategy <../integrations/strategies/bagua>`, deep learning training acceleration framework for PyTorch, with advanced distributed training algorithms and system optimizations.

-For training on unreliable mixed GPUs across the internet check out the :doc:`Hivemind Strategy <./third_party/hivemind>` integration.
+For training on unreliable mixed GPUs across the internet check out the :doc:`Hivemind Strategy <../integrations/strategies/hivemind>` integration.

 ----

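
The pages re-linked above all describe third-party strategies that plug into the Trainer through its strategy argument. A minimal sketch of that pattern, assuming the external lightning-colossalai integration package is installed and exposes a ColossalAIStrategy class (the import path and Trainer arguments are assumptions, not part of this commit):

# Hypothetical sketch -- assumes the third-party lightning-colossalai package is
# installed and provides ColossalAIStrategy; not taken from this commit.
import lightning.pytorch as pl
from lightning_colossalai import ColossalAIStrategy

# A third-party strategy is an ordinary strategy object handed to the Trainer,
# the same way built-in strategies are.
trainer = pl.Trainer(accelerator="gpu", devices=4, strategy=ColossalAIStrategy())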

docs/source-pytorch/common/index.rst

Lines changed: 2 additions & 2 deletions
@@ -16,7 +16,7 @@
    Save memory with half-precision <precision>
    ../advanced/model_parallel
    Train on single or multiple GPUs <../accelerators/gpu>
-   Train on single or multiple HPUs <../accelerators/hpu>
+   Train on single or multiple HPUs <../integrations/hpu/index>
    Train on single or multiple IPUs <../accelerators/ipu>
    Train on single or multiple TPUs <../accelerators/tpu>
    Train on MPS <../accelerators/mps>
@@ -148,7 +148,7 @@ How-to Guides
 .. displayitem::
    :header: Train on single or multiple HPUs
    :description: Train models faster with HPU accelerators
-   :button_link: ../accelerators/hpu.html
+   :button_link: ../integrations/hpu/index.html
    :col_css: col-md-4
    :height: 180

docs/source-pytorch/common_usecases.rst

Lines changed: 1 addition & 1 deletion
@@ -123,7 +123,7 @@ Customize and extend Lightning for things like custom hardware or distributed st
    :header: Train on single or multiple HPUs
    :description: Train models faster with HPUs.
    :col_css: col-md-12
-   :button_link: accelerators/hpu.html
+   :button_link: integrations/hpu/index.html
    :height: 100

 .. displayitem::
