Commit 8274fd9

fix: Update notebooks with new library name Torch-TensorRT
Signed-off-by: Dheeraj Peri <[email protected]>
1 parent 65ffaef commit 8274fd9

File tree

6 files changed: +82 additions, -96 deletions


notebooks/README.md

Lines changed: 8 additions & 8 deletions

@@ -1,5 +1,5 @@
 # Jupyter demo notebooks
-This folder contains demo notebooks for TRTorch.
+This folder contains demo notebooks for Torch-TensorRT.

 ## 1. Requirements

@@ -8,19 +8,19 @@ The most convenient way to run these notebooks is via a docker container, which
 First, clone the repository:

 ```
-git clone https://github.com/NVIDIA/TRTorch
+git clone https://github.com/NVIDIA/Torch-TensorRT
 ```

-Next, build the NVIDIA TRTorch container (from repo root):
+Next, build the NVIDIA Torch-TensorRT container (from repo root):

 ```
-docker build -t trtorch -f ./docker/Dockerfile.21.06 .
+docker build -t torch_tensorrt -f ./docker/Dockerfile.21.06 .
 ```

 Then launch the container with:

 ```
-docker run --runtime=nvidia -it --rm --ipc=host --net=host trtorch
+docker run --runtime=nvidia -it --rm --ipc=host --net=host torch_tensorrt
 ```

 Within the docker interactive bash session, start Jupyter with

@@ -38,14 +38,14 @@ in, for example:
 ```http://[host machine]:8888/?token=aae96ae9387cd28151868fee318c3b3581a2d794f3b25c6b```


-Within the container, the notebooks themselves are located at `/workspace/TRTorch/notebooks`.
+Within the container, the notebooks themselves are located at `/workspace/Torch-TensorRT/notebooks`.

 ## 2. Notebook list

 - [lenet-getting-started.ipynb](lenet-getting-started.ipynb): simple example on a LeNet network.
-- [ssd-object-detection-demo.ipynb](ssd-object-detection-demo.ipynb): demo for compiling a pretrained SSD model using TRTorch.
+- [ssd-object-detection-demo.ipynb](ssd-object-detection-demo.ipynb): demo for compiling a pretrained SSD model using Torch-TensorRT.
 - [Resnet50-example.ipynb](Resnet50-example.ipynb): demo on the ResNet-50 network.
-- [vgg-qat.ipynb](vgg-qat.ipynb): Quantization Aware Trained models in INT8 using TRTorch
+- [vgg-qat.ipynb](vgg-qat.ipynb): Quantization Aware Trained models in INT8 using Torch-TensorRT

 ```python
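
Not part of the commit itself, but a quick way to confirm the rename took effect inside the container built above is to import the package under its new name. The sketch below is a minimal check, assuming the `torch_tensorrt` wheel is installed in the image built from `Dockerfile.21.06` and a GPU is visible to the container; the `__version__` attribute is read defensively since it is not shown in this diff.

```python
# Minimal import check for the renamed package (run inside the container).
import torch
import torch_tensorrt  # formerly: import trtorch

print("Torch-TensorRT:", getattr(torch_tensorrt, "__version__", "version attribute not found"))
print("CUDA available:", torch.cuda.is_available())
```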

notebooks/Resnet50-example.ipynb

Lines changed: 18 additions & 25 deletions

@@ -28,7 +28,7 @@
 "source": [
 "<img src=\"http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png\" style=\"width: 90px; float: right;\">\n",
 "\n",
-"# TRTorch Getting Started - ResNet 50"
+"# Torch-TensorRT Getting Started - ResNet 50"
 ]
 },
 {
@@ -41,7 +41,7 @@
 "\n",
 "When deploying on NVIDIA GPUs TensorRT, NVIDIA's Deep Learning Optimization SDK and Runtime is able to take models from any major framework and specifically tune them to perform better on specific target hardware in the NVIDIA family be it an A100, TITAN V, Jetson Xavier or NVIDIA's Deep Learning Accelerator. TensorRT performs a couple sets of optimizations to achieve this. TensorRT fuses layers and tensors in the model graph, it then uses a large kernel library to select implementations that perform best on the target GPU. TensorRT also has strong support for reduced operating precision execution which allows users to leverage the Tensor Cores on Volta and newer GPUs as well as reducing memory and computation footprints on device.\n",
 "\n",
-"TRTorch is a compiler that uses TensorRT to optimize TorchScript code, compiling standard TorchScript modules into ones that internally run with TensorRT optimizations. This enables you to continue to remain in the PyTorch ecosystem, using all the great features PyTorch has such as module composability, its flexible tensor implementation, data loaders and more. TRTorch is available to use with both PyTorch and LibTorch."
+"Torch-TensorRT is a compiler that uses TensorRT to optimize TorchScript code, compiling standard TorchScript modules into ones that internally run with TensorRT optimizations. This enables you to continue to remain in the PyTorch ecosystem, using all the great features PyTorch has such as module composability, its flexible tensor implementation, data loaders and more. Torch-TensorRT is available to use with both PyTorch and LibTorch."
 ]
 },
 {
@@ -50,13 +50,13 @@
 "source": [
 "### Learning objectives\n",
 "\n",
-"This notebook demonstrates the steps for compiling a TorchScript module with TRTorch on a pretrained ResNet-50 network, and running it to test the speedup obtained.\n",
+"This notebook demonstrates the steps for compiling a TorchScript module with Torch-TensorRT on a pretrained ResNet-50 network, and running it to test the speedup obtained.\n",
 "\n",
 "## Content\n",
 "1. [Requirements](#1)\n",
 "1. [ResNet-50 Overview](#2)\n",
 "1. [Creating TorchScript modules](#3)\n",
-"1. [Compiling with TRTorch](#4)\n",
+"1. [Compiling with Torch-TensorRT](#4)\n",
 "1. [Conclusion](#5)"
 ]
 },
@@ -375,7 +375,7 @@
 }
 ],
 "source": [
-"# Model benchmark without TRTorch/TensorRT\n",
+"# Model benchmark without Torch-TensorRT\n",
 "model = resnet50_model.eval().to(\"cuda\")\n",
 "benchmark(model, input_shape=(128, 3, 224, 224), nruns=100)"
 ]
@@ -387,12 +387,12 @@
 "<a id=\"3\"></a>\n",
 "## 3. Creating TorchScript modules\n",
 "\n",
-"To compile with TRTorch, the model must first be in **TorchScript**. TorchScript is a programming language included in PyTorch which removes the Python dependency normal PyTorch models have. This conversion is done via a JIT compiler which given a PyTorch Module will generate an equivalent TorchScript Module. There are two paths that can be used to generate TorchScript: **Tracing** and **Scripting**. \n",
+"To compile with Torch-TensorRT, the model must first be in **TorchScript**. TorchScript is a programming language included in PyTorch which removes the Python dependency normal PyTorch models have. This conversion is done via a JIT compiler which given a PyTorch Module will generate an equivalent TorchScript Module. There are two paths that can be used to generate TorchScript: **Tracing** and **Scripting**. \n",
 "\n",
 "- Tracing follows execution of PyTorch generating ops in TorchScript corresponding to what it sees. \n",
 "- Scripting does an analysis of the Python code and generates TorchScript, this allows the resulting graph to include control flow which tracing cannot do. \n",
 "\n",
-"Tracing is more likely to compile successfully with TRTorch due to simplicity (though both systems are supported). We start with an example of the traced model in TorchScript."
+"Tracing is more likely to compile successfully with Torch-TensorRT due to simplicity (though both systems are supported). We start with an example of the traced model in TorchScript."
 ]
 },
 {
@@ -470,7 +470,7 @@
 "metadata": {},
 "source": [
 "<a id=\"4\"></a>\n",
-"## 4. Compiling with TRTorch"
+"## 4. Compiling with Torch-TensorRT"
 ]
 },
 {
@@ -479,7 +479,7 @@
 "source": [
 "TorchScript modules behave just like normal PyTorch modules and are intercompatible. From TorchScript we can now compile a TensorRT based module. This module will still be implemented in TorchScript but all the computation will be done in TensorRT.\n",
 "\n",
-"As mentioned earlier, we start with an example of TRTorch compilation with the traced model.\n",
+"As mentioned earlier, we start with an example of Torch-TensorRT compilation with the traced model.\n",
 "\n",
 "Note that we show benchmarking results of two precisions: FP32 (single precision) and FP16 (half precision)."
 ]
@@ -497,12 +497,12 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import trtorch\n",
+"import torch_tensorrt\n",
 "\n",
 "# The compiled module will have precision as specified by \"op_precision\".\n",
 "# Here, it will have FP16 precision.\n",
-"trt_model_fp32 = trtorch.compile(traced_model, {\n",
-" \"inputs\": [trtorch.Input((128, 3, 224, 224))],\n",
+"trt_model_fp32 = torch_tensorrt.compile(traced_model, {\n",
+" \"inputs\": [torch_tensorrt.Input((128, 3, 224, 224))],\n",
 " \"enabled_precisions\": {torch.float32}, # Run with FP32\n",
 " \"workspace_size\": 1 << 20\n",
 "})\n",
@@ -554,12 +554,12 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import trtorch\n",
+"import torch_tensorrt\n",
 "\n",
 "# The compiled module will have precision as specified by \"op_precision\".\n",
 "# Here, it will have FP16 precision.\n",
-"trt_model = trtorch.compile(traced_model, {\n",
-" \"inputs\": [trtorch.Input((128, 3, 224, 224))],\n",
+"trt_model = torch_tensorrt.compile(traced_model, {\n",
+" \"inputs\": [torch_tensorrt.Input((128, 3, 224, 224))],\n",
 " \"enabled_precisions\": {torch.float, torch.half}, # Run with FP16\n",
 " \"workspace_size\": 1 << 20\n",
 "})\n"
@@ -604,18 +604,11 @@
 "<a id=\"5\"></a>\n",
 "## 5. Conclusion\n",
 "\n",
-"In this notebook, we have walked through the complete process of compiling TorchScript models with TRTorch for ResNet-50 model and test the performance impact of the optimization. With TRTorch, we observe a speedup of **1.4X** with FP32, and **3.0X** with FP16.\n",
+"In this notebook, we have walked through the complete process of compiling TorchScript models with Torch-TensorRT for ResNet-50 model and test the performance impact of the optimization. With Torch-TensorRT, we observe a speedup of **1.4X** with FP32, and **3.0X** with FP16.\n",
 "\n",
 "### What's next\n",
-"Now it's time to try TRTorch on your own model. Fill out issues at https://github.com/NVIDIA/TRTorch. Your involvement will help future development of TRTorch.\n"
+"Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
 ]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": []
 }
 ],
 "metadata": {
@@ -634,7 +627,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.8.10"
+"version": "3.6.13"
 }
 },
 "nbformat": 4,

notebooks/WORKSPACE.notebook

File mode changed: 100755 → 100644
Lines changed: 3 additions & 4 deletions

@@ -1,4 +1,4 @@
-workspace(name = "TRTorch")
+workspace(name = "Torch-TensorRT")

 load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
 load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
@@ -85,11 +85,11 @@ new_local_repository(
 # Testing Dependencies (optional - comment out on aarch64)
 #########################################################################
 pip3_import(
-name = "trtorch_py_deps",
+name = "torch_tensorrt_py_deps",
 requirements = "//py:requirements.txt"
 )

-load("@trtorch_py_deps//:requirements.bzl", "pip_install")
+load("@torch_tensorrt_py_deps//:requirements.bzl", "pip_install")
 pip_install()

 pip3_import(
@@ -99,4 +99,3 @@ pip3_import(

 load("@py_test_deps//:requirements.bzl", "pip_install")
 pip_install()
-

notebooks/lenet-getting-started.ipynb

File mode changed: 100755 → 100644
Lines changed: 16 additions & 16 deletions

@@ -28,7 +28,7 @@
 "source": [
 "<img src=\"http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png\" style=\"width: 90px; float: right;\">\n",
 "\n",
-"# TRTorch Getting Started - LeNet"
+"# Torch-TensorRT Getting Started - LeNet"
 ]
 },
 {
@@ -41,7 +41,7 @@
 "\n",
 "When deploying on NVIDIA GPUs TensorRT, NVIDIA's Deep Learning Optimization SDK and Runtime is able to take models from any major framework and specifically tune them to perform better on specific target hardware in the NVIDIA family be it an A100, TITAN V, Jetson Xavier or NVIDIA's Deep Learning Accelerator. TensorRT performs a couple sets of optimizations to achieve this. TensorRT fuses layers and tensors in the model graph, it then uses a large kernel library to select implementations that perform best on the target GPU. TensorRT also has strong support for reduced operating precision execution which allows users to leverage the Tensor Cores on Volta and newer GPUs as well as reducing memory and computation footprints on device.\n",
 "\n",
-"TRTorch is a compiler that uses TensorRT to optimize TorchScript code, compiling standard TorchScript modules into ones that internally run with TensorRT optimizations. This enables you to continue to remain in the PyTorch ecosystem, using all the great features PyTorch has such as module composability, its flexible tensor implementation, data loaders and more. TRTorch is available to use with both PyTorch and LibTorch."
+"Torch-TensorRT is a compiler that uses TensorRT to optimize TorchScript code, compiling standard TorchScript modules into ones that internally run with TensorRT optimizations. This enables you to continue to remain in the PyTorch ecosystem, using all the great features PyTorch has such as module composability, its flexible tensor implementation, data loaders and more. Torch-TensorRT is available to use with both PyTorch and LibTorch."
 ]
 },
 {
@@ -50,12 +50,12 @@
 "source": [
 "### Learning objectives\n",
 "\n",
-"This notebook demonstrates the steps for compiling a TorchScript module with TRTorch on a simple LeNet network. \n",
+"This notebook demonstrates the steps for compiling a TorchScript module with Torch-TensorRT on a simple LeNet network. \n",
 "\n",
 "## Content\n",
 "1. [Requirements](#1)\n",
 "1. [Creating TorchScript modules](#2)\n",
-"1. [Compiling with TRTorch](#3)"
+"1. [Compiling with Torch-TensorRT](#3)"
 ]
 },
 {
@@ -423,7 +423,7 @@
 "metadata": {},
 "source": [
 "<a id=\"3\"></a>\n",
-"## 3. Compiling with TRTorch"
+"## 3. Compiling with Torch-TensorRT"
 ]
 },
 {
@@ -432,7 +432,7 @@
 "source": [
 "### TorchScript traced model\n",
 "\n",
-"First, we compile the TorchScript traced model with TRTorch. Notice the performance impact."
+"First, we compile the TorchScript traced model with Torch-TensorRT. Notice the performance impact."
 ]
 },
 {
@@ -441,11 +441,11 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import trtorch\n",
+"import torch_tensorrt\n",
 "\n",
 "# We use a batch-size of 1024, and half precision\n",
 "compile_settings = {\n",
-" \"inputs\": [trtorch.Input(\n",
+" \"inputs\": [torch_tensorrt.Input(\n",
 " min_shape=[1024, 1, 32, 32],\n",
 " opt_shape=[1024, 1, 33, 33],\n",
 " max_shape=[1024, 1, 34, 34],\n",
@@ -454,7 +454,7 @@
 " \"enabled_precisions\": {torch.float, torch.half} # Run with FP16\n",
 "}\n",
 "\n",
-"trt_ts_module = trtorch.compile(traced_model, compile_settings)\n",
+"trt_ts_module = torch_tensorrt.compile(traced_model, compile_settings)\n",
 "\n",
 "input_data = torch.randn((1024, 1, 32, 32))\n",
 "input_data = input_data.half().to(\"cuda\")\n",
@@ -501,7 +501,7 @@
 "source": [
 "### TorchScript script model\n",
 "\n",
-"Next, we compile the TorchScript script model with TRTorch. Notice the performance impact."
+"Next, we compile the TorchScript script model with Torch-TensorRT. Notice the performance impact."
 ]
 },
 {
@@ -510,11 +510,11 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import trtorch\n",
+"import torch_tensorrt\n",
 "\n",
 "# We use a batch-size of 1024, and half precision\n",
 "compile_settings = {\n",
-" \"inputs\": [trtorch.Input(\n",
+" \"inputs\": [torch_tensorrt.Input(\n",
 " min_shape=[1024, 1, 32, 32],\n",
 " opt_shape=[1024, 1, 33, 33],\n",
 " max_shape=[1024, 1, 34, 34],\n",
@@ -523,7 +523,7 @@
 " \"enabled_precisions\": {torch.float, torch.half} # Run with FP16\n",
 "}\n",
 "\n",
-"trt_script_module = trtorch.compile(script_model, compile_settings)\n",
+"trt_script_module = torch_tensorrt.compile(script_model, compile_settings)\n",
 "\n",
 "input_data = torch.randn((1024, 1, 32, 32))\n",
 "input_data = input_data.half().to(\"cuda\")\n",
@@ -570,10 +570,10 @@
 "source": [
 "## Conclusion\n",
 "\n",
-"In this notebook, we have walked through the complete process of compiling TorchScript models with TRTorch and test the performance impact of the optimization.\n",
+"In this notebook, we have walked through the complete process of compiling TorchScript models with Torch-TensorRT and test the performance impact of the optimization.\n",
 "\n",
 "### What's next\n",
-"Now it's time to try TRTorch on your own model. Fill out issues at https://github.com/NVIDIA/TRTorch. Your involvement will help future development of TRTorch.\n"
+"Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
 ]
 }
 ],
@@ -593,7 +593,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.8.10"
+"version": "3.6.13"
 }
 },
 "nbformat": 4,
