examples/int8/ptq/README.md (44 additions & 7 deletions)
This is a short example application that shows how to use TRTorch to perform post training quantization.
## Prerequisites
1. Download CIFAR10 Dataset Binary version ([https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz](https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz)); see the sketch after this list
2. Train a network on CIFAR10 (see `training/` for a VGG16 recipe)
3. Export the model to TorchScript
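
For step 1, a minimal download sketch (run from wherever you want the dataset to live):

```shell
# Fetch the binary version of CIFAR10 and unpack it; this creates cifar-10-batches-bin/
wget https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz
tar -xvzf cifar-10-binary.tar.gz
```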
## Compilation using Bazel
``` shell
bazel run //cpp/ptq --compilation_mode=opt <path-to-module> <path-to-cifar10>
```
If you want insight into what is going on under the hood or need debug symbols:
```shell
bazel run //cpp/ptq --compilation_mode=dbg <path-to-module> <path-to-cifar10>
```
This will build a binary named `ptq` in the `bazel-out/k8-<opt|dbg>/bin/cpp/int8/ptq/` directory. Optionally, you can add this directory to your `$PATH` environment variable to run `ptq` from anywhere on your system.
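
A minimal sketch of the optional `$PATH` setup (assuming an `opt` build and that you are in the TRTorch workspace root; adjust the directory if your binary lands elsewhere):

```shell
# Make the freshly built ptq binary callable from anywhere in this shell session
export PATH="$PATH:$(pwd)/bazel-out/k8-opt/bin/cpp/int8/ptq"
ptq <path-to-module> <path-to-cifar10>
```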
## Compilation using Makefile
1) Download releases of <a href="https://pytorch.org">LibTorch</a>, <a href="https://github.com/NVIDIA/TRTorch/releases">TRTorch</a> and <a href="https://developer.nvidia.com/nvidia-tensorrt-download">TensorRT</a> and unpack them in the `deps` directory. Ensure CUDA is installed at `/usr/local/cuda`; if not, you need to modify the CUDA include and lib paths in the Makefile.
```sh
cd examples/trtorchrt_example/deps
# Download latest TRTorch release tar file (libtrtorch.tar.gz) from https://github.com/NVIDIA/TRTorch/releases
```
We import the header files `cifar10.h` and `benchmark.h` from `ROOT_DIR`. `ROOT_DIR` should point to the path where TRTorch is located (`<path_to_TRTORCH>`).
By default it is set to `../../../`. If your TRTorch directory structure is different, please set `ROOT_DIR` accordingly.
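
If you need to point the build at a different TRTorch location, a sketch of overriding `ROOT_DIR` from the command line (the path is a placeholder):

```sh
# Command-line variable assignments override the default ROOT_DIR in the Makefile
make ROOT_DIR=<path_to_TRTORCH>
```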
examples/int8/qat/README.md

```shell
bazel run //cpp/qat --compilation_mode=opt <path-to-module> <path-to-cifar10>
```
If you want insight into what is going on under the hood or need debug symbols:
```shell
bazel run //cpp/qat --compilation_mode=dbg <path-to-module> <path-to-cifar10>
```
This will build a binary named `qat` in the `bazel-out/k8-<opt|dbg>/bin/cpp/int8/qat/` directory. Optionally, you can add this directory to your `$PATH` environment variable to run `qat` from anywhere on your system.
## Compilation using Makefile
1) Download releases of <a href="https://pytorch.org">LibTorch</a>, <a href="https://github.com/NVIDIA/TRTorch/releases">TRTorch</a> and <a href="https://developer.nvidia.com/nvidia-tensorrt-download">TensorRT</a> and unpack them in the `deps` directory. Ensure CUDA is installed at `/usr/local/cuda`; if not, you need to modify the CUDA include and lib paths in the Makefile.
```sh
cd examples/trtorchrt_example/deps
# Download latest TRTorch release tar file (libtrtorch.tar.gz) from https://github.com/NVIDIA/TRTorch/releases
```
We import the header files `cifar10.h` and `benchmark.h` from `ROOT_DIR`. `ROOT_DIR` should point to the path where TRTorch is located (`<path_to_TRTORCH>`).
By default it is set to `../../../`. If your TRTorch directory structure is different, please set `ROOT_DIR` accordingly.