# Tests
Currently, the following tests are supported:

1. Converter level tests
2. Module level tests
3. Accuracy tests
The goal of Converter tests is to test individual converters against specific subgraphs. The current tests in `core/converters` are good examples of how to write these tests. In general, every converter should have at least one test; more may be required if the operation has switches that change the behavior of the op.
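The core pattern of such a test is: run the subgraph through the reference framework, run it through the converted engine, and compare outputs within a tolerance. The snippet below is a minimal, framework-free sketch of that pattern; the names `reference_relu` and `converted_relu` are illustrative stand-ins, not part of the Torch-TensorRT API (the real converter tests in this repo are C++ tests against specific subgraphs).

```python
import math

def reference_relu(xs):
    # Illustrative stand-in for running the original subgraph
    # in the source framework.
    return [max(0.0, x) for x in xs]

def converted_relu(xs):
    # Illustrative stand-in for executing the converted engine on the
    # same inputs; here it is just another implementation of the same op.
    return [x if x > 0.0 else 0.0 for x in xs]

def outputs_match(a, b, rtol=1e-5, atol=1e-8):
    # Element-wise comparison within tolerance: the core assertion
    # of a converter-level test.
    return len(a) == len(b) and all(
        math.isclose(x, y, rel_tol=rtol, abs_tol=atol) for x, y in zip(a, b)
    )

inputs = [-2.0, -0.5, 0.0, 1.5, 3.0]
assert outputs_match(reference_relu(inputs), converted_relu(inputs))
```

A real test would repeat this comparison for each switch or attribute combination that changes the op's behavior.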
`--jobs=4` is useful and is sometimes required to prevent too many processes from using GPU memory and causing CUDA out-of-memory issues.
Additionally, accuracy tests are supported for the Python backend using Nox. Please refer to [setup_nox.sh](../docker/setup_nox.sh) for reference.
```
# To run complete Python accuracy + API tests
nox

# To list supported sessions
nox -l
```
Note: Supported Python tests

```
* download_datasets-3
* download_models-3
* train_model-3
* finetune_model-3
* ptq_test-3
* qat_test-3
* cleanup
```
### Testing using pre-built Torch-TensorRT library
Currently, the default strategy when we run all the tests (`bazel test //tests`) is to build the testing scripts along with the full Torch-TensorRT library (`libtorchtrt.so`) from scratch. This can lead to increased testing time and might not be needed in case you already have a pre-built Torch-TensorRT library that you want to link against.