
refactor: Restructured python perf benchmark tool #780

Closed
wants to merge 2 commits
2 files renamed without changes.
2 changes: 1 addition & 1 deletion examples/benchmark/BUILD → tools/cpp_benchmark/BUILD
@@ -1,7 +1,7 @@
package(default_visibility = ["//visibility:public"])

cc_binary(
name = "benchmark",
name = "cpp_benchmark",
srcs = [
"main.cpp",
"timer.h",
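
Given the rename above and standard Bazel label conventions (package path plus the `name` attribute), the binary would presumably be invoked with a label like the one in this sketch; the arguments mirror the benchmark CLI described in the README hunk below.

```sh
# Hypothetical invocation of the renamed target: the label is inferred from the
# package path (tools/cpp_benchmark) and the new cc_binary name (cpp_benchmark).
bazel run //tools/cpp_benchmark:cpp_benchmark -- [PATH TO JIT MODULE FILE] [INPUT SIZE (explicit batch)]
```
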
@@ -9,13 +9,13 @@ Run with bazel:
> Note: Make sure libtorch and TensorRT are in your LD_LIBRARY_PATH before running. If you need a location, you can `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:[WORKSPACE ROOT]/bazel-Torch-TensorRT/external/libtorch/lib:[WORKSPACE ROOT]/bazel-Torch-TensorRT/external/tensorrt/lib`

``` sh
bazel run //cpp/benchmark --cxxopt="-DNDEBUG" --cxxopt="-DJIT" --cxxopt="-DTRT" -- [PATH TO JIT MODULE FILE] [INPUT SIZE (explicit batch)]
bazel run //tools/cpp_benchmark/int8/benchmark --cxxopt="-DNDEBUG" --cxxopt="-DJIT" --cxxopt="-DTRT" -- [PATH TO JIT MODULE FILE] [INPUT SIZE (explicit batch)]
```

For example:

``` shell
bazel run //cpp/benchmark --cxxopt="-DNDEBUG" --cxxopt="-DJIT" --cxxopt="-DTRT" -- $(realpath /tests/models/resnet50.jit.pt) "(32 3 224 224)"
bazel run //tools/cpp_benchmark/int8/benchmark --cxxopt="-DNDEBUG" --cxxopt="-DJIT" --cxxopt="-DTRT" -- $(realpath /tests/models/resnet50.jit.pt) "(32 3 224 224)"
```
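
Putting the library-path note and the run command together, a complete invocation might look like the sketch below, assuming it is run from the workspace root and that `[WORKSPACE ROOT]` is replaced with the actual path on your machine.

```sh
# Make libtorch and TensorRT visible at runtime, then launch the benchmark.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:[WORKSPACE ROOT]/bazel-Torch-TensorRT/external/libtorch/lib:[WORKSPACE ROOT]/bazel-Torch-TensorRT/external/tensorrt/lib
bazel run //tools/cpp_benchmark/int8/benchmark --cxxopt="-DNDEBUG" --cxxopt="-DJIT" --cxxopt="-DTRT" -- $(realpath /tests/models/resnet50.jit.pt) "(32 3 224 224)"
```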

### Options
3 files renamed without changes.
@@ -1,4 +1,4 @@
#include "examples/int8/datasets/cifar10.h"
#include "tools/cpp_benchmark/int8/datasets/cifar10.h"
#include "torch/data/example.h"
#include "torch/torch.h"
#include "torch/types.h"
@@ -13,8 +13,8 @@ cc_binary(
],
deps = [
"//cpp:torch_tensorrt",
"//examples/int8/benchmark",
"//examples/int8/datasets:cifar10",
"//tools/cpp_benchmark/int8/benchmark",
"//tools/cpp_benchmark/int8/datasets:cifar10",
"@libtorch",
"@libtorch//:caffe2",
"@tensorrt//:nvinfer",
File renamed without changes.
@@ -13,7 +13,7 @@ LibTorch provides a `Dataloader` and `Dataset` API which streamlines preprocessing
Here is an example interface of a `torch::Dataset` class for CIFAR10:

```C++
//examples/int8/ptq/datasets/cifar10.h
//tools/cpp_benchmark/int8/ptq/datasets/cifar10.h
#pragma once

#include "torch/data/datasets/base.h"
```

@@ -120,19 +120,19 @@ This is a short example application that shows how to use Torch-TensorRT to perf
## Prerequisites

1. Download CIFAR10 Dataset Binary version ([https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz](https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz)); a download sketch follows this list
2. Train a network on CIFAR10 (see `examples/int8/training/vgg16/README.md` for a VGG16 recipe)
2. Train a network on CIFAR10 (see `examples/training/vgg16/README.md` for a VGG16 recipe)
3. Export model to torchscript
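
A minimal sketch of the download in step 1, assuming `wget` and `tar` are available and the archive is unpacked into the current directory:

```sh
# Fetch the CIFAR10 binary archive referenced above and unpack it.
wget https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz
tar -xzf cifar-10-binary.tar.gz   # extracts to cifar-10-batches-bin/
```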

## Compilation using bazel

``` shell
bazel run //examples/int8/ptq --compilation_mode=opt <path-to-module> <path-to-cifar10>
bazel run //tools/cpp_benchmark/int8/ptq --compilation_mode=opt <path-to-module> <path-to-cifar10>
```

If you want insight into what is going on under the hood, or need debug symbols:

``` shell
bazel run //examples/int8/ptq --compilation_mode=dbg <path-to-module> <path-to-cifar10>
bazel run //tools/cpp_benchmark/int8/ptq --compilation_mode=dbg <path-to-module> <path-to-cifar10>
```

This will build a binary named `ptq` in the `bazel-out/k8-<opt|dbg>/bin/cpp/int8/ptq/` directory. Optionally, you can add this directory to your `$PATH` environment variable to run `ptq` from anywhere on your system.
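
For example, one way to do that for an `opt` build is sketched below, using the output path quoted above; adjust the path if your build outputs land elsewhere.

```sh
# Put the freshly built ptq binary on PATH for the current shell session.
export PATH=$PATH:$(pwd)/bazel-out/k8-opt/bin/cpp/int8/ptq
ptq <path-to-module> <path-to-cifar10>
```
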
@@ -166,7 +166,7 @@ We import header files `cifar10.h` and `benchmark.h` from `ROOT_DIR`. `ROOT_DIR`
By default it is set to `../../../`. If your Torch-TensorRT directory structure is different, please set `ROOT_DIR` accordingly.

```sh
cd examples/int8/ptq
cd tools/cpp_benchmark/int8/ptq
# This will generate a ptq binary
make ROOT_DIR=<PATH> CUDA_VERSION=11.1
./ptq <path-to-module> <path-to-cifar10>
```
@@ -3,8 +3,8 @@
#include "torch_tensorrt/ptq.h"
#include "torch_tensorrt/torch_tensorrt.h"

#include "examples/int8/benchmark/benchmark.h"
#include "examples/int8/datasets/cifar10.h"
#include "tools/cpp_benchmark/int8/benchmark/benchmark.h"
#include "tools/cpp_benchmark//int8/datasets/cifar10.h"

#include <sys/stat.h>
#include <iostream>
@@ -13,8 +13,8 @@ cc_binary(
],
deps = [
"//cpp:torch_tensorrt",
"//examples/int8/benchmark",
"//examples/int8/datasets:cifar10",
"//tools/cpp_benchmark/int8/benchmark",
"//tools/cpp_benchmark/int8/datasets:cifar10",
"@libtorch",
"@libtorch//:caffe2",
"@tensorrt//:nvinfer",
File renamed without changes.
@@ -12,21 +12,21 @@ This is a short example application that shows how to use Torch-TensorRT to perf
## Prerequisites

1. Download CIFAR10 Dataset Binary version ([https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz](https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz))
2. Train a network on CIFAR10 and perform quantization aware training on it. Refer to `examples/int8/training/vgg16/README.md` for detailed instructions.
2. Train a network on CIFAR10 and perform quantization aware training on it. Refer to `examples/training/vgg16/README.md` for detailed instructions.
Export the QAT model to Torchscript.
3. Install NVIDIA's <a href="https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization">pytorch quantization toolkit</a>
4. TensorRT 8.0.1.6 or above
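
If the TensorRT Python bindings are installed (an assumption), the installed version can be checked quickly as sketched below; on packaged installs a `dpkg -l` or `rpm -q` query works as well.

```sh
# Print the TensorRT version; it should report 8.0.1.6 or newer.
python3 -c "import tensorrt; print(tensorrt.__version__)"
```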

## Compilation using bazel

``` shell
bazel run //examples/int8/qat --compilation_mode=opt <path-to-module> <path-to-cifar10>
bazel run //tools/cpp_benchmark/int8/qat --compilation_mode=opt <path-to-module> <path-to-cifar10>
```

If you want insight into what is going on under the hood, or need debug symbols:

``` shell
bazel run //examples/int8/qat --compilation_mode=dbg <path-to-module> <path-to-cifar10>
bazel run //tools/cpp_benchmark/int8/qat --compilation_mode=dbg <path-to-module> <path-to-cifar10>
```

This will build a binary named `qat` in the `bazel-out/k8-<opt|dbg>/bin/cpp/int8/qat/` directory. Optionally, you can add this directory to your `$PATH` environment variable to run `qat` from anywhere on your system.
@@ -60,7 +60,7 @@ We import header files `cifar10.h` and `benchmark.h` from `ROOT_DIR`. `ROOT_DIR`
By default it is set to `../../../`. If your Torch-TensorRT directory structure is different, please set `ROOT_DIR` accordingly.

```sh
cd examples/int8/qat
cd tools/cpp_benchmark/int8/qat
# This will generate a qat binary
make ROOT_DIR=<PATH> CUDA_VERSION=11.1
./qat <path-to-module> <path-to-cifar10>
```
@@ -2,8 +2,8 @@
#include "torch/torch.h"
#include "torch_tensorrt/torch_tensorrt.h"

#include "examples/int8/benchmark/benchmark.h"
#include "examples/int8/datasets/cifar10.h"
#include "tools/cpp_benchmark/int8/benchmark/benchmark.h"
#include "tools/cpp_benchmark/int8/datasets/cifar10.h"

#include <sys/stat.h>
#include <iostream>
7 files renamed without changes.