Refactor the CLI #778

Merged

merged 6 commits into master from torchtrtc_cli_cleanup on Feb 24, 2022

Conversation

narendasan
Collaborator

Description

This PR refactors the CLI to make it easier to add new features in the future. Utility functions have been split out into their own files.
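
For reference, a minimal and hypothetical sketch of what the split looks like from main.cpp's side, based only on the call sites visible in the lint diffs below; the header names and everything beyond those call sites are assumptions, not the PR's actual layout.

// Hypothetical post-refactor call sites; the header names are assumed, not taken from the PR.
#include "accuracy.h"    // assumed home of torchtrtc::accuracy::almost_equal
#include "parser_util.h" // assumed home of torchtrtc::parserutil::parse_input

#include <string>
#include <vector>

#include <torch/torch.h>

#include "torch_tensorrt/torch_tensorrt.h"

namespace torchtrt = torch_tensorrt; // matches the torchtrt:: alias used in the diffs

// Input-spec parsing now lives in its own helper file instead of inline in main().
std::vector<torchtrt::Input> collect_inputs(const std::vector<std::string>& specs) {
  std::vector<torchtrt::Input> ranges;
  for (const auto& spec : specs) {
    ranges.push_back(torchtrtc::parserutil::parse_input(spec));
  }
  return ranges;
}

// Accuracy checking is factored out the same way (signature inferred from its call site).
bool outputs_match(const torch::Tensor& jit_out, const torch::Tensor& trt_out, double threshold) {
  return torchtrtc::accuracy::almost_equal(jit_out, trt_out.reshape_as(jit_out), threshold);
}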

Type of change

Please delete options that are not relevant and/or add your own.

  • Refactor

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes

@narendasan narendasan requested a review from peri044 December 20, 2021 21:04
@github-actions bot added the component: api [C++] (Issues re: C++ API) and documentation (Improvements or additions to documentation) labels on Dec 20, 2021

@github-actions bot left a comment

There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/cpp/bin/torchtrtc/main.cpp b/tmp/changes.txt
index a6560e7..edb5aab 100644
--- a/workspace/cpp/bin/torchtrtc/main.cpp
+++ b/tmp/changes.txt
@@ -53,11 +53,10 @@ int main(int argc, char** argv) {
      {"require-full-compilation"});

  args::ValueFlag<std::string> check_method_op_support(
-    parser,
-    "check-method-op-support",
-    "Check the support for end to end compilation of a specified method in the TorchScript module",
-    {"supported", "is-supported", "check-support", "check-method-op-support"}
-  );
+      parser,
+      "check-method-op-support",
+      "Check the support for end to end compilation of a specified method in the TorchScript module",
+      {"supported", "is-supported", "check-support", "check-method-op-support"});

  args::Flag disable_tf32(
      parser, "disable-tf32", "Prevent Float32 layers from using the TF32 data format", {"disable-tf32"});
@@ -198,7 +197,8 @@ int main(int argc, char** argv) {
  // Instead of compiling, just embed engine in a PyTorch module
  if (embed_engine) {
    auto device_str = args::get(device_type);
-    std::transform(device_str.begin(), device_str.end(), device_str.begin(), [](unsigned char c) { return std::tolower(c); });
+    std::transform(
+        device_str.begin(), device_str.end(), device_str.begin(), [](unsigned char c) { return std::tolower(c); });

    torchtrt::Device device;

@@ -227,7 +227,6 @@ int main(int argc, char** argv) {
    return 0;
  }

-
  std::vector<torchtrt::Input> ranges;
  for (const auto spec : args::get(input_shapes)) {
    ranges.push_back(torchtrtc::parserutil::parse_input(spec));
@@ -444,7 +443,8 @@ int main(int argc, char** argv) {
      }

      for (size_t i = 0; i < trt_results.size(); i++) {
-        if (!torchtrtc::accuracy::almost_equal(jit_results[i], trt_results[i].reshape_as(jit_results[i]), threshold_val)) {
+        if (!torchtrtc::accuracy::almost_equal(
+                jit_results[i], trt_results[i].reshape_as(jit_results[i]), threshold_val)) {
          std::ostringstream threshold_ss;
          threshold_ss << threshold_val;
          torchtrt::logging::log(
ERROR: Some files do not conform to style guidelines

@github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/setup.py
--- /workspace/tests/py/test_api.py	(original)
+++ /workspace/tests/py/test_api.py	(reformatted)
@@ -32,9 +32,9 @@
    def test_compile_script(self):
        with torch.no_grad():
            trt_mod = torchtrt.ts.compile(self.scripted_model,
-                                      inputs=[self.input],
-                                      device=torchtrt.Device(gpu_id=0),
-                                      enabled_precisions={torch.float})
+                                          inputs=[self.input],
+                                          device=torchtrt.Device(gpu_id=0),
+                                          enabled_precisions={torch.float})
            same = (trt_mod(self.input) - self.scripted_model(self.input)).abs().max()
            self.assertTrue(same < 2e-2)

@@ -49,9 +49,9 @@
    def test_compile_global_nn_mod(self):
        with torch.no_grad():
            trt_mod = torchtrt.compile(self.model,
-                                   inputs=[self.input],
-                                   device=torchtrt.Device(gpu_id=0),
-                                   enabled_precisions={torch.float})
+                                       inputs=[self.input],
+                                       device=torchtrt.Device(gpu_id=0),
+                                       enabled_precisions={torch.float})
            same = (trt_mod(self.input) - self.scripted_model(self.input)).abs().max()
            self.assertTrue(same < 2e-2)

Reformatting /workspace/tests/py/test_trt_intercompatability.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/modules/hub.py
ERROR: Some files do not conform to style guidelines

@github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/setup.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/modules/hub.py
--- /workspace/tests/py/test_api.py	(original)
+++ /workspace/tests/py/test_api.py	(reformatted)
@@ -32,9 +32,9 @@
    def test_compile_script(self):
        with torch.no_grad():
            trt_mod = torchtrt.ts.compile(self.scripted_model,
-                                      inputs=[self.input],
-                                      device=torchtrt.Device(gpu_id=0),
-                                      enabled_precisions={torch.float})
+                                          inputs=[self.input],
+                                          device=torchtrt.Device(gpu_id=0),
+                                          enabled_precisions={torch.float})
            same = (trt_mod(self.input) - self.scripted_model(self.input)).abs().max()
            self.assertTrue(same < 2e-2)

@@ -49,9 +49,9 @@
    def test_compile_global_nn_mod(self):
        with torch.no_grad():
            trt_mod = torchtrt.compile(self.model,
-                                   inputs=[self.input],
-                                   device=torchtrt.Device(gpu_id=0),
-                                   enabled_precisions={torch.float})
+                                       inputs=[self.input],
+                                       device=torchtrt.Device(gpu_id=0),
+                                       enabled_precisions={torch.float})
            same = (trt_mod(self.input) - self.scripted_model(self.input)).abs().max()
            self.assertTrue(same < 2e-2)

Reformatting /workspace/tests/py/test_trt_intercompatability.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_api.py
ERROR: Some files do not conform to style guidelines

@github-actions bot left a comment

There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/cpp/bin/torchtrtc/main.cpp b/tmp/changes.txt
index bf91386..2492de3 100644
--- a/workspace/cpp/bin/torchtrtc/main.cpp
+++ b/tmp/changes.txt
@@ -53,11 +53,10 @@ int main(int argc, char** argv) {
      {"require-full-compilation"});

  args::ValueFlag<std::string> check_method_op_support(
-    parser,
-    "check-method-op-support",
-    "Check the support for end to end compilation of a specified method in the TorchScript module",
-    {"supported", "is-supported", "check-support", "check-method-op-support"}
-  );
+      parser,
+      "check-method-op-support",
+      "Check the support for end to end compilation of a specified method in the TorchScript module",
+      {"supported", "is-supported", "check-support", "check-method-op-support"});

  args::Flag disable_tf32(
      parser, "disable-tf32", "Prevent Float32 layers from using the TF32 data format", {"disable-tf32"});
@@ -196,7 +195,8 @@ int main(int argc, char** argv) {
  // Instead of compiling, just embed engine in a PyTorch module
  if (embed_engine) {
    auto device_str = args::get(device_type);
-    std::transform(device_str.begin(), device_str.end(), device_str.begin(), [](unsigned char c) { return std::tolower(c); });
+    std::transform(
+        device_str.begin(), device_str.end(), device_str.begin(), [](unsigned char c) { return std::tolower(c); });

    torchtrt::Device device;

@@ -225,7 +225,6 @@ int main(int argc, char** argv) {
    return 0;
  }

-
  std::vector<torchtrt::Input> ranges;
  for (const auto spec : args::get(input_shapes)) {
    ranges.push_back(torchtrtc::parserutil::parse_input(spec));
@@ -438,7 +437,8 @@ int main(int argc, char** argv) {
      }

      for (size_t i = 0; i < trt_results.size(); i++) {
-        if (!torchtrtc::accuracy::almost_equal(jit_results[i], trt_results[i].reshape_as(jit_results[i]), threshold_val)) {
+        if (!torchtrtc::accuracy::almost_equal(
+                jit_results[i], trt_results[i].reshape_as(jit_results[i]), threshold_val)) {
          std::ostringstream threshold_ss;
          threshold_ss << threshold_val;
          torchtrt::logging::log(
ERROR: Some files do not conform to style guidelines

@github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/setup.py
Reformatting /workspace/tests/py/test_trt_intercompatability.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/modules/hub.py
--- /workspace/tests/py/test_api.py	(original)
+++ /workspace/tests/py/test_api.py	(reformatted)
@@ -32,9 +32,9 @@
    def test_compile_script(self):
        with torch.no_grad():
            trt_mod = torchtrt.ts.compile(self.scripted_model,
-                                      inputs=[self.input],
-                                      device=torchtrt.Device(gpu_id=0),
-                                      enabled_precisions={torch.float})
+                                          inputs=[self.input],
+                                          device=torchtrt.Device(gpu_id=0),
+                                          enabled_precisions={torch.float})
            same = (trt_mod(self.input) - self.scripted_model(self.input)).abs().max()
            self.assertTrue(same < 2e-2)

@@ -49,9 +49,9 @@
    def test_compile_global_nn_mod(self):
        with torch.no_grad():
            trt_mod = torchtrt.compile(self.model,
-                                   inputs=[self.input],
-                                   device=torchtrt.Device(gpu_id=0),
-                                   enabled_precisions={torch.float})
+                                       inputs=[self.input],
+                                       device=torchtrt.Device(gpu_id=0),
+                                       enabled_precisions={torch.float})
            same = (trt_mod(self.input) - self.scripted_model(self.input)).abs().max()
            self.assertTrue(same < 2e-2)

Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_api.py
ERROR: Some files do not conform to style guidelines

@github-actions bot left a comment

Code conforms to C++ style guidelines

enable-precision

BREAKING CHANGE: This is a minor change, but it may cause scripts
using torchtrtc to fail. We are renaming enabled-precisions to
enable-precision, since the singular form makes more sense for an
argument that can be repeated

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
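
Regarding the rename above: a minimal, hypothetical sketch of a repeatable flag built with the Taywee args library (the source of the args:: types in the lint diffs). The flag name, matcher, and surrounding program are illustrative and are not the PR's exact declaration.

#include <args.hxx>

#include <iostream>
#include <string>

int main(int argc, char** argv) {
  args::ArgumentParser parser("repeatable-flag sketch");

  // Singular name, because the flag can be passed more than once,
  // e.g. `./demo --enable-precision fp32 --enable-precision fp16`.
  args::ValueFlagList<std::string> enable_precision(
      parser, "precision", "Precision to enable for the engine (repeatable)", {"enable-precision"});

  try {
    parser.ParseCLI(argc, argv);
  } catch (const args::ParseError& e) {
    std::cerr << e.what() << std::endl << parser;
    return 1;
  }

  // args::get on a ValueFlagList yields every value that was supplied.
  for (const auto& precision : args::get(enable_precision)) {
    std::cout << "enabled precision: " << precision << std::endl;
  }
  return 0;
}
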
extend later

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
BREAKING CHANGE: This PR removes `--max-batch-size` from the CLI
as it has no real functional effect

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
@narendasan narendasan force-pushed the torchtrtc_cli_cleanup branch from 4f03982 to b663154 on February 24, 2022 01:11

@github-actions bot left a comment

Code conforms to Python style guidelines

@github-actions bot left a comment

There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/cpp/bin/torchtrtc/main.cpp b/tmp/changes.txt
index d5f21e5..5d872ca 100644
--- a/workspace/cpp/bin/torchtrtc/main.cpp
+++ b/tmp/changes.txt
@@ -239,7 +239,6 @@ int main(int argc, char** argv) {
    compile_settings.debug = true;
  }

-
  if (allow_gpu_fallback) {
    compile_settings.device.allow_gpu_fallback = true;
  }
ERROR: Some files do not conform to style guidelines

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
@narendasan narendasan force-pushed the torchtrtc_cli_cleanup branch from 8367805 to a182c0e on February 24, 2022 01:24

@github-actions bot left a comment

Code conforms to Python style guidelines

@github-actions bot left a comment

There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/cpp/bin/torchtrtc/main.cpp b/tmp/changes.txt
index da0d122..4d733f2 100644
--- a/workspace/cpp/bin/torchtrtc/main.cpp
+++ b/tmp/changes.txt
@@ -237,7 +237,6 @@ int main(int argc, char** argv) {
    compile_settings.debug = true;
  }

-
  if (allow_gpu_fallback) {
    compile_settings.device.allow_gpu_fallback = true;
  }
ERROR: Some files do not conform to style guidelines

@github-actions bot left a comment

Code conforms to Python style guidelines

@github-actions bot left a comment

Code conforms to C++ style guidelines

@narendasan narendasan dismissed peri044’s stale review February 24, 2022 01:29

Review comments are addressed

@narendasan narendasan merged commit b798c7f into master Feb 24, 2022
@narendasan narendasan deleted the torchtrtc_cli_cleanup branch February 24, 2022 01:29