
Conversation

@narendasan
Collaborator

Description

Type of change

Please delete options that are not relevant and/or add your own.

  • New feature (non-breaking change which adds functionality)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes

@github-actions bot added the component: conversion (Issues re: Conversion stage) and component: core (Issues re: The core compiler) labels Feb 4, 2021
@narendasan
Collaborator Author

@inocsin I don't really understand the point of this change. It seems like it's casting the input Tensor based on the Tensor's already existing type, and nothing else really changes.

@github-actions bot left a comment

Code conforms to Python style guidelines

@github-actions bot left a comment

Code conforms to C++ style guidelines

@inocsin
Contributor

inocsin commented Feb 5, 2021

@inocsin I don't really understand the point of this change. It seems like it's casting the input Tensor based on the Tensor's already existing type, and nothing else really changes.

Some values in TorchScript are stored as at::kLong and at::kDouble, but TensorRT doesn't support those two types, so conversion throws an error. That's why we must convert at::kDouble and at::kLong to a TensorRT-supported data type for conversion to succeed.
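
For illustration, a minimal sketch of the kind of cast being described here, using libtorch's at::Tensor::to; the helper name is hypothetical, not the PR's actual diff:

#include <ATen/ATen.h>

// Illustrative only: map the two TensorRT-unsupported scalar types
// to supported ones. Both casts are lossy.
at::Tensor to_trt_supported(const at::Tensor& t) {
  if (t.scalar_type() == at::kDouble) {
    return t.to(at::kFloat); // float64 -> float32
  }
  if (t.scalar_type() == at::kLong) {
    return t.to(at::kInt); // int64 -> int32
  }
  return t;
}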

@narendasan
Collaborator Author

Isn't this potentially dangerous? We would be truncating values from double precision to single precision, and in this case it would be done silently.

@inocsin
Contributor

inocsin commented Feb 6, 2021

Isn't this potentially dangerous? We would be truncating values from double precision to single precision, and in this case it would be done silently.

Yeah, maybe we can throw a warning? But if we don't support the type conversion, the module can't be compiled by TensorRT.

@narendasan
Collaborator Author

Are the tensors from PyTorch in your target module actually double precision? I think it's better in that case that we catch this and fully error out, telling users to insert a type cast to floating point so they make the choice explicitly.

@inocsin
Contributor

inocsin commented Feb 10, 2021

Are the tensors from PyTorch in your target module actually double precision? I think it's better in that case that we catch this and fully error out, telling users to insert a type cast to floating point so they make the choice explicitly.

Yeah, the original values are double precision. In my situation, some users may not want to convert the precision explicitly, so I think we should do this for them.

@narendasan
Collaborator Author

We can add a setting to the CompileSpec that is something like:

CompileSpec {
    ...
    bool truncate_long_and_double = false;
}

So while by default you will error out, if you explicitly opt in, weights will be truncated for you. Then we can tell whether, say, an accuracy issue is the result of truncation or of something else.
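
A rough sketch of how that gate might behave at conversion time (hypothetical helper under the assumptions above, not the merged implementation):

#include <ATen/ATen.h>
#include <stdexcept>

// Hypothetical: error out by default; truncate only on explicit opt-in.
at::Tensor gate_unsupported_type(const at::Tensor& t, bool truncate_long_and_double) {
  bool unsupported = t.scalar_type() == at::kDouble || t.scalar_type() == at::kLong;
  if (!unsupported) {
    return t;
  }
  if (!truncate_long_and_double) {
    throw std::runtime_error(
        "Tensor type is unsupported by TensorRT; insert an explicit cast "
        "to float/int or set truncate_long_and_double = true");
  }
  return t.scalar_type() == at::kDouble ? t.to(at::kFloat) : t.to(at::kInt);
}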

@narendasan
Collaborator Author

I can PR the API to do this, plus the checks, into your branch if you think that would be sufficient.

@narendasan
Collaborator Author

@peri044 says we should also put a warning in, so that even if someone opts in, they get a message if truncation actually occurs.
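
Layered onto the sketch above, that might look like the following (plain stderr here; the project's actual logging utilities may differ):

#include <ATen/ATen.h>
#include <iostream>

// Hypothetical: warn only when a truncation actually happens, so an
// opted-in user still sees which tensors lost precision.
at::Tensor truncate_with_warning(const at::Tensor& t) {
  if (t.scalar_type() == at::kDouble) {
    std::cerr << "WARNING: truncating Double tensor to Float" << std::endl;
    return t.to(at::kFloat);
  }
  if (t.scalar_type() == at::kLong) {
    std::cerr << "WARNING: truncating Long tensor to Int" << std::endl;
    return t.to(at::kInt);
  }
  return t; // no truncation, no warning
}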

@narendasan narendasan closed this Mar 29, 2021
@narendasan narendasan deleted the double_long_ival branch February 24, 2022 00:29

Labels

  • component: conversion (Issues re: Conversion stage)
  • component: core (Issues re: The core compiler)
