
Conversation

@inocsin (Contributor) commented Mar 19, 2021

Description


Type of change


  • New feature (non-breaking change which adds functionality):
    Support truncating long/double to int/float via an option, and add CompileSpec support.
    Closes the previous issue: support Long/Double type IValue #322

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes

@inocsin (Contributor, Author) commented Mar 19, 2021

@peri044 please review this PR

@peri044 (Collaborator) left a comment

auto tensor = ptr_.ivalue->toTensor();
if (tensor.scalar_type() == at::kLong && ctx->settings.truncate_long_and_double) {
weights = converters::Weights(ctx, tensor.toType(at::kInt));
LOG_WARNING("Truncate kLong to kInt for IValue");
Reviewer comment:

Warning: Truncating weight (constant in the graph) from kLong to kInt to indicate that only constants are affected.

LOG_WARNING("Truncate kLong to kInt for IValue");
} else if (tensor.scalar_type() == at::kDouble && ctx->settings.truncate_long_and_double) {
weights = converters::Weights(ctx, tensor.toType(at::kFloat));
LOG_WARNING("Truncate kDouble to kFloat for IValue");
Reviewer comment:

Same here as well w.r.t. the warning message.

if (isIValue()) {
auto weights = converters::Weights(ctx, ptr_.ivalue->toTensor());

auto tensor = ptr_.ivalue->toTensor();
Reviewer comment:

We should also check these cases when the setting is not enabled, to give users a hint to turn on truncation.

@narendasan narendasan mentioned this pull request Mar 19, 2021
@inocsin inocsin requested review from narendasan and peri044 March 22, 2021 05:58

auto tensor = ptr_.ivalue->toTensor();
if ((tensor.scalar_type() == at::kLong || tensor.scalar_type() == at::kDouble) && !ctx->settings.truncate_long_and_double) {
TRTORCH_CHECK(0, "Unable to freeze tensor of type kLong/kDouble into constant layer, try to compile model with truncate_long_and_double ON");
Reviewer comment:

Let's just call the types Int32, Int64, Float64, Float32 throughout the warning messages.


auto tensor = ptr_.ivalue->toTensor();
if ((tensor.scalar_type() == at::kLong || tensor.scalar_type() == at::kDouble) && !ctx->settings.truncate_long_and_double) {
TRTORCH_CHECK(0, "Unable to freeze tensor of type kLong/kDouble into constant layer, try to compile model with truncate_long_and_double ON");
Reviewer comment:

You can use TRTORCH_THROW_ERROR here
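The suggestion above is about readability: when the enclosing `if` already established the failure, an unconditional throw macro reads better than `CHECK(0, ...)`. A minimal sketch of that pattern, using hypothetical stand-in macros (these mimic the shape of the idiom, they are not the actual TRTorch macro definitions):

```cpp
#include <sstream>
#include <stdexcept>
#include <string>

// Hypothetical CHECK-style macro: throws when the condition is false.
#define MY_CHECK(cond, msg)                   \
  do {                                        \
    if (!(cond)) {                            \
      std::ostringstream ss;                  \
      ss << msg;                              \
      throw std::runtime_error(ss.str());     \
    }                                         \
  } while (0)

// Hypothetical THROW-style macro: always throws. Clearer than
// MY_CHECK(0, msg) when the surrounding branch already means "error".
#define MY_THROW_ERROR(msg) MY_CHECK(false, msg)

// Simplified stand-in for the freezing path in this PR.
bool freeze_tensor(bool is_long_or_double, bool truncate_enabled) {
  if (is_long_or_double && !truncate_enabled) {
    // Per the review: use the unconditional form instead of CHECK(0, ...).
    MY_THROW_ERROR(
        "Unable to freeze tensor of type Int64/Float64 into constant layer, "
        "try to compile model with truncate_long_and_double enabled");
  }
  return true;
}
```

The behavior is identical to `MY_CHECK(0, msg)`; the unconditional macro just states the intent directly.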

} else if (tensor.scalar_type() == at::kLong && ctx->settings.truncate_long_and_double) {
weights = converters::Weights(ctx, tensor.toType(at::kInt));
LOG_WARNING("Truncate kLong to kInt for IValue");
LOG_WARNING("Warning: Truncating weight (constant in the graph) from kLong to kInt to indicate that only constants are affected.");
Reviewer comment:

@inocsin You can remove "to indicate that only constants are affected." This was just a message for you

@inocsin inocsin requested review from narendasan and peri044 March 23, 2021 08:50
@narendasan (Collaborator) left a comment


Mostly looks good to go; one comment, and please resolve the open comments.

LOG_WARNING("Warning: Truncating weight (constant in the graph) from Int64 to Int32.");
} else if (tensor.scalar_type() == at::kDouble && ctx->settings.truncate_long_and_double) {
weights = converters::Weights(ctx, tensor.toType(at::kFloat));
LOG_WARNING("Warning: Truncating weight (constant in the graph) from Float64 to Float32.");
Reviewer comment:

You don't need to say "Warning:", the logging system will add that for you.
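Putting the review feedback together, the merged truncation logic can be sketched as a standalone decision function. The enum and function names below are illustrative stand-ins, not the actual ATen/TRTorch types:

```cpp
#include <iostream>
#include <stdexcept>

// Illustrative stand-in for the ATen scalar types involved in this PR.
enum class ScalarType { Int32, Int64, Float32, Float64 };

// Decide which type a constant weight is frozen as. Mirrors the merged
// logic: Int64 -> Int32 and Float64 -> Float32 when
// truncate_long_and_double is enabled; otherwise wide types are an error.
ScalarType freeze_type(ScalarType t, bool truncate_long_and_double) {
  if ((t == ScalarType::Int64 || t == ScalarType::Float64) &&
      !truncate_long_and_double) {
    // Hint the user toward the new compile setting instead of failing opaquely.
    throw std::runtime_error(
        "Unable to freeze tensor of type Int64/Float64 into constant layer, "
        "try to compile model with truncate_long_and_double enabled");
  }
  if (t == ScalarType::Int64) {
    std::cerr << "Truncating weight (constant in the graph) from Int64 to Int32.\n";
    return ScalarType::Int32;
  }
  if (t == ScalarType::Float64) {
    std::cerr << "Truncating weight (constant in the graph) from Float64 to Float32.\n";
    return ScalarType::Float32;
  }
  return t;  // Int32 / Float32 pass through unchanged.
}
```

Note the log messages use the plain type names (Int64, Int32, Float64, Float32) and no leading "Warning:", as requested in the review.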

@inocsin inocsin requested a review from narendasan March 25, 2021 03:09
@narendasan narendasan merged commit 8156465 into pytorch:master Mar 25, 2021

