Fix batchnorm affine false #866
Conversation
(force-pushed from 88441a2 to 621a04a)
Hi, thanks for filing this! Would it be possible for you to add a test case verifying your fix? You can add it to the converter tests. If you need help, we are happy to assist.
(force-pushed from f0e2180 to 26ed9f1)
In this case, how can I pass None?
You should be able to just set it as an inline constant in the graph instead of passing it in as an argument.
So your test graph will likely look like the sketch below, where you provide tensors for input, running_var, and running_mean as we do in other tests.
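For reference, a minimal sketch of what such a test graph might look like, with weight and bias supplied through an inline None constant (the channel count of 5 and the constant values here are illustrative, not necessarily the PR's actual test):

```cpp
// Sketch only: mirrors the style of other converter tests.
const auto graph = R"IR(
    graph(%0 : Tensor,
          %1 : Float(5, strides=[1]),
          %2 : Float(5, strides=[1])):
      %3 : NoneType = prim::Constant()
      %4 : bool = prim::Constant[value=0]()
      %5 : float = prim::Constant[value=0.1]()
      %6 : float = prim::Constant[value=1e-05]()
      %7 : Tensor = aten::batch_norm(%0, %3, %3, %1, %2, %4, %5, %6, %4)
      return (%7))IR";
// %0 = input, %1 = running_mean, %2 = running_var; weight and bias are the
// shared None constant %3, matching BatchNorm(ch, affine=False).
```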
@narendasan Test done!
Cool, changes-wise this looks good. Can you rebase? Then we will test and merge.
```cpp
// BatchNorm(ch, affine=False)
const auto graph = R"IR(
    graph(%0 : Tensor,
          %1 : NoneType = prim::Constant(),
```
Just FYI, the Nones in the graph inputs can be consolidated as a single `%1 : NoneType = prim::Constant()` in the graph, so that you don't need to pass these as arguments, but this should be fine.
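Sketched out, the suggestion amounts to the following (remaining `aten::batch_norm` arguments elided with `...`):

```
# Nones declared as graph inputs and passed in as arguments:
graph(%0 : Tensor, %1 : NoneType, %2 : NoneType,
      %3 : Float(5, strides=[1]), %4 : Float(5, strides=[1])):
  %5 : Tensor = aten::batch_norm(%0, %1, %2, %3, %4, ...)

# Consolidated into one inline constant, so no None arguments are needed:
graph(%0 : Tensor, %1 : Float(5, strides=[1]), %2 : Float(5, strides=[1])):
  %3 : NoneType = prim::Constant()
  %4 : Tensor = aten::batch_norm(%0, %3, %3, %1, %2, ...)
```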
Signed-off-by: root <[email protected]>
(force-pushed from c348842 to 56a2043)
@narendasan Rebased 👍
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
Description
Removes `strict_types` and `max_batch_size`, and fixes the batch norm converter for `affine=False`.

Fixes
Type of change
Bug fix (non-breaking change which fixes an issue): previously, a build error was caused by the missing variables `strict_types` and `max_batch_size` in the Python parts, so I removed those variables.
New feature (non-breaking change which adds functionality)
In `nn.BatchNorm2d(C, affine=False)`, gamma and beta are set to None, while with `affine=True` they are tensors of shape `C`. But in the converter, the default

```cpp
gamma = args[1].unwrapToTensor(at::full({shape}, 1, {options}));
```

follows the input tensor's shape, not the channel dimension, and that wrong shape causes the error (see the sketch below).

Checklist:
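As a rough sketch of the per-channel fallback described above (illustrative only: `input_shape`, `channels`, and `options` are assumed names, not necessarily the code merged in this PR):

```cpp
// The input is NCHW, so the channel count is dim 1 of the input shape.
auto channels = input_shape[1];

// With affine=False, aten::batch_norm receives None for weight and bias;
// fall back to per-channel ones/zeros of shape {C}, not the input shape.
auto gamma = args[1].unwrapToTensor(at::full({channels}, 1, options));
auto beta = args[2].unwrapToTensor(at::full({channels}, 0, options));
```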