Summary:
This removes the resetting of the grad attribute to zero, which was causing the warnings mentioned in #491 and #421. Based on the torch [documentation](https://pytorch.org/docs/stable/autograd.html#torch.autograd.grad), resetting grad is only needed when using torch.autograd.backward, which accumulates results into the grad attribute of leaf nodes. Since Captum only uses torch.autograd.grad (with only_inputs always set to True), the gradients it computes are never accumulated into grad attributes, so resetting the attribute is unnecessary.
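As a minimal sketch of the distinction (plain PyTorch only, not code from this PR), torch.autograd.grad returns gradients directly and leaves the grad attribute untouched, whereas backward accumulates into it:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = (x ** 2).sum()

# torch.autograd.grad returns the gradients directly and does not
# write into x.grad, so there is nothing to reset afterwards.
(g,) = torch.autograd.grad(y, x)
assert x.grad is None

# torch.autograd.backward (or y.backward()) accumulates into x.grad,
# which is the case where zeroing the attribute would matter.
y2 = (x ** 2).sum()
y2.backward()
assert x.grad is not None
```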
This also adds a test confirming that the grad attribute is not altered when gradients are computed through Saliency.
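A hedged sketch of what such a check might look like (the toy model and tensor shapes are illustrative, not taken from the actual test in this PR):

```python
import torch
import torch.nn as nn
from captum.attr import Saliency

model = nn.Linear(4, 2)  # illustrative toy model
inp = torch.randn(1, 4, requires_grad=True)

saliency = Saliency(model)
attributions = saliency.attribute(inp, target=0)

# Since Captum computes gradients via torch.autograd.grad and no longer
# resets the grad attribute, the input's .grad should remain unset.
assert inp.grad is None
```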
Pull Request resolved: #597
Reviewed By: bilalsal
Differential Revision: D26079970
Pulled By: vivekmig
fbshipit-source-id: f7ccee02a17f66ee75e2176f1b328672b057dbfa