Make operations in torchvision.transforms.functional_tensor differentiable w.r.t. their hyper-parameters, which is helpful for Faster AutoAugment-style search, where magnitudes are learnable parameters updated via backpropagation, while keeping backward compatibility with existing code.
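For illustration, here is a minimal sketch of what "differentiable w.r.t. the magnitude" means; `adjust_brightness_diff` is a hypothetical stand-in for a kernel in functional_tensor, not an existing torchvision API:

```python
import torch

def adjust_brightness_diff(img: torch.Tensor, factor: torch.Tensor) -> torch.Tensor:
    # Implemented purely with tensor ops, so autograd can propagate gradients
    # to the magnitude `factor` as well as to the image.
    return (img * factor).clamp(0.0, 1.0)

img = torch.rand(3, 32, 32)
magnitude = torch.nn.Parameter(torch.tensor(1.5))  # learnable hyper-parameter
out = adjust_brightness_diff(img, magnitude)
out.mean().backward()
print(magnitude.grad)  # not None: the magnitude receives a gradient
```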
Some operations are inherently non-differentiable w.r.t. their magnitudes (e.g., Posterize), which currently requires users to write their own implementations.
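For such operations, one possible workaround (in the spirit of the straight-through estimator used by Faster AutoAugment) is to use the hard output in the forward pass while routing a surrogate gradient to the magnitude in the backward pass. The sketch below only illustrates the idea; `posterize_hard` and `posterize_ste` are hypothetical names:

```python
import torch

def posterize_hard(img: torch.Tensor, bits: int) -> torch.Tensor:
    # Non-differentiable posterize: quantize pixel values to 2**bits levels.
    levels = 2 ** bits
    return torch.floor(img * levels) / levels

def posterize_ste(img: torch.Tensor, magnitude: torch.Tensor) -> torch.Tensor:
    # Straight-through estimator: the forward value equals the hard output,
    # while the backward pass treats the op as identity in `magnitude`,
    # so the learnable magnitude still receives a (surrogate) gradient.
    bits = int(magnitude.detach().round().clamp(1, 8).item())
    hard = posterize_hard(img, bits)
    return hard.detach() + magnitude - magnitude.detach()

img = torch.rand(3, 32, 32)
magnitude = torch.nn.Parameter(torch.tensor(4.0))
posterize_ste(img, magnitude).mean().backward()
print(magnitude.grad)  # surrogate gradient flows to the magnitude
```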
Motivation, pitch
The main motivation is research. Faster AutoAugment proposes to search for augmentation policies with a DARTS-like framework in which all magnitudes and weights are trainable parameters, which requires every operation to have gradients w.r.t. its magnitude. This yields a faster search strategy than state-of-the-art AutoAugment policy search algorithms.
This approach is maintained in autoalbument and, according to its documentation, has been applied in several industrial scenarios.
I think adding backward support w.r.t. magnitudes would make this more convenient and support future research as well.