Some of torchvision transforms have very limiting argument checks #5646

Closed
FeryET opened this issue Mar 18, 2022 · 2 comments · Fixed by #5656
FeryET commented Mar 18, 2022

🐛 Describe the bug

I am trying to write a PyTorch training script that is configurable with Hydra. Hydra uses OmegaConf config objects, such as ListConfig and DictConfig. One of the benefits of using these kinds of configuration is that I can easily create and share templates, and they are also very readable.

# main.py
from omegaconf import DictConfig, OmegaConf
import hydra
from hydra.utils import get_class, instantiate


@hydra.main(config_path="conf/", config_name="config")
def train(cfg: DictConfig):
    OmegaConf.register_new_resolver(name="get_cls", resolver=lambda cls: get_class(cls))
    instantiate(cfg)


if __name__ == "__main__":
    train()
# conf/config.yaml
transforms:
    _target_: torchvision.transforms.ColorJitter
    brightness: [0.5, 1.5]

Running python -m main with this configuration throws the following error:

TypeError: Error instantiating 'torchvision.transforms.transforms.ColorJitter' : brightness should be a single number or a list/tuple with length 2.

When digging deeper, I came to this line:

elif isinstance(value, (tuple, list)) and len(value) == 2:

The problem with an OmegaConf ListConfig instance in this case is that it is a sequence, but it is neither a list nor a tuple; it is a MutableSequence. Given that Hydra is also rapidly developed by a Facebook team, I think it is very much in PyTorch's interest to facilitate such use cases with simple fixes.
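To illustrate the mismatch without requiring omegaconf, here is a minimal MutableSequence subclass standing in for ListConfig (FakeListConfig is a hypothetical name, used for illustration only). It fails the (tuple, list) check that torchvision currently performs, even though it passes the more general collections.abc.Sequence check:

```python
from collections.abc import MutableSequence, Sequence

class FakeListConfig(MutableSequence):
    """Hypothetical stand-in for omegaconf.ListConfig (illustration only)."""
    def __init__(self, items):
        self._items = list(items)
    def __getitem__(self, index):
        return self._items[index]
    def __setitem__(self, index, value):
        self._items[index] = value
    def __delitem__(self, index):
        del self._items[index]
    def __len__(self):
        return len(self._items)
    def insert(self, index, value):
        self._items.insert(index, value)

value = FakeListConfig([0.5, 1.5])

# torchvision's current check rejects this value...
print(isinstance(value, (tuple, list)))              # False
# ...even though it is a perfectly good sequence of length 2.
print(isinstance(value, Sequence) and len(value) == 2)  # True
```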

A simple fix that would make life easier is to check against abstract base classes such as collections.abc.Sequence, which is also what Python developers usually recommend for this kind of type checking.

Versions

Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17

Python version: 3.7.11 (default, Jul 27 2021, 14:32:16)  [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.13.0-27-generic-x86_64-with-debian-bullseye-sid
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] pytorch-lightning==1.5.10
[pip3] torch==1.11.0
[pip3] torchmetrics==0.7.2
[pip3] torchvision==0.12.0
[conda] blas                      1.0                         mkl  
[conda] cpuonly                   2.0                           0    pytorch
[conda] ffmpeg                    4.3                  hf484d3e_0    pytorch
[conda] mkl                       2021.4.0           h06a4308_640  
[conda] mkl-service               2.4.0            py37h7f8727e_0  
[conda] mkl_fft                   1.3.1            py37hd3c417c_0  
[conda] mkl_random                1.2.2            py37h51133e4_0  
[conda] mypy-extensions           0.4.3                    pypi_0    pypi
[conda] numpy                     1.21.2           py37h20f2e39_0  
[conda] numpy-base                1.21.2           py37h79a1101_0  
[conda] pytorch                   1.11.0              py3.7_cpu_0    pytorch
[conda] pytorch-lightning         1.5.10                   pypi_0    pypi
[conda] pytorch-mutex             1.0                         cpu    pytorch
[conda] torchmetrics              0.7.2                    pypi_0    pypi
[conda] torchvision               0.12.0                 py37_cpu    pytorch

cc @vfdev-5 @datumbox

FeryET commented Mar 18, 2022

BTW if color jitter needs a revamp per #5528 I can do this + that if needed.

pmeier commented Mar 21, 2022

I agree this is a problem in the current transforms, but I'm afraid we can't solve it there. The transforms need to be fully @torch.jit.script'able, and only a subset of the Python language is supported. In particular, generic Sequence's and Mapping's are not supported.

On the bright side, as you already mentioned, we are currently revamping the transforms module. Due to some other limitations, we are going to drop JIT scriptability for the transforms and keep it only for the low-level kernels. Thus, we can relax these input checks; in fact, there is #5626 to track that. For now, we are holding off on community contributions, since we are not 100% sure how we want to handle one aspect that is also needed for ColorJitter. Still, if you are up for it, comment in #5528 and I will ping you when you can start working on it.
