[BUG] EfficientFormer TypeError: expected TensorOptions #1878


Closed
NUS-Tim opened this issue Jul 22, 2023 · 1 comment


NUS-Tim commented Jul 22, 2023

Describe the Bug
A TypeError occurs when instantiating EfficientFormer; the other models I tested work fine.

To Reproduce
Run the code below in the training pipeline:

import timm
import torch

class EfficientFormer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.model = timm.create_model('efficientformer_l1.snap_dist_in1k', num_classes=2)

    def forward(self, x):
        # Freeze everything except the classifier head
        for name, param in self.model.named_parameters():
            if 'head' not in name:
                param.requires_grad = False
            print(name, param.requires_grad)
        return self.model(x)

Observed Behavior

C:\Users\e0575844\Anaconda3\envs\ET\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\TensorShape.cpp:3191.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Traceback (most recent call last):
  File "C:\Users\A\Desktop\CCCC\train.py", line 116, in <module>
    main()
  File "C:\Users\A\Desktop\CCCC\train.py", line 112, in main
    train(**vars(args))
  File "C:\Users\A\Desktop\CCCC\train.py", line 14, in train
    model = model_sel(model, device)
  File "C:\Users\A\Desktop\CCCC\utils\selection.py", line 35, in model_sel
    model = model_dict[model_name]()
  File "C:\Users\A\Desktop\CCCC\utils\models.py", line 352, in __init__
    self.model = timm.create_model('efficientformer_l1.snap_dist_in1k', num_classes=2)
  File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\_factory.py", line 114, in create_model
    model = create_fn(
  File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\efficientformer.py", line 551, in efficientformer_l1
    return _create_efficientformer('efficientformer_l1', pretrained=pretrained, **dict(model_args, **kwargs))
  File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\efficientformer.py", line 537, in _create_efficientformer
    model = build_model_with_cfg(
  File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\_builder.py", line 381, in build_model_with_cfg
    model = model_cls(**kwargs)
  File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\efficientformer.py", line 389, in __init__
    stage = EfficientFormerStage(
  File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\efficientformer.py", line 320, in __init__
    MetaBlock1d(
  File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\efficientformer.py", line 220, in __init__
    self.token_mixer = Attention(dim)
  File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\efficientformer.py", line 70, in __init__
    self.register_buffer('attention_bias_idxs', torch.LongTensor(rel_pos))
TypeError: expected TensorOptions(dtype=__int64, device=cpu, layout=Strided, requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)) (got TensorOptions(dtype=__int64, device=cuda:0, layout=Strided, requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
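The mismatch in the error (a CPU-only constructor handed a cuda:0 tensor) can be reproduced on CPU conceptually. A minimal sketch, assuming rel_pos is a stand-in for timm's relative-position index tensor; the legacy torch.LongTensor(...) constructor always targets the default tensor type, so if the training script switches the default tensor type to CUDA (which would explain the cuda:0 device in the error), the constructor fails, while .to(torch.long) preserves the tensor's device:

```python
import torch

# Stand-in for timm's relative-position indices (hypothetical values)
rel_pos = torch.arange(6).reshape(2, 3)

# Legacy constructor: builds a tensor of the *default* tensor type, and
# raises a TypeError when the input lives on a different device.
cpu_idx = torch.LongTensor(rel_pos)

# Device-preserving alternative: converts dtype, keeps rel_pos's device.
safe_idx = rel_pos.to(torch.long)

print(cpu_idx.dtype, safe_idx.dtype)
```

On CPU both paths succeed and produce identical int64 tensors; only the second survives a CUDA default tensor type.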

Desktop

  • Windows 10
  • timm==0.9.2
  • PyTorch==1.13.1+cu116, cuDNN==8302
@NUS-Tim NUS-Tim added the bug Something isn't working label Jul 22, 2023
rwightman (Collaborator) commented

@NUS-Tim that's weird; it's tested on Linux all the time, and I do periodically check Windows.

Try changing line 70 to self.register_buffer('attention_bias_idxs', rel_pos.to(torch.long)) and see if the error is different?
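To illustrate the suggested change in isolation, here is a toy stand-in for the buffer registration in timm's Attention module (the index construction is only a rough approximation of what timm computes, and the class name is hypothetical); the point is that register_buffer receives a device-preserving rel_pos.to(torch.long) instead of torch.LongTensor(rel_pos):

```python
import torch
import torch.nn as nn

class AttentionBiasSketch(nn.Module):
    """Toy stand-in for timm's Attention; shows only the buffer fix."""
    def __init__(self, resolution=2):
        super().__init__()
        # Build pairwise relative-position indices over a resolution x
        # resolution grid (approximation of timm's computation).
        pos = torch.stack(torch.meshgrid(
            torch.arange(resolution), torch.arange(resolution),
            indexing='ij')).flatten(1)
        rel_pos = (pos[..., :, None] - pos[..., None, :]).abs()
        rel_pos = rel_pos[0] * resolution + rel_pos[1]
        # Suggested change: .to(torch.long) keeps rel_pos on whatever
        # device it was created on, instead of forcing a CPU LongTensor.
        self.register_buffer('attention_bias_idxs', rel_pos.to(torch.long))

m = AttentionBiasSketch()
print(m.attention_bias_idxs.dtype)  # torch.int64
```

Passing indexing='ij' to torch.meshgrid also silences the UserWarning seen at the top of the log.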
