Commit fee6d12

Fix ViT and Resnext docs (#6150)
1 parent 5f6e22d

File tree

2 files changed: +16 additions, -15 deletions


docs/source/models/resnext.rst

Lines changed: 1 addition & 0 deletions
@@ -23,3 +23,4 @@ more details about this class.
 
     resnext50_32x4d
     resnext101_32x8d
+    resnext101_64x4d

torchvision/models/vision_transformer.py

Lines changed: 15 additions & 15 deletions
@@ -603,16 +603,16 @@ def vit_b_16(*, weights: Optional[ViT_B_16_Weights] = None, progress: bool = Tru
     `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale <https://arxiv.org/abs/2010.11929>`_.
 
     Args:
-        weights (:class:`~torchvision.models.vision_transformer.ViT_B_16_Weights`, optional): The pretrained
-            weights to use. See :class:`~torchvision.models.vision_transformer.ViT_B_16_Weights`
+        weights (:class:`~torchvision.models.ViT_B_16_Weights`, optional): The pretrained
+            weights to use. See :class:`~torchvision.models.ViT_B_16_Weights`
             below for more details and possible values. By default, no pre-trained weights are used.
         progress (bool, optional): If True, displays a progress bar of the download to stderr. Default is True.
         **kwargs: parameters passed to the ``torchvision.models.vision_transformer.VisionTransformer``
             base class. Please refer to the `source code
             <https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py>`_
             for more details about this class.
 
-    .. autoclass:: torchvision.models.vision_transformer.ViT_B_16_Weights
+    .. autoclass:: torchvision.models.ViT_B_16_Weights
         :members:
     """
     weights = ViT_B_16_Weights.verify(weights)
@@ -636,16 +636,16 @@ def vit_b_32(*, weights: Optional[ViT_B_32_Weights] = None, progress: bool = Tru
     `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale <https://arxiv.org/abs/2010.11929>`_.
 
     Args:
-        weights (:class:`~torchvision.models.vision_transformer.ViT_B_32_Weights`, optional): The pretrained
-            weights to use. See :class:`~torchvision.models.vision_transformer.ViT_B_32_Weights`
+        weights (:class:`~torchvision.models.ViT_B_32_Weights`, optional): The pretrained
+            weights to use. See :class:`~torchvision.models.ViT_B_32_Weights`
             below for more details and possible values. By default, no pre-trained weights are used.
         progress (bool, optional): If True, displays a progress bar of the download to stderr. Default is True.
         **kwargs: parameters passed to the ``torchvision.models.vision_transformer.VisionTransformer``
             base class. Please refer to the `source code
             <https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py>`_
             for more details about this class.
 
-    .. autoclass:: torchvision.models.vision_transformer.ViT_B_32_Weights
+    .. autoclass:: torchvision.models.ViT_B_32_Weights
         :members:
     """
     weights = ViT_B_32_Weights.verify(weights)
@@ -669,16 +669,16 @@ def vit_l_16(*, weights: Optional[ViT_L_16_Weights] = None, progress: bool = Tru
     `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale <https://arxiv.org/abs/2010.11929>`_.
 
     Args:
-        weights (:class:`~torchvision.models.vision_transformer.ViT_L_16_Weights`, optional): The pretrained
-            weights to use. See :class:`~torchvision.models.vision_transformer.ViT_L_16_Weights`
+        weights (:class:`~torchvision.models.ViT_L_16_Weights`, optional): The pretrained
+            weights to use. See :class:`~torchvision.models.ViT_L_16_Weights`
             below for more details and possible values. By default, no pre-trained weights are used.
         progress (bool, optional): If True, displays a progress bar of the download to stderr. Default is True.
         **kwargs: parameters passed to the ``torchvision.models.vision_transformer.VisionTransformer``
             base class. Please refer to the `source code
             <https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py>`_
             for more details about this class.
 
-    .. autoclass:: torchvision.models.vision_transformer.ViT_L_16_Weights
+    .. autoclass:: torchvision.models.ViT_L_16_Weights
         :members:
     """
     weights = ViT_L_16_Weights.verify(weights)
@@ -702,16 +702,16 @@ def vit_l_32(*, weights: Optional[ViT_L_32_Weights] = None, progress: bool = Tru
     `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale <https://arxiv.org/abs/2010.11929>`_.
 
     Args:
-        weights (:class:`~torchvision.models.vision_transformer.ViT_L_32_Weights`, optional): The pretrained
-            weights to use. See :class:`~torchvision.models.vision_transformer.ViT_L_32_Weights`
+        weights (:class:`~torchvision.models.ViT_L_32_Weights`, optional): The pretrained
+            weights to use. See :class:`~torchvision.models.ViT_L_32_Weights`
             below for more details and possible values. By default, no pre-trained weights are used.
         progress (bool, optional): If True, displays a progress bar of the download to stderr. Default is True.
         **kwargs: parameters passed to the ``torchvision.models.vision_transformer.VisionTransformer``
             base class. Please refer to the `source code
             <https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py>`_
             for more details about this class.
 
-    .. autoclass:: torchvision.models.vision_transformer.ViT_L_32_Weights
+    .. autoclass:: torchvision.models.ViT_L_32_Weights
         :members:
     """
     weights = ViT_L_32_Weights.verify(weights)
@@ -734,16 +734,16 @@ def vit_h_14(*, weights: Optional[ViT_H_14_Weights] = None, progress: bool = Tru
     `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale <https://arxiv.org/abs/2010.11929>`_.
 
     Args:
-        weights (:class:`~torchvision.models.vision_transformer.ViT_H_14_Weights`, optional): The pretrained
-            weights to use. See :class:`~torchvision.models.vision_transformer.ViT_H_14_Weights`
+        weights (:class:`~torchvision.models.ViT_H_14_Weights`, optional): The pretrained
+            weights to use. See :class:`~torchvision.models.ViT_H_14_Weights`
             below for more details and possible values. By default, no pre-trained weights are used.
         progress (bool, optional): If True, displays a progress bar of the download to stderr. Default is True.
         **kwargs: parameters passed to the ``torchvision.models.vision_transformer.VisionTransformer``
             base class. Please refer to the `source code
             <https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py>`_
             for more details about this class.
 
-    .. autoclass:: torchvision.models.vision_transformer.ViT_H_14_Weights
+    .. autoclass:: torchvision.models.ViT_H_14_Weights
         :members:
     """
     weights = ViT_H_14_Weights.verify(weights)
