Commit 036ff5d

Merge branch 'main' into trivialaugment_implementation

2 parents 425c52d + 3a7e5e3

20 files changed: +417 −230 lines

.github/ISSUE_TEMPLATE/bug-report.md

Lines changed: 0 additions & 52 deletions
This file was deleted.

.github/ISSUE_TEMPLATE/bug-report.yml

Lines changed: 60 additions & 0 deletions
@@ -0,0 +1,60 @@
+name: 🐛 Bug Report
+description: Create a report to help us reproduce and fix the bug
+
+body:
+- type: markdown
+  attributes:
+    value: >
+      #### Before submitting a bug, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/pytorch/vision/issues?q=is%3Aissue+sort%3Acreated-desc+).
+- type: textarea
+  attributes:
+    label: 🐛 Describe the bug
+    description: |
+      Please provide a clear and concise description of what the bug is.
+
+      If relevant, add a minimal example so that we can reproduce the error by running the code. It is very important for the snippet to be as succinct (minimal) as possible, so please take time to trim down any irrelevant code to help us debug efficiently. We are going to copy-paste your code and we expect to get the same result as you did: avoid any external data, and include the relevant imports, etc. For example:
+
+      ```python
+      # All necessary imports at the beginning
+      import torch
+      import torchvision
+      from torchvision.ops import nms
+
+      # A succinct reproducing example trimmed down to the essential parts:
+      N = 5
+      boxes = torch.rand(N, 4)  # Note: the bug is here, we should enforce that x1 < x2 and y1 < y2!
+      scores = torch.rand(N)
+      nms(boxes, scores, iou_threshold=.9)
+      ```
+
+      If the code is too long (hopefully, it isn't), feel free to put it in a public gist and link it in the issue: https://gist.github.com.
+
+      Please also paste or describe the results you observe instead of the expected results. If you observe an error, please paste the error message including the **full** traceback of the exception. It may be relevant to wrap error messages in ```` ```triple quotes blocks``` ````.
+    placeholder: |
+      A clear and concise description of what the bug is.
+
+      ```python
+      Sample code to reproduce the problem
+      ```
+
+      ```
+      The error message you got, with the full traceback.
+      ```
+  validations:
+    required: true
+- type: textarea
+  attributes:
+    label: Versions
+    description: |
+      Please run the following and paste the output below.
+      ```sh
+      wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
+      # For security purposes, please check the contents of collect_env.py before running it.
+      python collect_env.py
+      ```
+  validations:
+    required: true
+- type: markdown
+  attributes:
+    value: >
+      Thanks for contributing 🎉!
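The inline comment in the template's sample snippet notes that `torch.rand(N, 4)` does not guarantee `x1 < x2` and `y1 < y2`. The coordinate fixup it alludes to can be sketched in plain Python (a hypothetical helper for illustration, not a torchvision API):

```python
def sort_box_coords(box):
    """Return (x1, y1, x2, y2) with x1 <= x2 and y1 <= y2."""
    x1, y1, x2, y2 = box
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))

# A box with swapped corners gets normalized:
print(sort_box_coords((0.9, 0.8, 0.1, 0.2)))  # (0.1, 0.2, 0.9, 0.8)
```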

.github/ISSUE_TEMPLATE/config.yml

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
+blank_issues_enabled: true
+contact_links:
+  - name: Usage questions
+    url: https://discuss.pytorch.org/
+    about: Ask questions and discuss with other torchvision community members

.github/ISSUE_TEMPLATE/documentation.md

Lines changed: 0 additions & 12 deletions
This file was deleted.
.github/ISSUE_TEMPLATE/documentation.yml

Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
+name: 📚 Documentation
+description: Report an issue related to https://pytorch.org/vision/stable/index.html
+
+body:
+- type: textarea
+  attributes:
+    label: 📚 The doc issue
+    description: >
+      A clear and concise description of what content in https://pytorch.org/vision/stable/index.html is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new.
+  validations:
+    required: true
+- type: textarea
+  attributes:
+    label: Suggest a potential alternative/fix
+    description: >
+      Tell us how we could improve the documentation in this regard.
+- type: markdown
+  attributes:
+    value: >
+      Thanks for contributing 🎉!

.github/ISSUE_TEMPLATE/feature-request.md

Lines changed: 0 additions & 27 deletions
This file was deleted.
.github/ISSUE_TEMPLATE/feature-request.yml

Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
+name: 🚀 Feature request
+description: Submit a proposal/request for a new torchvision feature
+
+body:
+- type: textarea
+  attributes:
+    label: 🚀 The feature
+    description: >
+      A clear and concise description of the feature proposal
+  validations:
+    required: true
+- type: textarea
+  attributes:
+    label: Motivation, pitch
+    description: >
+      Please outline the motivation for the proposal. Is your feature request related to a specific problem? e.g., *"I'm working on X and would like Y to be possible"*. If this is related to another GitHub issue, please link here too.
+  validations:
+    required: true
+- type: textarea
+  attributes:
+    label: Alternatives
+    description: >
+      A description of any alternative solutions or features you've considered, if any.
+- type: textarea
+  attributes:
+    label: Additional context
+    description: >
+      Add any other context or screenshots about the feature request.
+- type: markdown
+  attributes:
+    value: >
+      Thanks for contributing 🎉!

.github/ISSUE_TEMPLATE/questions-help-support.md

Lines changed: 0 additions & 16 deletions
This file was deleted.

docs/source/transforms.rst

Lines changed: 2 additions & 2 deletions
@@ -24,7 +24,7 @@ number of channels, ``H`` and ``W`` are image height and width. A batch of
 Tensor Images is a tensor of ``(B, C, H, W)`` shape, where ``B`` is a number
 of images in the batch.
 
-The expected range of the values of a tensor image is implicitely defined by
+The expected range of the values of a tensor image is implicitly defined by
 the tensor dtype. Tensor images with a float dtype are expected to have
 values in ``[0, 1)``. Tensor images with an integer dtype are expected to
 have values in ``[0, MAX_DTYPE]`` where ``MAX_DTYPE`` is the largest value
@@ -35,7 +35,7 @@ images of a given batch, but they will produce different transformations
 across calls. For reproducible transformations across calls, you may use
 :ref:`functional transforms <functional_transforms>`.
 
-The following examples illustate the use of the available transforms:
+The following examples illustrate the use of the available transforms:
 
 * :ref:`sphx_glr_auto_examples_plot_transforms.py`
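The transforms.rst paragraph above ties the expected value range of a tensor image to its dtype: float images in ``[0, 1)``, integer images in ``[0, MAX_DTYPE]``. That rule can be sketched in plain Python (an illustrative helper, not a torchvision API):

```python
def expected_range(dtype: str):
    """Value range implied by a tensor image dtype (illustration only)."""
    if dtype.startswith("float"):
        return (0.0, 1.0)                  # float images live in [0, 1)
    signed = not dtype.startswith("u")
    bits = int(dtype.split("int")[-1])     # "uint8" -> 8, "int16" -> 16
    max_value = 2 ** (bits - 1) - 1 if signed else 2 ** bits - 1
    return (0, max_value)                  # integer images live in [0, MAX_DTYPE]

print(expected_range("uint8"))    # (0, 255)
print(expected_range("float32"))  # (0.0, 1.0)
```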

mypy.ini

Lines changed: 0 additions & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -20,10 +20,6 @@ ignore_errors=True
2020

2121
ignore_errors = True
2222

23-
[mypy-torchvision.models.quantization.*]
24-
25-
ignore_errors = True
26-
2723
[mypy-torchvision.ops.*]
2824

2925
ignore_errors = True

torchvision/datasets/ucf101.py

Lines changed: 5 additions & 2 deletions
@@ -14,7 +14,9 @@ class UCF101(VisionDataset):
     UCF101 is an action recognition video dataset.
     This dataset consider every video as a collection of video clips of fixed size, specified
     by ``frames_per_clip``, where the step in frames between each clip is given by
-    ``step_between_clips``.
+    ``step_between_clips``. The dataset itself can be downloaded from the dataset website;
+    annotations that ``annotation_path`` should be pointing to can be downloaded from `here
+    <https://www.crcv.ucf.edu/data/UCF101/UCF101TrainTestSplits-RecognitionTask.zip>`.
 
     To give an example, for 2 videos with 10 and 15 frames respectively, if ``frames_per_clip=5``
     and ``step_between_clips=5``, the dataset size will be (2 + 3) = 5, where the first two
@@ -26,7 +28,8 @@ class UCF101(VisionDataset):
 
     Args:
         root (string): Root directory of the UCF101 Dataset.
-        annotation_path (str): path to the folder containing the split files
+        annotation_path (str): path to the folder containing the split files;
+            see docstring above for download instructions of these files
         frames_per_clip (int): number of frames in a clip.
         step_between_clips (int, optional): number of frames between each clip.
         fold (int, optional): which fold to use. Should be between 1 and 3.
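The docstring's clip arithmetic (2 videos with 10 and 15 frames, ``frames_per_clip=5``, ``step_between_clips=5`` giving 2 + 3 = 5 clips) can be sketched as a one-liner (an illustrative helper, not the dataset's actual implementation):

```python
def num_clips(num_frames: int, frames_per_clip: int, step_between_clips: int) -> int:
    """Number of fixed-size clips extractable from one video (sketch)."""
    if num_frames < frames_per_clip:
        return 0
    return (num_frames - frames_per_clip) // step_between_clips + 1

# Docstring example: videos of 10 and 15 frames, frames_per_clip=5, step=5
print(num_clips(10, 5, 5) + num_clips(15, 5, 5))  # 2 + 3 = 5
```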

torchvision/io/__init__.py

Lines changed: 9 additions & 8 deletions
@@ -1,4 +1,5 @@
 import torch
+from typing import Any, Dict, Iterator
 
 from ._video_opt import (
     Timebase,
@@ -33,13 +34,13 @@
 
 if _HAS_VIDEO_OPT:
 
-    def _has_video_opt():
+    def _has_video_opt() -> bool:
         return True
 
 
 else:
 
-    def _has_video_opt():
+    def _has_video_opt() -> bool:
         return False
 
 
@@ -99,7 +100,7 @@ class VideoReader:
             Currently available options include ``['video', 'audio']``
     """
 
-    def __init__(self, path, stream="video"):
+    def __init__(self, path: str, stream: str = "video") -> None:
         if not _has_video_opt():
             raise RuntimeError(
                 "Not compiled with video_reader support, "
@@ -109,7 +110,7 @@ def __init__(self, path, stream="video"):
             )
         self._c = torch.classes.torchvision.Video(path, stream)
 
-    def __next__(self):
+    def __next__(self) -> Dict[str, Any]:
         """Decodes and returns the next frame of the current stream.
         Frames are encoded as a dict with mandatory
         data and pts fields, where data is a tensor, and pts is a
@@ -126,10 +127,10 @@ def __next__(self):
             raise StopIteration
         return {"data": frame, "pts": pts}
 
-    def __iter__(self):
+    def __iter__(self) -> Iterator['VideoReader']:
         return self
 
-    def seek(self, time_s: float):
+    def seek(self, time_s: float) -> 'VideoReader':
         """Seek within current stream.
 
         Args:
@@ -144,15 +145,15 @@ def seek(self, time_s: float):
         self._c.seek(time_s)
         return self
 
-    def get_metadata(self):
+    def get_metadata(self) -> Dict[str, Any]:
         """Returns video metadata
 
         Returns:
             (dict): dictionary containing duration and frame rate for every stream
         """
         return self._c.get_metadata()
 
-    def set_current_stream(self, stream: str):
+    def set_current_stream(self, stream: str) -> bool:
         """Set current stream.
         Explicitly define the stream we are operating on.
