Commit d030384

YOLOv5 Update (#183)

* YOLOv5 Update: updated demo, metrics and bug fix for ultralytics/yolov5#2050
* cleanup
* update image
* cleanup data comment
* Update ultralytics_yolov5.md

1 parent c63f12f commit d030384

File tree

2 files changed: +28 −32 lines changed

images/ultralytics_yolov5_img2.png (−27.3 KB, image replaced)

ultralytics_yolov5.md

Lines changed: 28 additions & 32 deletions
@@ -17,66 +17,62 @@ accelerator: cuda-optional

## Before You Start

-Start from a working python environment with **Python>=3.8** and **PyTorch>=1.6** installed, as well as `PyYAML>=5.3` for reading YOLOv5 configuration files. To install PyTorch see [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/). To install dependencies:
+Start from a **Python>=3.8** environment with **PyTorch>=1.7** installed. To install PyTorch see [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/). To install YOLOv5 dependencies:

```bash
-pip install -U PyYAML # install dependencies
+pip install -qr https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt # install dependencies
```
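The version minimums above can be checked programmatically before installing anything. A minimal sketch, assuming a dotted version string such as the one `torch.__version__` returns; the `meets_min` helper is illustrative, not part of YOLOv5:

```python
import sys

def meets_min(version: str, minimum: str) -> bool:
    """True if a dotted version string meets a minimum, e.g. '1.7.1' satisfies '1.7'.

    Local-build suffixes like '1.7.0+cu101' are stripped before comparing.
    """
    parse = lambda v: tuple(int(x) for x in v.split('+')[0].split('.'))
    return parse(version) >= parse(minimum)

# Check the interpreter itself against Python>=3.8
python_ok = sys.version_info[:2] >= (3, 8)

# Check a PyTorch version string against PyTorch>=1.7
print(meets_min('1.7.1', '1.7'))  # True
print(meets_min('1.6.0', '1.7'))  # False
```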

## Model Description

-<img width="800" alt="YOLOv5 Models" src="https://user-images.githubusercontent.com/26833433/97808084-edfcb100-1c64-11eb-83eb-ffed43a0859f.png">
+<img width="800" alt="YOLOv5 Models" src="https://user-images.githubusercontent.com/26833433/103595982-ab986000-4eb1-11eb-8c57-4726261b0a88.png">

&nbsp;

-YOLOv5 is a family of compound-scaled object detection models trained on COCO 2017, and includes built-in functionality for Test Time Augmentation (TTA), Model Ensembling, Rectangular Inference, Hyperparameter Evolution.
+YOLOv5 is a family of compound-scaled object detection models trained on the COCO dataset, and includes simple functionality for Test Time Augmentation (TTA), model ensembling, hyperparameter evolution, and export to ONNX, CoreML and TFLite.

-| Model | AP<sup>val</sup> | AP<sup>test</sup> | AP<sub>50</sub> | Speed<sub>GPU</sub> | FPS<sub>GPU</sub> || params | FLOPS |
-|---------- |------ |------ |------ | -------- | ------| ------ |------ | :------: |
-| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | 37.0 | 37.0 | 56.2 | **2.4ms** | **416** || 7.5M | 13.2B
-| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | 44.3 | 44.3 | 63.2 | 3.4ms | 294 || 21.8M | 39.4B
-| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | 47.7 | 47.7 | 66.5 | 4.4ms | 227 || 47.8M | 88.1B
-| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | 49.2 | 49.2 | 67.7 | 6.9ms | 145 || 89.0M | 166.4B
-| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/tag/v3.0) + TTA |**50.8**| **50.8** | **68.9** | 25.5ms | 39 || 89.0M | 354.3B
+| Model | size | AP<sup>val</sup> | AP<sup>test</sup> | AP<sub>50</sub> | Speed<sub>V100</sub> | FPS<sub>V100</sub> || params | GFLOPS |
+|---------- |------ |------ |------ |------ | -------- | ------| ------ |------ | :------: |
+| [YOLOv5s](https://github.com/ultralytics/yolov5/releases) |640 |36.8 |36.8 |55.6 |**2.2ms** |**455** ||7.3M |17.0
+| [YOLOv5m](https://github.com/ultralytics/yolov5/releases) |640 |44.5 |44.5 |63.1 |2.9ms |345 ||21.4M |51.3
+| [YOLOv5l](https://github.com/ultralytics/yolov5/releases) |640 |48.1 |48.1 |66.4 |3.8ms |264 ||47.0M |115.4
+| [YOLOv5x](https://github.com/ultralytics/yolov5/releases) |640 |**50.1** |**50.1** |**68.7** |6.0ms |167 ||87.7M |218.8
+| [YOLOv5x](https://github.com/ultralytics/yolov5/releases) + TTA |832 |**51.9** |**51.9** |**69.6** |24.9ms |40 ||87.7M |1005.3

-<img src="https://user-images.githubusercontent.com/26833433/90187293-6773ba00-dd6e-11ea-8f90-cd94afc0427f.png" width="800">
+<img width="800" alt="YOLOv5 Performance" src="https://user-images.githubusercontent.com/26833433/103594689-455e0e00-4eae-11eb-9cdf-7d753e2ceeeb.png">
** GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS. EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
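The FPS column in the table above is simply the reciprocal of the per-image latency. A minimal sketch of that conversion; `fps_from_latency` is an illustrative helper, and because the published latencies are themselves rounded, not every table row reproduces exactly:

```python
def fps_from_latency(ms_per_image: float) -> int:
    """Convert a per-image latency in milliseconds to frames per second."""
    return round(1000.0 / ms_per_image)

# YOLOv5s at 2.2 ms/image on a V100
print(fps_from_latency(2.2))   # 455
# YOLOv5x + TTA at 24.9 ms/image
print(fps_from_latency(24.9))  # 40
```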
## Load From PyTorch Hub
-To load YOLOv5 from PyTorch Hub for inference with PIL, OpenCV, Numpy or PyTorch inputs:
+This simple example loads a pretrained **YOLOv5s** model from PyTorch Hub as `model` and passes two **image URLs** for batched inference.

```python
-import cv2
import torch
-from PIL import Image

# Model
-model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True).fuse().autoshape() # for PIL/cv2/np inputs and NMS
+model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Images
-for f in ['zidane.jpg', 'bus.jpg']: # download 2 images
-    print(f'Downloading {f}...')
-    torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/' + f, f)
-img1 = Image.open('zidane.jpg') # PIL image
-img2 = cv2.imread('bus.jpg')[:, :, ::-1] # OpenCV image (BGR to RGB)
-imgs = [img1, img2] # batched list of images
+dir = 'https://github.com/ultralytics/yolov5/raw/master/data/images/'
+imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')] # batched list of images

# Inference
-results = model(imgs, size=640) # includes NMS
+results = model(imgs)

# Results
-results.print() # print results to screen
-results.show() # display results
-results.save() # save as results1.jpg, results2.jpg... etc.
+results.print()
+results.save() # or .show()

# Data
-print('\n', results.xyxy[0]) # print img1 predictions
-#          x1 (pixels)  y1 (pixels)  x2 (pixels)  y2 (pixels)   confidence        class
-# tensor([[7.47613e+02, 4.01168e+01, 1.14978e+03, 7.12016e+02, 8.71210e-01, 0.00000e+00],
-#         [1.17464e+02, 1.96875e+02, 1.00145e+03, 7.11802e+02, 8.08795e-01, 0.00000e+00],
-#         [4.23969e+02, 4.30401e+02, 5.16833e+02, 7.20000e+02, 7.77376e-01, 2.70000e+01],
-#         [9.81310e+02, 3.10712e+02, 1.03111e+03, 4.19273e+02, 2.86850e-01, 2.70000e+01]])
+print(results.xyxy[0]) # print img1 predictions (pixels)
+#                   x1           y1           x2           y2   confidence        class
+# tensor([[7.50637e+02, 4.37279e+01, 1.15887e+03, 7.08682e+02, 8.18137e-01, 0.00000e+00],
+#         [9.33597e+01, 2.07387e+02, 1.04737e+03, 7.10224e+02, 5.78011e-01, 0.00000e+00],
+#         [4.24503e+02, 4.29092e+02, 5.16300e+02, 7.16425e+02, 5.68713e-01, 2.70000e+01]])
```

+For YOLOv5 PyTorch Hub inference with **PIL**, **OpenCV**, **Numpy** or **PyTorch** inputs please see the full [YOLOv5 PyTorch Hub Tutorial](https://github.com/ultralytics/yolov5/issues/36).
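Each row of `results.xyxy[0]` above is `[x1, y1, x2, y2, confidence, class]` in pixel coordinates. A minimal sketch of post-processing such rows as plain Python lists, using the three predictions printed in the diff; the `filter_detections` helper and the 0.6 threshold are illustrative, not part of the YOLOv5 API:

```python
def filter_detections(rows, conf_thres=0.6):
    """Keep [x1, y1, x2, y2, confidence, class] rows at or above a confidence threshold."""
    return [r for r in rows if r[4] >= conf_thres]

# The img1 predictions shown above (COCO class 0 = person, 27 = tie)
rows = [
    [750.637, 43.7279, 1158.87, 708.682, 0.818137, 0.0],
    [93.3597, 207.387, 1047.37, 710.224, 0.578011, 0.0],
    [424.503, 429.092, 516.300, 716.425, 0.568713, 27.0],
]
kept = filter_detections(rows, conf_thres=0.6)
print(len(kept))  # 1: only the highest-confidence person survives the 0.6 cutoff
```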

## Citation