ultralytics 8.0.58 new SimpleClass, fixes and updates (ultralytics#1636)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Laughing <[email protected]>
3 people authored Mar 26, 2023
1 parent ef03e67 commit ec10002
Showing 30 changed files with 354 additions and 317 deletions.
5 changes: 3 additions & 2 deletions .github/workflows/greetings.yml
@@ -18,8 +18,9 @@ jobs:
pr-message: |
👋 Hello @${{ github.actor }}, thank you for submitting a YOLOv8 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:
- ✅ Verify your PR is **up-to-date** with `ultralytics/ultralytics` `main` branch. If your PR is behind you can update your code by clicking the 'Update branch' button or by running `git pull` and `git merge master` locally.
- ✅ Verify your PR is **up-to-date** with `ultralytics/ultralytics` `main` branch. If your PR is behind you can update your code by clicking the 'Update branch' button or by running `git pull` and `git merge main` locally.
- ✅ Verify all YOLOv8 Continuous Integration (CI) **checks are passing**.
- ✅ Update YOLOv8 [Docs](https://docs.ultralytics.com) for any new or updated features.
- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ — Bruce Lee
See our [Contributing Guide](https://github.com/ultralytics/ultralytics/blob/main/CONTRIBUTING.md) for details and let us know if you have any questions!
@@ -33,7 +34,7 @@ jobs:
## Install
Pip install the `ultralytics` package including all [requirements.txt](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a [**Python>=3.7**](https://www.python.org/) environment with [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
Pip install the `ultralytics` package including all [requirements](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a [**Python>=3.7**](https://www.python.org/) environment with [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
```bash
pip install ultralytics
```
4 changes: 2 additions & 2 deletions .github/workflows/links.yml
@@ -23,7 +23,7 @@ jobs:
with:
fail: true
# accept 429(Instagram, 'too many requests'), 999(LinkedIn, 'unknown status code'), Timeout(Twitter)
args: --accept 429,999 --exclude twitter.com --verbose --no-progress './**/*.md' './**/*.html'
args: --accept 429,999 --exclude-loopback --exclude twitter.com --verbose --no-progress './**/*.md' './**/*.html'
env:
GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}

@@ -33,6 +33,6 @@ jobs:
with:
fail: true
# accept 429(Instagram, 'too many requests'), 999(LinkedIn, 'unknown status code'), Timeout(Twitter)
args: --accept 429,999 --exclude twitter.com,url.com --verbose --no-progress './**/*.md' './**/*.html' './**/*.yml' './**/*.yaml' './**/*.py' './**/*.ipynb'
args: --accept 429,999 --exclude-loopback --exclude twitter.com,url.com --verbose --no-progress './**/*.md' './**/*.html' './**/*.yml' './**/*.yaml' './**/*.py' './**/*.ipynb'
env:
GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
27 changes: 5 additions & 22 deletions README.md
@@ -58,7 +58,7 @@ full documentation on training, validation, prediction and deployment.
<summary>Install</summary>

Pip install the ultralytics package including
all [requirements.txt](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a
all [requirements](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a
[**Python>=3.7**](https://www.python.org/) environment with
[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
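
A quick way to sanity-check the installation is the `ultralytics.checks()` helper shown in the Ultralytics quickstart; a minimal sketch, assuming the helper is exported at the package top level as in the quickstart notebook:

```python
import ultralytics

# Print environment details (ultralytics version, Python, torch, CUDA availability)
# to confirm the package and its requirements installed correctly.
ultralytics.checks()
```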

@@ -105,28 +105,11 @@ success = model.export(format="onnx") # export the model to ONNX format
Ultralytics [release](https://github.com/ultralytics/assets/releases). See
YOLOv8 [Python Docs](https://docs.ultralytics.com/usage/python) for more examples.
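
The hunk header above truncates the README's Python quickstart; a minimal sketch of that flow (the weight file and sample image URL are the ones used elsewhere in the Ultralytics docs and are assumptions here):

```python
from ultralytics import YOLO

# Load a pretrained detection model; yolov8n.pt downloads automatically on first use.
model = YOLO("yolov8n.pt")

# Run inference on a sample image, then export the model,
# mirroring the `model.export(format="onnx")` line in the hunk context above.
results = model("https://ultralytics.com/images/bus.jpg")
success = model.export(format="onnx")  # export the model to ONNX format
```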

#### Model Architectures

**NEW** YOLOv5u anchor free models are now available.

All supported model architectures can be found in the [Models](./ultralytics/models/) section.

#### Known Issues / TODOs

We are still working on several parts of YOLOv8! We aim to have these completed soon to bring the YOLOv8 feature set up
to par with YOLOv5, including export and inference to all the same formats. We are also writing a YOLOv8 paper which we
will submit to [arxiv.org](https://arxiv.org) once complete.

- [x] TensorFlow exports
- [x] DDP resume
- [ ] [arxiv.org](https://arxiv.org) paper

</details>

## <div align="center">Models</div>

All YOLOv8 pretrained models are available here. Detection and Segmentation models are pretrained on the COCO dataset,
while Classification models are pretrained on the ImageNet dataset.
All YOLOv8 pretrained models are available here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml) dataset.

[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
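
The reproduce notes in the tables below use the `yolo val ...` CLI; a rough Python counterpart is sketched here, assuming the `model.val()` API and metric attribute names from the Ultralytics Python docs:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Validate on the small COCO128 dataset (downloaded automatically);
# the full-COCO numbers in the table come from data=coco.yaml instead.
metrics = model.val(data="coco128.yaml")
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
```
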
@@ -147,7 +130,7 @@ See [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examples
<br>Reproduce by `yolo val detect data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
instance.
<br>Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0/cpu`
<br>Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0|cpu`

</details>

@@ -167,7 +150,7 @@ See [Segmentation Docs](https://docs.ultralytics.com/tasks/segment/) for usage examples
<br>Reproduce by `yolo val segment data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
instance.
<br>Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0/cpu`
<br>Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0|cpu`

</details>

@@ -187,7 +170,7 @@ See [Classification Docs](https://docs.ultralytics.com/tasks/classify/) for usage examples
<br>Reproduce by `yolo val classify data=path/to/ImageNet device=0`
- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
instance.
<br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0/cpu`
<br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`

</details>

17 changes: 4 additions & 13 deletions README.zh-CN.md
@@ -52,7 +52,7 @@ SOTA model. Building on the success of previous YOLO versions, it introduces new features
<details open>
<summary>Install</summary>

Pip install the ultralytics package including all [requirements.txt](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt)
Pip install the ultralytics package including all [requirements](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt)
in a [**Python>=3.7**](https://www.python.org/) environment with
[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).

@@ -100,15 +100,6 @@ success = model.export(format="onnx") # export the model to ONNX format
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the
Ultralytics [release page](https://github.com/ultralytics/ultralytics/releases).

### Known Issues / TODOs

We are still working on several parts of YOLOv8! We aim to complete these soon and bring the YOLOv8 feature set up to par
with YOLOv5, including export and inference in all the same formats. We are also writing a YOLOv8 paper, which we will submit to [arxiv.org](https://arxiv.org) once complete.

- [x] TensorFlow export
- [x] DDP resume training
- [ ] [arxiv.org](https://arxiv.org) paper

</details>

## <div align="center">Models</div>
@@ -132,7 +123,7 @@ Ultralytics [release page](https://github.com/ultralytics/ultralytics/releases) automatically
<br>Reproduce by `yolo val detect data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
<br>Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0/cpu`
<br>Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0|cpu`

</details>

@@ -150,7 +141,7 @@ Ultralytics [release page](https://github.com/ultralytics/ultralytics/releases) automatically
<br>Reproduce by `yolo val segment data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
<br>Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0/cpu`
<br>Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0|cpu`

</details>

@@ -168,7 +159,7 @@ Ultralytics [release page](https://github.com/ultralytics/ultralytics/releases) automatically
<br>Reproduce by `yolo val classify data=path/to/ImageNet device=0`
- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
<br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0/cpu`
<br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`

</details>

28 changes: 25 additions & 3 deletions docs/tasks/classify.md
@@ -9,9 +9,31 @@ of that class are located or what their exact shape is.

!!! tip "Tip"

YOLOv8 _classification_ models use the `-cls` suffix, i.e. `yolov8n-cls.pt` and are pretrained on ImageNet.

[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8){ .md-button .md-button--primary}
YOLOv8 Classify models use the `-cls` suffix, i.e. `yolov8n-cls.pt` and are pretrained on [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml).

## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8)

YOLOv8 pretrained Classify models are shown here. Detect, Segment and Pose models are pretrained on
the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml) dataset, while Classify
models are pretrained on
the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml) dataset.

[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.

| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
|----------------------------------------------------------------------------------------------|-----------------------|------------------|------------------|--------------------------------|-------------------------------------|--------------------|--------------------------|
| [YOLOv8n-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-cls.pt) | 224 | 66.6 | 87.0 | 12.9 | 0.31 | 2.7 | 4.3 |
| [YOLOv8s-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-cls.pt) | 224 | 72.3 | 91.1 | 23.4 | 0.35 | 6.4 | 13.5 |
| [YOLOv8m-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-cls.pt) | 224 | 76.4 | 93.2 | 85.4 | 0.62 | 17.0 | 42.7 |
| [YOLOv8l-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-cls.pt) | 224 | 78.0 | 94.1 | 163.0 | 0.87 | 37.5 | 99.7 |
| [YOLOv8x-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-cls.pt) | 224 | 78.4 | 94.3 | 232.0 | 1.01 | 57.4 | 154.8 |

- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set.
<br>Reproduce by `yolo val classify data=path/to/ImageNet device=0`
- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
instance.
<br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
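
To complement the table above, a minimal classification-inference sketch; the `-cls` weight name follows the suffix convention described on this page, and the sample image URL is an assumption:

```python
from ultralytics import YOLO

# Load a pretrained ImageNet classification model (downloads on first use).
model = YOLO("yolov8n-cls.pt")

# Classify a sample image; results[0].probs holds the per-class scores
# (its exact type differs between ultralytics versions, so it is just printed here).
results = model("https://ultralytics.com/images/bus.jpg")
print(results[0].probs)
print(len(model.names))  # number of ImageNet classes the model predicts
```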

## Train

28 changes: 25 additions & 3 deletions docs/tasks/detect.md
@@ -9,9 +9,31 @@ scene, but don't need to know exactly where the object is or its exact shape.

!!! tip "Tip"

YOLOv8 _detection_ models have no suffix and are the default YOLOv8 models, i.e. `yolov8n.pt` and are pretrained on COCO.

[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8){ .md-button .md-button--primary}
YOLOv8 Detect models are the default YOLOv8 models, i.e. `yolov8n.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml).

## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8)

YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models are pretrained on
the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml) dataset, while Classify
models are pretrained on
the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml) dataset.

[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.

| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
|--------------------------------------------------------------------------------------|-----------------------|----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt) | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m.pt) | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9 |
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l.pt) | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2 |
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x.pt) | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |

- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
<br>Reproduce by `yolo val detect data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
instance.
<br>Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0|cpu`
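
To complement the table above, a minimal detection-inference sketch; the box attribute names are assumed from the Results/Boxes API in the Ultralytics Python docs:

```python
from ultralytics import YOLO

# Load the default pretrained COCO detection model.
model = YOLO("yolov8n.pt")

# Detect objects in a sample image and read out boxes, classes and confidences.
results = model("https://ultralytics.com/images/bus.jpg")
boxes = results[0].boxes
print(boxes.xyxy)  # bounding boxes in xyxy pixel coordinates
print(boxes.cls)   # class indices (map to labels via model.names)
print(boxes.conf)  # confidence scores
```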

## Train

28 changes: 25 additions & 3 deletions docs/tasks/segment.md
@@ -9,9 +9,31 @@ segmentation is useful when you need to know not only where objects are in an image

!!! tip "Tip"

YOLOv8 _segmentation_ models use the `-seg` suffix, i.e. `yolov8n-seg.pt` and are pretrained on COCO.

[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8){ .md-button .md-button--primary}
YOLOv8 Segment models use the `-seg` suffix, i.e. `yolov8n-seg.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml).

## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8)

YOLOv8 pretrained Segment models are shown here. Detect, Segment and Pose models are pretrained on
the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml) dataset, while Classify
models are pretrained on
the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml) dataset.

[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.

| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
|----------------------------------------------------------------------------------------------|-----------------------|----------------------|-----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |

- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
<br>Reproduce by `yolo val segment data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
instance.
<br>Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0|cpu`
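
To complement the table above, a minimal segmentation-inference sketch; the mask attribute layout is assumed from the Results/Masks API in the Ultralytics Python docs:

```python
from ultralytics import YOLO

# Load a pretrained COCO segmentation model (-seg suffix).
model = YOLO("yolov8n-seg.pt")

# Segment a sample image; results[0].masks carries the predicted instance masks.
results = model("https://ultralytics.com/images/bus.jpg")
masks = results[0].masks
print(masks.data.shape)  # (num_instances, mask_height, mask_width) binary mask tensor
```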

## Train

