diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
index 28d4b8f5143..2e31ad2930c 100644
--- a/.github/ISSUE_TEMPLATE/config.yml
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -7,5 +7,5 @@ contact_links:
url: https://community.ultralytics.com/
about: Ask on Ultralytics Community Forum
- name: 🎧 Discord
- url: https://discord.gg/n6cFeSPZdD
+ url: https://discord.gg/7aegy5d8
about: Ask on Ultralytics Discord
diff --git a/.github/workflows/ci.yaml b/.github/workflows/ci.yaml
index 32ceff2c207..85cfb578dde 100644
--- a/.github/workflows/ci.yaml
+++ b/.github/workflows/ci.yaml
@@ -141,7 +141,7 @@ jobs:
fail-fast: false
matrix:
os: [ubuntu-latest]
- python-version: ['3.7', '3.8', '3.9', '3.10']
+ python-version: ['3.8', '3.9', '3.10']
model: [yolov8n]
torch: [latest]
include:
diff --git a/.github/workflows/publish.yml b/.github/workflows/publish.yml
index c9be02c5bc9..ae759ecd66e 100644
--- a/.github/workflows/publish.yml
+++ b/.github/workflows/publish.yml
@@ -63,7 +63,7 @@ jobs:
python -m twine upload dist/* -u __token__ -p $PYPI_TOKEN
- name: Deploy Docs
continue-on-error: true
- if: (github.event_name == 'push' && steps.check_pypi.outputs.increment == 'True') || github.event.inputs.docs == 'true'
+ if: ((github.event_name == 'push' && (contains(github.event.head_commit.message, 'docs/') || contains(github.event.head_commit.message, 'mkdocs.yaml'))) || github.event.inputs.docs == 'true') && github.repository == 'ultralytics/ultralytics' && github.actor == 'glenn-jocher'
env:
PERSONAL_ACCESS_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
run: |
diff --git a/README.md b/README.md
index 5c3c0eabe2d..c6ec78f5da2 100644
--- a/README.md
+++ b/README.md
@@ -20,7 +20,7 @@
[Ultralytics](https://ultralytics.com) [YOLOv8](https://github.com/ultralytics/ultralytics) is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification and pose estimation tasks.
-We hope that the resources here will help you get the most out of YOLOv8. Please browse the YOLOv8 Docs for details, raise an issue on GitHub for support, and join our Discord community for questions and discussions!
+We hope that the resources here will help you get the most out of YOLOv8. Please browse the YOLOv8 Docs for details, raise an issue on GitHub for support, and join our Discord community for questions and discussions!
To request an Enterprise License please complete the form at [Ultralytics Licensing](https://ultralytics.com/license).
@@ -45,7 +45,7 @@ To request an Enterprise License please complete the form at [Ultralytics Licens
-
+
@@ -237,7 +237,7 @@ YOLOv8 is available under two different licenses:
##
@@ -47,4 +48,4 @@ Ultralytics YOLO repositories like YOLOv3, YOLOv5, or YOLOv8 are available under
- **AGPL-3.0 License**: See [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for details.
- **Enterprise License**: Provides greater flexibility for commercial product development without the open-source requirements of AGPL-3.0. Typical use cases are embedding Ultralytics software and AI models in commercial products and applications. Request an Enterprise License at [Ultralytics Licensing](https://ultralytics.com/license).
-Please note our licensing approach ensures that any enhancements made to our open-source projects are shared back to the community. We firmly believe in the principles of open source, and we are committed to ensuring that our work can be used and improved upon in a manner that benefits everyone.
+Please note our licensing approach ensures that any enhancements made to our open-source projects are shared back to the community. We firmly believe in the principles of open source, and we are committed to ensuring that our work can be used and improved upon in a manner that benefits everyone.
\ No newline at end of file
diff --git a/docs/models/index.md b/docs/models/index.md
index 051bfb514d7..cce8af13f1d 100644
--- a/docs/models/index.md
+++ b/docs/models/index.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn about the supported models and architectures, such as YOLOv3, YOLOv5, and YOLOv8, and how to contribute your own model to Ultralytics.
+keywords: Ultralytics YOLO, YOLOv3, YOLOv4, YOLOv5, YOLOv6, YOLOv7, YOLOv8, SAM, YOLO-NAS, RT-DETR, object detection, instance segmentation, detection transformers, real-time detection, computer vision, CLI, Python
---
# Models
@@ -9,13 +10,15 @@ Ultralytics supports many models and architectures with more to come in the futu
In this documentation, we provide information on four major models:
-1. [YOLOv3](./yolov3.md): The third iteration of the YOLO model family, known for its efficient real-time object detection capabilities.
-2. [YOLOv5](./yolov5.md): An improved version of the YOLO architecture, offering better performance and speed tradeoffs compared to previous versions.
-3. [YOLOv6](./yolov6.md): Released by [Meituan](https://about.meituan.com/) in 2022 and is in use in many of the company's autonomous delivery robots.
-4. [YOLOv8](./yolov8.md): The latest version of the YOLO family, featuring enhanced capabilities such as instance segmentation, pose/keypoints estimation, and classification.
-5. [Segment Anything Model (SAM)](./sam.md): Meta's Segment Anything Model (SAM).
-6. [YOLO-NAS](./yolo-nas.md): YOLO Neural Architecture Search (NAS) Models.
-7. [Realtime Detection Transformers (RT-DETR)](./rtdetr.md): Baidu's PaddlePaddle Realtime Detection Transformer (RT-DETR) models.
+1. [YOLOv3](./yolov3.md): The third iteration of the YOLO model family originally by Joseph Redmon, known for its efficient real-time object detection capabilities.
+2. [YOLOv4](./yolov4.md): A darknet-native update to YOLOv3 released by Alexey Bochkovskiy in 2020.
+3. [YOLOv5](./yolov5.md): An improved version of the YOLO architecture by Ultralytics, offering better performance and speed tradeoffs compared to previous versions.
+4. [YOLOv6](./yolov6.md): Released by [Meituan](https://about.meituan.com/) in 2022, it is used in many of the company's autonomous delivery robots.
+5. [YOLOv7](./yolov7.md): Updated YOLO models released in 2022 by the authors of YOLOv4.
+6. [YOLOv8](./yolov8.md): The latest version of the YOLO family, featuring enhanced capabilities such as instance segmentation, pose/keypoints estimation, and classification.
+7. [Segment Anything Model (SAM)](./sam.md): Meta's promptable image segmentation model with zero-shot performance.
+8. [YOLO-NAS](./yolo-nas.md): YOLO Neural Architecture Search (NAS) Models.
+9. [Realtime Detection Transformers (RT-DETR)](./rtdetr.md): Baidu's PaddlePaddle Realtime Detection Transformer (RT-DETR) models.
You can use these models directly in the Command Line Interface (CLI) or in a Python environment. Below are examples of how to use the models with CLI and Python:
@@ -36,4 +39,4 @@ model.info() # display model information
model.train(data="coco128.yaml", epochs=100) # train the model
```
-For more details on each model, their supported tasks, modes, and performance, please visit their respective documentation pages linked above.
+For more details on each model, their supported tasks, modes, and performance, please visit their respective documentation pages linked above.
\ No newline at end of file
diff --git a/docs/models/rtdetr.md b/docs/models/rtdetr.md
index a38acbb3ef3..61d156abc59 100644
--- a/docs/models/rtdetr.md
+++ b/docs/models/rtdetr.md
@@ -1,6 +1,7 @@
---
comments: true
description: Dive into Baidu's RT-DETR, a revolutionary real-time object detection model built on the foundation of Vision Transformers (ViT). Learn how to use pre-trained PaddlePaddle RT-DETR models with the Ultralytics Python API for various tasks.
+keywords: RT-DETR, Transformer, ViT, Vision Transformers, Baidu RT-DETR, PaddlePaddle, Paddle Paddle RT-DETR, real-time object detection, Vision Transformers-based object detection, pre-trained PaddlePaddle RT-DETR models, Baidu's RT-DETR usage, Ultralytics Python API, object detector
---
# Baidu's RT-DETR: A Vision Transformer-Based Real-Time Object Detector
diff --git a/docs/models/sam.md b/docs/models/sam.md
index 12dd1159f8d..8dd1e35c24b 100644
--- a/docs/models/sam.md
+++ b/docs/models/sam.md
@@ -1,6 +1,7 @@
---
comments: true
description: Discover the Segment Anything Model (SAM), a revolutionary promptable image segmentation model, and delve into the details of its advanced architecture and the large-scale SA-1B dataset.
+keywords: Segment Anything, Segment Anything Model, SAM, Meta SAM, image segmentation, promptable segmentation, zero-shot performance, SA-1B dataset, advanced architecture, auto-annotation, Ultralytics, pre-trained models, SAM base, SAM large, instance segmentation, computer vision, AI, artificial intelligence, machine learning, data annotation, segmentation masks, detection model, YOLO detection model, bibtex, Meta AI
---
# Segment Anything Model (SAM)
@@ -95,4 +96,4 @@ If you find SAM useful in your research or development work, please consider cit
We would like to express our gratitude to Meta AI for creating and maintaining this valuable resource for the computer vision community.
-*keywords: Segment Anything, Segment Anything Model, SAM, Meta SAM, image segmentation, promptable segmentation, zero-shot performance, SA-1B dataset, advanced architecture, auto-annotation, Ultralytics, pre-trained models, SAM base, SAM large, instance segmentation, computer vision, AI, artificial intelligence, machine learning, data annotation, segmentation masks, detection model, YOLO detection model, bibtex, Meta AI.*
+*keywords: Segment Anything, Segment Anything Model, SAM, Meta SAM, image segmentation, promptable segmentation, zero-shot performance, SA-1B dataset, advanced architecture, auto-annotation, Ultralytics, pre-trained models, SAM base, SAM large, instance segmentation, computer vision, AI, artificial intelligence, machine learning, data annotation, segmentation masks, detection model, YOLO detection model, bibtex, Meta AI.*
\ No newline at end of file
diff --git a/docs/models/yolo-nas.md b/docs/models/yolo-nas.md
index 9da81756a14..4ce38e85a04 100644
--- a/docs/models/yolo-nas.md
+++ b/docs/models/yolo-nas.md
@@ -1,6 +1,7 @@
---
comments: true
description: Dive into YOLO-NAS, Deci's next-generation object detection model, offering breakthroughs in speed and accuracy. Learn how to utilize pre-trained models using the Ultralytics Python API for various tasks.
+keywords: YOLO-NAS, Deci AI, Ultralytics, object detection, deep learning, neural architecture search, Python API, pre-trained models, quantization
---
# YOLO-NAS
diff --git a/docs/models/yolov3.md b/docs/models/yolov3.md
index 0ca49ee7df4..da1415e0606 100644
--- a/docs/models/yolov3.md
+++ b/docs/models/yolov3.md
@@ -1,6 +1,7 @@
---
comments: true
description: YOLOv3, YOLOv3-Ultralytics and YOLOv3u by Ultralytics explained. Learn the evolution of these models and their specifications.
+keywords: YOLOv3, Ultralytics YOLOv3, YOLO v3, YOLOv3 models, object detection, models, machine learning, AI, image recognition, object recognition
---
# YOLOv3, YOLOv3-Ultralytics, and YOLOv3u
diff --git a/docs/models/yolov4.md b/docs/models/yolov4.md
new file mode 100644
index 00000000000..36a09ccba90
--- /dev/null
+++ b/docs/models/yolov4.md
@@ -0,0 +1,67 @@
+---
+comments: true
+description: Explore YOLOv4, a state-of-the-art, real-time object detector. Learn about its architecture, features, and performance.
+keywords: YOLOv4, object detection, real-time, CNN, GPU, Ultralytics, documentation, YOLOv4 architecture, YOLOv4 features, YOLOv4 performance
+---
+
+# YOLOv4: High-Speed and Precise Object Detection
+
+Welcome to the Ultralytics documentation page for YOLOv4, a state-of-the-art, real-time object detector launched in 2020 by Alexey Bochkovskiy at [https://github.com/AlexeyAB/darknet](https://github.com/AlexeyAB/darknet). YOLOv4 is designed to provide the optimal balance between speed and accuracy, making it an excellent choice for many applications.
+
+![YOLOv4 architecture diagram](https://user-images.githubusercontent.com/26833433/246185689-530b7fe8-737b-4bb0-b5dd-de10ef5aface.png)
+**YOLOv4 architecture diagram**. Showcasing the intricate network design of YOLOv4, including the backbone, neck, and head components, and their interconnected layers for optimal real-time object detection.
+
+## Introduction
+
+YOLOv4 stands for You Only Look Once version 4. It is a real-time object detection model developed to address the limitations of previous YOLO versions like [YOLOv3](./yolov3.md) and other object detection models. Unlike other convolutional neural network (CNN) based object detectors, YOLOv4 is applicable not only to recommendation systems but also to standalone process management and human input reduction. Its operation on conventional graphics processing units (GPUs) allows for mass usage at an affordable price, and it is designed to work in real-time on a conventional GPU while requiring only one such GPU for training.
+
+## Architecture
+
+YOLOv4 makes use of several innovative features that work together to optimize its performance. These include Weighted-Residual-Connections (WRC), Cross-Stage-Partial-connections (CSP), Cross mini-Batch Normalization (CmBN), Self-adversarial-training (SAT), Mish-activation, Mosaic data augmentation, DropBlock regularization, and CIoU loss. These features are combined to achieve state-of-the-art results.
+
+A typical object detector is composed of several parts, including the input, the backbone, the neck, and the head. The backbone of YOLOv4 is pre-trained on ImageNet and extracts the feature maps from which classes and bounding boxes of objects are later predicted. The backbone could be from several models including VGG, ResNet, ResNeXt, or DenseNet. The neck part of the detector is used to collect feature maps from different stages and usually includes several bottom-up paths and several top-down paths. The head part is used to make the final object detections and classifications.
+
+## Bag of Freebies
+
+YOLOv4 also makes use of methods known as "bag of freebies," which are techniques that improve the accuracy of the model during training without increasing the cost of inference. Data augmentation is a common bag of freebies technique used in object detection, which increases the variability of the input images to improve the robustness of the model. Some examples of data augmentation include photometric distortions (adjusting the brightness, contrast, hue, saturation, and noise of an image) and geometric distortions (adding random scaling, cropping, flipping, and rotating). These techniques help the model to generalize better to different types of images.
+
+## Features and Performance
+
+YOLOv4 is designed for optimal speed and accuracy in object detection. The architecture of YOLOv4 includes CSPDarknet53 as the backbone, PANet as the neck, and YOLOv3 as the detection head. This design allows YOLOv4 to perform object detection at an impressive speed, making it suitable for real-time applications. YOLOv4 also excels in accuracy, achieving state-of-the-art results in object detection benchmarks.
+
+## Usage Examples
+
+As of the time of writing, Ultralytics does not currently support YOLOv4 models. Therefore, any users interested in using YOLOv4 will need to refer directly to the YOLOv4 GitHub repository for installation and usage instructions.
+
+Here is a brief overview of the typical steps you might take to use YOLOv4:
+
+1. Visit the YOLOv4 GitHub repository: [https://github.com/AlexeyAB/darknet](https://github.com/AlexeyAB/darknet).
+
+2. Follow the instructions provided in the README file for installation. This typically involves cloning the repository, installing necessary dependencies, and setting up any necessary environment variables.
+
+3. Once installation is complete, you can train and use the model as per the usage instructions provided in the repository. This usually involves preparing your dataset, configuring the model parameters, training the model, and then using the trained model to perform object detection.
+
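+For orientation only, here is a hedged command-line sketch of what those steps can look like. The repository URL is the one referenced above, but the build options, configuration files, and weights file name are assumptions drawn from the darknet README rather than Ultralytics-supported commands, so verify them against the repository before use.
+
+```bash
+# Clone and build darknet (enable GPU/cuDNN support in the Makefile if your system provides them)
+git clone https://github.com/AlexeyAB/darknet
+cd darknet
+make
+
+# Download yolov4.weights from the link in the darknet README, then run detection on a sample image
+./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights data/dog.jpg
+```
+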
+Please note that the specific steps may vary depending on your use case and the current state of the YOLOv4 repository. Therefore, it is strongly recommended to refer directly to the instructions provided in the YOLOv4 GitHub repository.
+
+We regret any inconvenience this may cause and will strive to update this document with usage examples for Ultralytics once support for YOLOv4 is implemented.
+
+## Conclusion
+
+YOLOv4 is a powerful and efficient object detection model that strikes a balance between speed and accuracy. Its use of unique features and bag of freebies techniques during training allows it to perform excellently in real-time object detection tasks. YOLOv4 can be trained and used by anyone with a conventional GPU, making it accessible and practical for a wide range of applications.
+
+## Citations and Acknowledgements
+
+We would like to acknowledge the YOLOv4 authors for their significant contributions in the field of real-time object detection:
+
+```bibtex
+@misc{bochkovskiy2020yolov4,
+ title={YOLOv4: Optimal Speed and Accuracy of Object Detection},
+ author={Alexey Bochkovskiy and Chien-Yao Wang and Hong-Yuan Mark Liao},
+ year={2020},
+ eprint={2004.10934},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+}
+```
+
+The original YOLOv4 paper can be found on [arXiv](https://arxiv.org/pdf/2004.10934.pdf). The authors have made their work publicly available, and the codebase can be accessed on [GitHub](https://github.com/AlexeyAB/darknet). We appreciate their efforts in advancing the field and making their work accessible to the broader community.
\ No newline at end of file
diff --git a/docs/models/yolov5.md b/docs/models/yolov5.md
index e163f4272e4..959c06af140 100644
--- a/docs/models/yolov5.md
+++ b/docs/models/yolov5.md
@@ -1,6 +1,7 @@
---
comments: true
description: YOLOv5 by Ultralytics explained. Discover the evolution of this model and its key specifications. Experience faster and more accurate object detection.
+keywords: YOLOv5, Ultralytics YOLOv5, YOLO v5, YOLOv5 models, YOLO, object detection, model, neural network, accuracy, speed, pre-trained weights, inference, validation, training
---
# YOLOv5
diff --git a/docs/models/yolov6.md b/docs/models/yolov6.md
index b8239c86077..a8a2449f25c 100644
--- a/docs/models/yolov6.md
+++ b/docs/models/yolov6.md
@@ -1,6 +1,7 @@
---
comments: true
description: Discover Meituan YOLOv6, a robust real-time object detector. Learn how to utilize pre-trained models with Ultralytics Python API for a variety of tasks.
+keywords: Meituan, YOLOv6, object detection, Bi-directional Concatenation (BiC), anchor-aided training (AAT), pre-trained models, high-resolution input, real-time, ultra-fast computations
---
# Meituan YOLOv6
@@ -78,4 +79,4 @@ We would like to acknowledge the authors for their significant contributions in
}
```
-The original YOLOv6 paper can be found on [arXiv](https://arxiv.org/abs/2301.05586). The authors have made their work publicly available, and the codebase can be accessed on [GitHub](https://github.com/meituan/YOLOv6). We appreciate their efforts in advancing the field and making their work accessible to the broader community.
+The original YOLOv6 paper can be found on [arXiv](https://arxiv.org/abs/2301.05586). The authors have made their work publicly available, and the codebase can be accessed on [GitHub](https://github.com/meituan/YOLOv6). We appreciate their efforts in advancing the field and making their work accessible to the broader community.
\ No newline at end of file
diff --git a/docs/models/yolov7.md b/docs/models/yolov7.md
new file mode 100644
index 00000000000..d8b1ea62745
--- /dev/null
+++ b/docs/models/yolov7.md
@@ -0,0 +1,61 @@
+---
+comments: true
+description: Discover YOLOv7, a cutting-edge real-time object detector that surpasses competitors in speed and accuracy. Explore its unique trainable bag-of-freebies.
+keywords: object detection, real-time object detector, YOLOv7, MS COCO, computer vision, neural networks, AI, deep learning, deep neural networks, real-time, GPU, GitHub, arXiv
+---
+
+# YOLOv7: Trainable Bag-of-Freebies
+
+YOLOv7 is a state-of-the-art real-time object detector that surpasses all known object detectors in both speed and accuracy in the range from 5 FPS to 160 FPS. It has the highest accuracy (56.8% AP) among all known real-time object detectors with 30 FPS or higher on GPU V100. Moreover, YOLOv7 outperforms other object detectors such as YOLOR, YOLOX, Scaled-YOLOv4, YOLOv5, and many others in speed and accuracy. The model is trained on the MS COCO dataset from scratch without using any other datasets or pre-trained weights. Source code for YOLOv7 is available on GitHub.
+
+![YOLOv7 comparison with SOTA object detectors](https://github.com/ultralytics/ultralytics/assets/26833433/5e1e0420-8122-4c79-b8d0-2860aa79af92)
+**Comparison of state-of-the-art object detectors.** From the results in Table 2 of the paper, YOLOv7 has the best overall speed-accuracy trade-off. If we compare YOLOv7-tiny-SiLU with YOLOv5-N (r6.1), YOLOv7-tiny-SiLU is 127 fps faster and 10.7% more accurate on AP. In addition, YOLOv7 has 51.4% AP at a frame rate of 161 fps, while PPYOLOE-L with the same AP has only a 78 fps frame rate. In terms of parameter usage, YOLOv7 uses 41% fewer parameters than PPYOLOE-L. If we compare YOLOv7-X with 114 fps inference speed to YOLOv5-L (r6.1) with 99 fps inference speed, YOLOv7-X can improve AP by 3.9%. If YOLOv7-X is compared with YOLOv5-X (r6.1) of similar scale, the inference speed of YOLOv7-X is 31 fps faster. In addition, in terms of the number of parameters and amount of computation, YOLOv7-X reduces parameters by 22% and computation by 8% compared to YOLOv5-X (r6.1), while improving AP by 2.2% ([Source](https://arxiv.org/pdf/2207.02696.pdf)).
+
+## Overview
+
+Real-time object detection is an important component in many computer vision systems, including multi-object tracking, autonomous driving, robotics, and medical image analysis. In recent years, real-time object detection development has focused on designing efficient architectures and improving inference speed on various CPUs, GPUs, and neural processing units (NPUs). YOLOv7 supports both mobile GPUs and conventional GPU devices, from the edge to the cloud.
+
+Unlike traditional real-time object detectors that focus on architecture optimization, YOLOv7 introduces a focus on the optimization of the training process. This includes modules and optimization methods designed to improve the accuracy of object detection without increasing the inference cost, a concept known as the "trainable bag-of-freebies".
+
+## Key Features
+
+YOLOv7 introduces several key features:
+
+1. **Model Re-parameterization**: YOLOv7 proposes a planned re-parameterized model, which is a strategy applicable to layers in different networks with the concept of gradient propagation path.
+
+2. **Dynamic Label Assignment**: The training of the model with multiple output layers presents a new issue: "How to assign dynamic targets for the outputs of different branches?" To solve this problem, YOLOv7 introduces a new label assignment method called coarse-to-fine lead guided label assignment.
+
+3. **Extended and Compound Scaling**: YOLOv7 proposes "extend" and "compound scaling" methods for the real-time object detector that can effectively utilize parameters and computation.
+
+4. **Efficiency**: The methods proposed by YOLOv7 can effectively reduce the parameters of state-of-the-art real-time object detectors by about 40% and their computation by about 50%, while achieving faster inference speed and higher detection accuracy.
+
+## Usage Examples
+
+As of the time of writing, Ultralytics does not currently support YOLOv7 models. Therefore, any users interested in using YOLOv7 will need to refer directly to the YOLOv7 GitHub repository for installation and usage instructions.
+
+Here is a brief overview of the typical steps you might take to use YOLOv7:
+
+1. Visit the YOLOv7 GitHub repository: [https://github.com/WongKinYiu/yolov7](https://github.com/WongKinYiu/yolov7).
+
+2. Follow the instructions provided in the README file for installation. This typically involves cloning the repository, installing necessary dependencies, and setting up any necessary environment variables.
+
+3. Once installation is complete, you can train and use the model as per the usage instructions provided in the repository. This usually involves preparing your dataset, configuring the model parameters, training the model, and then using the trained model to perform object detection.
+
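+As a rough illustration of those steps, the sketch below assumes the scripts and flags documented in the YOLOv7 README (`detect.py` with `--weights`, `--conf`, `--img-size` and `--source`); treat it as a starting point and confirm the details against the repository before use.
+
+```bash
+# Clone the repository and install its Python dependencies
+git clone https://github.com/WongKinYiu/yolov7
+cd yolov7
+pip install -r requirements.txt
+
+# Download yolov7.pt from the releases page linked in the README, then run inference on a sample image
+python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg
+```
+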
+Please note that the specific steps may vary depending on your use case and the current state of the YOLOv7 repository. Therefore, it is strongly recommended to refer directly to the instructions provided in the YOLOv7 GitHub repository.
+
+We regret any inconvenience this may cause and will strive to update this document with usage examples for Ultralytics once support for YOLOv7 is implemented.
+
+## Citations and Acknowledgements
+
+We would like to acknowledge the YOLOv7 authors for their significant contributions in the field of real-time object detection:
+
+```bibtex
+@article{wang2022yolov7,
+ title={{YOLOv7}: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
+ author={Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark},
+ journal={arXiv preprint arXiv:2207.02696},
+ year={2022}
+}
+```
+
+The original YOLOv7 paper can be found on [arXiv](https://arxiv.org/pdf/2207.02696.pdf). The authors have made their work publicly available, and the codebase can be accessed on [GitHub](https://github.com/WongKinYiu/yolov7). We appreciate their efforts in advancing the field and making their work accessible to the broader community.
\ No newline at end of file
diff --git a/docs/models/yolov8.md b/docs/models/yolov8.md
index 6e3adb5078f..8c78d87e77f 100644
--- a/docs/models/yolov8.md
+++ b/docs/models/yolov8.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn about YOLOv8's pre-trained weights supporting detection, instance segmentation, pose, and classification tasks. Get performance details.
+keywords: YOLOv8, real-time object detection, object detection, deep learning, machine learning
---
# YOLOv8
diff --git a/docs/modes/benchmark.md b/docs/modes/benchmark.md
index a7600e055e6..c1c159e6bed 100644
--- a/docs/modes/benchmark.md
+++ b/docs/modes/benchmark.md
@@ -1,6 +1,7 @@
---
comments: true
description: Benchmark mode compares speed and accuracy of various YOLOv8 export formats like ONNX or OpenVINO. Optimize formats for speed or accuracy.
+keywords: YOLOv8, Benchmark Mode, Export Formats, ONNX, OpenVINO, TensorRT, Ultralytics Docs
---
diff --git a/docs/modes/export.md b/docs/modes/export.md
index 2352cf4910a..42e45272aae 100644
--- a/docs/modes/export.md
+++ b/docs/modes/export.md
@@ -1,6 +1,7 @@
---
comments: true
description: 'Export mode: Create a deployment-ready YOLOv8 model by converting it to various formats. Export to ONNX or OpenVINO for up to 3x CPU speedup.'
+keywords: ultralytics docs, YOLOv8, export YOLOv8, YOLOv8 model deployment, exporting YOLOv8, ONNX, OpenVINO, TensorRT, CoreML, TF SavedModel, PaddlePaddle, TorchScript, ONNX format, OpenVINO format, TensorRT format, CoreML format, TF SavedModel format, PaddlePaddle format
---
diff --git a/docs/modes/index.md b/docs/modes/index.md
index c9ae14aabd7..5a00afa5d39 100644
--- a/docs/modes/index.md
+++ b/docs/modes/index.md
@@ -1,6 +1,7 @@
---
comments: true
description: Use Ultralytics YOLOv8 Modes (Train, Val, Predict, Export, Track, Benchmark) to train, validate, predict, track, export or benchmark.
+keywords: yolov8, yolo, ultralytics, training, validation, prediction, export, tracking, benchmarking, real-time object detection, object tracking
---
# Ultralytics YOLOv8 Modes
diff --git a/docs/modes/predict.md b/docs/modes/predict.md
index 3deee7c7564..581d4221d52 100644
--- a/docs/modes/predict.md
+++ b/docs/modes/predict.md
@@ -1,6 +1,7 @@
---
comments: true
description: Get started with YOLOv8 Predict mode and input sources. Accepts various input sources such as images, videos, and directories.
+keywords: YOLOv8, predict mode, generator, streaming mode, input sources, video formats, arguments customization
---
@@ -300,4 +301,4 @@ Here's a Python script using OpenCV (cv2) and YOLOv8 to run inference on video f
# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
- ```
+ ```
\ No newline at end of file
diff --git a/docs/modes/track.md b/docs/modes/track.md
index c0d8adb5069..7b9a83a714b 100644
--- a/docs/modes/track.md
+++ b/docs/modes/track.md
@@ -1,6 +1,7 @@
---
comments: true
description: Explore YOLOv8n-based object tracking with Ultralytics' BoT-SORT and ByteTrack. Learn configuration, usage, and customization tips.
+keywords: object tracking, YOLO, trackers, BoT-SORT, ByteTrack
---
@@ -97,5 +98,4 @@ any configurations(expect the `tracker_type`) you need to.
```
Please refer to [ultralytics/tracker/cfg](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/tracker/cfg)
-page
-
+page
\ No newline at end of file
diff --git a/docs/modes/train.md b/docs/modes/train.md
index 1738975c1f2..882c0d1f1eb 100644
--- a/docs/modes/train.md
+++ b/docs/modes/train.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn how to train custom YOLOv8 models on various datasets, configure hyperparameters, and use Ultralytics' YOLO for seamless training.
+keywords: YOLOv8, train mode, train a custom YOLOv8 model, hyperparameters, train a model, Comet, ClearML, TensorBoard, logging, loggers
---
diff --git a/docs/modes/val.md b/docs/modes/val.md
index bc294ecbf3e..79fdf6f8e36 100644
--- a/docs/modes/val.md
+++ b/docs/modes/val.md
@@ -1,6 +1,7 @@
---
comments: true
description: Validate and improve YOLOv8n model accuracy on COCO128 and other datasets using hyperparameter & configuration tuning, in Val mode.
+keywords: Ultralytics, YOLO, YOLOv8, Val, Validation, Hyperparameters, Performance, Accuracy, Generalization, COCO, Export Formats, PyTorch
---
diff --git a/docs/quickstart.md b/docs/quickstart.md
index b1fe2afc543..b762e5ab461 100644
--- a/docs/quickstart.md
+++ b/docs/quickstart.md
@@ -1,6 +1,7 @@
---
comments: true
description: Install and use YOLOv8 via CLI or Python. Run single-line commands or integrate with Python projects for object detection, segmentation, and classification.
+keywords: YOLOv8, object detection, segmentation, classification, pip, git, CLI, Python
---
## Install
diff --git a/docs/reference/hub/auth.md b/docs/reference/hub/auth.md
index 6c19030704c..a8b7f9dbef7 100644
--- a/docs/reference/hub/auth.md
+++ b/docs/reference/hub/auth.md
@@ -1,8 +1,9 @@
---
description: Learn how to use Ultralytics hub authentication in your projects with examples and guidelines from the Auth page on Ultralytics Docs.
+keywords: Ultralytics, ultralytics hub, api keys, authentication, collab accounts, requests, hub management, monitoring
---
# Auth
---
:::ultralytics.hub.auth.Auth
-
+
\ No newline at end of file
diff --git a/docs/reference/hub/session.md b/docs/reference/hub/session.md
index 2d7033340e6..3b2115ca5af 100644
--- a/docs/reference/hub/session.md
+++ b/docs/reference/hub/session.md
@@ -1,8 +1,9 @@
---
description: Accelerate your AI development with the Ultralytics HUB Training Session. High-performance training of object detection models.
+keywords: YOLOv5, object detection, HUBTrainingSession, custom models, Ultralytics Docs
---
# HUBTrainingSession
---
:::ultralytics.hub.session.HUBTrainingSession
-
+
\ No newline at end of file
diff --git a/docs/reference/hub/utils.md b/docs/reference/hub/utils.md
index 5b710580cf2..bacefc84534 100644
--- a/docs/reference/hub/utils.md
+++ b/docs/reference/hub/utils.md
@@ -1,5 +1,6 @@
---
description: Explore Ultralytics events, including 'request_with_credentials' and 'smart_request', to improve your project's performance and efficiency.
+keywords: Ultralytics, Hub Utils, API Documentation, Python, requests_with_progress, Events, classes, usage, examples
---
# Events
@@ -20,4 +21,4 @@ description: Explore Ultralytics events, including 'request_with_credentials' an
# smart_request
---
:::ultralytics.hub.utils.smart_request
-
+
\ No newline at end of file
diff --git a/docs/reference/nn/autobackend.md b/docs/reference/nn/autobackend.md
index 2166e7c80a6..9feb71b032e 100644
--- a/docs/reference/nn/autobackend.md
+++ b/docs/reference/nn/autobackend.md
@@ -1,5 +1,6 @@
---
description: Ensure class names match filenames for easy imports. Use AutoBackend to automatically rename and refactor model files.
+keywords: AutoBackend, ultralytics, nn, autobackend, check class names, neural network
---
# AutoBackend
@@ -10,4 +11,4 @@ description: Ensure class names match filenames for easy imports. Use AutoBacken
# check_class_names
---
:::ultralytics.nn.autobackend.check_class_names
-
+
\ No newline at end of file
diff --git a/docs/reference/nn/autoshape.md b/docs/reference/nn/autoshape.md
index 2c5745e33a8..e17976ef8e8 100644
--- a/docs/reference/nn/autoshape.md
+++ b/docs/reference/nn/autoshape.md
@@ -1,5 +1,6 @@
---
description: Detect 80+ object categories with bounding box coordinates and class probabilities using AutoShape in Ultralytics YOLO. Explore Detections now.
+keywords: Ultralytics, YOLO, docs, autoshape, detections, object detection, customized shapes, bounding boxes, computer vision
---
# AutoShape
@@ -10,4 +11,4 @@ description: Detect 80+ object categories with bounding box coordinates and clas
# Detections
---
:::ultralytics.nn.autoshape.Detections
-
+
\ No newline at end of file
diff --git a/docs/reference/nn/modules/block.md b/docs/reference/nn/modules/block.md
index 58a23ed63f5..687e3080340 100644
--- a/docs/reference/nn/modules/block.md
+++ b/docs/reference/nn/modules/block.md
@@ -1,5 +1,6 @@
---
description: Explore ultralytics.nn.modules.block to build powerful YOLO object detection models. Master DFL, HGStem, SPP, CSP components and more.
+keywords: Ultralytics, NN Modules, Blocks, DFL, HGStem, SPP, C1, C2f, C3x, C3TR, GhostBottleneck, BottleneckCSP, Computer Vision
---
# DFL
@@ -85,4 +86,4 @@ description: Explore ultralytics.nn.modules.block to build powerful YOLO object
# BottleneckCSP
---
:::ultralytics.nn.modules.block.BottleneckCSP
-
+
\ No newline at end of file
diff --git a/docs/reference/nn/modules/conv.md b/docs/reference/nn/modules/conv.md
index 7cfaf012cb3..60d0345f4b8 100644
--- a/docs/reference/nn/modules/conv.md
+++ b/docs/reference/nn/modules/conv.md
@@ -1,5 +1,6 @@
---
description: Explore convolutional neural network modules & techniques such as LightConv, DWConv, ConvTranspose, GhostConv, CBAM & autopad with Ultralytics Docs.
+keywords: Ultralytics, Convolutional Neural Network, Conv2, DWConv, ConvTranspose, GhostConv, ChannelAttention, CBAM, autopad
---
# Conv
@@ -70,4 +71,4 @@ description: Explore convolutional neural network modules & techniques such as L
# autopad
---
:::ultralytics.nn.modules.conv.autopad
-
+
\ No newline at end of file
diff --git a/docs/reference/nn/modules/head.md b/docs/reference/nn/modules/head.md
index 17488da73fd..b21124e21a2 100644
--- a/docs/reference/nn/modules/head.md
+++ b/docs/reference/nn/modules/head.md
@@ -1,5 +1,6 @@
---
description: 'Learn about Ultralytics YOLO modules: Segment, Classify, and RTDETRDecoder. Optimize object detection and classification in your project.'
+keywords: Ultralytics, YOLO, object detection, pose estimation, RTDETRDecoder, modules, classes, documentation
---
# Detect
@@ -25,4 +26,4 @@ description: 'Learn about Ultralytics YOLO modules: Segment, Classify, and RTDET
# RTDETRDecoder
---
:::ultralytics.nn.modules.head.RTDETRDecoder
-
+
\ No newline at end of file
diff --git a/docs/reference/nn/modules/transformer.md b/docs/reference/nn/modules/transformer.md
index 654917d8284..8d6429d73aa 100644
--- a/docs/reference/nn/modules/transformer.md
+++ b/docs/reference/nn/modules/transformer.md
@@ -1,5 +1,6 @@
---
description: Explore the Ultralytics nn modules pages on Transformer and MLP blocks, LayerNorm2d, and Deformable Transformer Decoder Layer.
+keywords: Ultralytics, NN Modules, TransformerEncoderLayer, TransformerLayer, MLPBlock, LayerNorm2d, DeformableTransformerDecoderLayer, examples, code snippets, tutorials
---
# TransformerEncoderLayer
@@ -50,4 +51,4 @@ description: Explore the Ultralytics nn modules pages on Transformer and MLP blo
# DeformableTransformerDecoder
---
:::ultralytics.nn.modules.transformer.DeformableTransformerDecoder
-
+
\ No newline at end of file
diff --git a/docs/reference/nn/modules/utils.md b/docs/reference/nn/modules/utils.md
index 877c52c01b7..f7aa43f0ded 100644
--- a/docs/reference/nn/modules/utils.md
+++ b/docs/reference/nn/modules/utils.md
@@ -1,5 +1,6 @@
---
description: 'Learn about Ultralytics NN modules: get_clones, linear_init_, and multi_scale_deformable_attn_pytorch. Code examples and usage tips.'
+keywords: Ultralytics, NN Utils, Docs, PyTorch, bias initialization, linear initialization, multi-scale deformable attention
---
# _get_clones
@@ -25,4 +26,4 @@ description: 'Learn about Ultralytics NN modules: get_clones, linear_init_, and
# multi_scale_deformable_attn_pytorch
---
:::ultralytics.nn.modules.utils.multi_scale_deformable_attn_pytorch
-
+
\ No newline at end of file
diff --git a/docs/reference/nn/tasks.md b/docs/reference/nn/tasks.md
index 502b82d7495..3258e4ff920 100644
--- a/docs/reference/nn/tasks.md
+++ b/docs/reference/nn/tasks.md
@@ -1,5 +1,6 @@
---
description: Learn how to work with Ultralytics YOLO Detection, Segmentation & Classification Models, load weights and parse models in PyTorch.
+keywords: neural network, deep learning, computer vision, object detection, image segmentation, image classification, model ensemble, PyTorch
---
# BaseModel
@@ -70,4 +71,4 @@ description: Learn how to work with Ultralytics YOLO Detection, Segmentation & C
# guess_model_task
---
:::ultralytics.nn.tasks.guess_model_task
-
+
\ No newline at end of file
diff --git a/docs/reference/tracker/track.md b/docs/reference/tracker/track.md
index 51c48c9f317..156ee0002ab 100644
--- a/docs/reference/tracker/track.md
+++ b/docs/reference/tracker/track.md
@@ -1,5 +1,6 @@
---
description: Learn how to register custom event-tracking and track predictions with Ultralytics YOLO via on_predict_start and register_tracker methods.
+keywords: Ultralytics YOLO, tracker registration, on_predict_start, object detection
---
# on_predict_start
@@ -15,4 +16,4 @@ description: Learn how to register custom event-tracking and track predictions w
# register_tracker
---
:::ultralytics.tracker.track.register_tracker
-
+
\ No newline at end of file
diff --git a/docs/reference/tracker/trackers/basetrack.md b/docs/reference/tracker/trackers/basetrack.md
index d21f29e1cce..9b767eca3ff 100644
--- a/docs/reference/tracker/trackers/basetrack.md
+++ b/docs/reference/tracker/trackers/basetrack.md
@@ -1,5 +1,6 @@
---
description: 'TrackState: A comprehensive guide to Ultralytics tracker''s BaseTrack for monitoring model performance. Improve your tracking capabilities now!'
+keywords: object detection, object tracking, Ultralytics YOLO, TrackState, workflow improvement
---
# TrackState
@@ -10,4 +11,4 @@ description: 'TrackState: A comprehensive guide to Ultralytics tracker''s BaseTr
# BaseTrack
---
:::ultralytics.tracker.trackers.basetrack.BaseTrack
-
+
\ No newline at end of file
diff --git a/docs/reference/tracker/trackers/bot_sort.md b/docs/reference/tracker/trackers/bot_sort.md
index b2d0f9bbcdf..f3f50132ed0 100644
--- a/docs/reference/tracker/trackers/bot_sort.md
+++ b/docs/reference/tracker/trackers/bot_sort.md
@@ -1,5 +1,6 @@
---
description: '"Optimize tracking with Ultralytics BOTrack. Easily sort and track bots with BOTSORT. Streamline data collection for improved performance."'
+keywords: BOTrack, Ultralytics YOLO Docs, features, usage
---
# BOTrack
@@ -10,4 +11,4 @@ description: '"Optimize tracking with Ultralytics BOTrack. Easily sort and track
# BOTSORT
---
:::ultralytics.tracker.trackers.bot_sort.BOTSORT
-
+
\ No newline at end of file
diff --git a/docs/reference/tracker/trackers/byte_tracker.md b/docs/reference/tracker/trackers/byte_tracker.md
index c96f85a62eb..cbaf90a910d 100644
--- a/docs/reference/tracker/trackers/byte_tracker.md
+++ b/docs/reference/tracker/trackers/byte_tracker.md
@@ -1,5 +1,6 @@
---
description: Learn how to track ByteAI model sizes and tips for model optimization with STrack, a byte tracking tool from Ultralytics.
+keywords: Byte Tracker, Ultralytics STrack, application monitoring, bytes sent, bytes received, code examples, setup instructions
---
# STrack
@@ -10,4 +11,4 @@ description: Learn how to track ByteAI model sizes and tips for model optimizati
# BYTETracker
---
:::ultralytics.tracker.trackers.byte_tracker.BYTETracker
-
+
\ No newline at end of file
diff --git a/docs/reference/tracker/utils/gmc.md b/docs/reference/tracker/utils/gmc.md
index b208a4a1b3c..461b7fd88b0 100644
--- a/docs/reference/tracker/utils/gmc.md
+++ b/docs/reference/tracker/utils/gmc.md
@@ -1,8 +1,9 @@
---
description: '"Track Google Marketing Campaigns in GMC with Ultralytics Tracker. Learn to set up and use GMC for detailed analytics. Get started now."'
+keywords: Ultralytics, YOLO, object detection, tracker, optimization, models, documentation
---
# GMC
---
:::ultralytics.tracker.utils.gmc.GMC
-
+
\ No newline at end of file
diff --git a/docs/reference/tracker/utils/kalman_filter.md b/docs/reference/tracker/utils/kalman_filter.md
index baa749c03d5..93217151cdb 100644
--- a/docs/reference/tracker/utils/kalman_filter.md
+++ b/docs/reference/tracker/utils/kalman_filter.md
@@ -1,5 +1,6 @@
---
description: Improve object tracking with KalmanFilterXYAH in Ultralytics YOLO - an efficient and accurate algorithm for state estimation.
+keywords: KalmanFilterXYAH, Ultralytics Docs, Kalman filter algorithm, object tracking, computer vision, YOLO
---
# KalmanFilterXYAH
@@ -10,4 +11,4 @@ description: Improve object tracking with KalmanFilterXYAH in Ultralytics YOLO -
# KalmanFilterXYWH
---
:::ultralytics.tracker.utils.kalman_filter.KalmanFilterXYWH
-
+
\ No newline at end of file
diff --git a/docs/reference/tracker/utils/matching.md b/docs/reference/tracker/utils/matching.md
index 5d8474bee0d..4f1725dbbe8 100644
--- a/docs/reference/tracker/utils/matching.md
+++ b/docs/reference/tracker/utils/matching.md
@@ -1,5 +1,6 @@
---
description: Learn how to match and fuse object detections for accurate target tracking using Ultralytics' YOLO merge_matches, iou_distance, and embedding_distance.
+keywords: Ultralytics, multi-object tracking, object tracking, detection, recognition, matching, indices, iou distance, gate cost matrix, fuse iou, bbox ious
---
# merge_matches
@@ -60,4 +61,4 @@ description: Learn how to match and fuse object detections for accurate target t
# bbox_ious
---
:::ultralytics.tracker.utils.matching.bbox_ious
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/data/annotator.md b/docs/reference/yolo/data/annotator.md
index 9e97df62775..25ca21aaf8b 100644
--- a/docs/reference/yolo/data/annotator.md
+++ b/docs/reference/yolo/data/annotator.md
@@ -1,8 +1,9 @@
---
description: Learn how to use auto_annotate in Ultralytics YOLO to generate annotations automatically for your dataset. Simplify object detection workflows.
+keywords: Ultralytics YOLO, Auto Annotator, AI, image annotation, object detection, labelling, tool
---
# auto_annotate
---
:::ultralytics.yolo.data.annotator.auto_annotate
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/data/augment.md b/docs/reference/yolo/data/augment.md
index 1cb38d06581..1bff44f60d8 100644
--- a/docs/reference/yolo/data/augment.md
+++ b/docs/reference/yolo/data/augment.md
@@ -1,5 +1,6 @@
---
description: Use Ultralytics YOLO Data Augmentation transforms with Base, MixUp, and Albumentations for object detection and classification.
+keywords: YOLO, data augmentation, transforms, BaseTransform, MixUp, RandomHSV, Albumentations, ToTensor, classify_transforms, classify_albumentations
---
# BaseTransform
@@ -95,4 +96,4 @@ description: Use Ultralytics YOLO Data Augmentation transforms with Base, MixUp,
# classify_albumentations
---
:::ultralytics.yolo.data.augment.classify_albumentations
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/data/base.md b/docs/reference/yolo/data/base.md
index 2f14c1dd9ba..eb6defed282 100644
--- a/docs/reference/yolo/data/base.md
+++ b/docs/reference/yolo/data/base.md
@@ -1,8 +1,9 @@
---
description: Learn about BaseDataset in Ultralytics YOLO, a flexible dataset class for object detection. Maximize your YOLO performance with custom datasets.
+keywords: BaseDataset, Ultralytics YOLO, object detection, real-world applications, documentation
---
# BaseDataset
---
:::ultralytics.yolo.data.base.BaseDataset
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/data/build.md b/docs/reference/yolo/data/build.md
index b84cefffac6..d1f05084486 100644
--- a/docs/reference/yolo/data/build.md
+++ b/docs/reference/yolo/data/build.md
@@ -1,5 +1,6 @@
---
description: Maximize YOLO performance with Ultralytics' InfiniteDataLoader, seed_worker, build_dataloader, and load_inference_source functions.
+keywords: Ultralytics, YOLO, object detection, data loading, build dataloader, load inference source
---
# InfiniteDataLoader
@@ -35,4 +36,4 @@ description: Maximize YOLO performance with Ultralytics' InfiniteDataLoader, see
# load_inference_source
---
:::ultralytics.yolo.data.build.load_inference_source
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/data/converter.md b/docs/reference/yolo/data/converter.md
index 34856337415..7f8833aa9ee 100644
--- a/docs/reference/yolo/data/converter.md
+++ b/docs/reference/yolo/data/converter.md
@@ -1,5 +1,6 @@
---
description: Convert COCO-91 to COCO-80 class, RLE to polygon, and merge multi-segment images with Ultralytics YOLO data converter. Improve your object detection.
+keywords: Ultralytics, YOLO, converter, COCO91, COCO80, rle2polygon, merge_multi_segment, annotations
---
# coco91_to_coco80_class
@@ -30,4 +31,4 @@ description: Convert COCO-91 to COCO-80 class, RLE to polygon, and merge multi-s
# delete_dsstore
---
:::ultralytics.yolo.data.converter.delete_dsstore
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/data/dataloaders/stream_loaders.md b/docs/reference/yolo/data/dataloaders/stream_loaders.md
index afecea7aac6..536aa8d455c 100644
--- a/docs/reference/yolo/data/dataloaders/stream_loaders.md
+++ b/docs/reference/yolo/data/dataloaders/stream_loaders.md
@@ -1,5 +1,6 @@
---
description: 'Ultralytics YOLO Docs: Learn about stream loaders for image and tensor data, as well as autocasting techniques. Check out SourceTypes and more.'
+keywords: Ultralytics YOLO, data loaders, stream load images, screenshots, tensor data, autocast list, youtube URL retriever
---
# SourceTypes
@@ -40,4 +41,4 @@ description: 'Ultralytics YOLO Docs: Learn about stream loaders for image and te
# get_best_youtube_url
---
:::ultralytics.yolo.data.dataloaders.stream_loaders.get_best_youtube_url
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/data/dataloaders/v5augmentations.md b/docs/reference/yolo/data/dataloaders/v5augmentations.md
index c75e57d5c4d..aa5f3f71005 100644
--- a/docs/reference/yolo/data/dataloaders/v5augmentations.md
+++ b/docs/reference/yolo/data/dataloaders/v5augmentations.md
@@ -1,5 +1,6 @@
---
description: Enhance image data with Albumentations CenterCrop, normalize, augment_hsv, replicate, random_perspective, cutout, & box_candidates.
+keywords: YOLO, object detection, data loaders, V5 augmentations, CenterCrop, normalize, random_perspective
---
# Albumentations
@@ -85,4 +86,4 @@ description: Enhance image data with Albumentations CenterCrop, normalize, augme
# classify_transforms
---
:::ultralytics.yolo.data.dataloaders.v5augmentations.classify_transforms
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/data/dataloaders/v5loader.md b/docs/reference/yolo/data/dataloaders/v5loader.md
index d8b3110445a..20161f8b7c9 100644
--- a/docs/reference/yolo/data/dataloaders/v5loader.md
+++ b/docs/reference/yolo/data/dataloaders/v5loader.md
@@ -1,5 +1,6 @@
---
description: Efficiently load images and labels to models using Ultralytics YOLO's InfiniteDataLoader, LoadScreenshots, and LoadStreams.
+keywords: YOLO, data loader, image classification, object detection, Ultralytics
---
# InfiniteDataLoader
@@ -90,4 +91,4 @@ description: Efficiently load images and labels to models using Ultralytics YOLO
# create_classification_dataloader
---
:::ultralytics.yolo.data.dataloaders.v5loader.create_classification_dataloader
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/data/dataset.md b/docs/reference/yolo/data/dataset.md
index c0d181e3563..f5bd9a10116 100644
--- a/docs/reference/yolo/data/dataset.md
+++ b/docs/reference/yolo/data/dataset.md
@@ -1,5 +1,6 @@
---
description: Create custom YOLOv5 datasets with Ultralytics YOLODataset and SemanticDataset. Streamline your object detection and segmentation projects.
+keywords: YOLODataset, SemanticDataset, Ultralytics YOLO Docs, Object Detection, Segmentation
---
# YOLODataset
@@ -15,4 +16,4 @@ description: Create custom YOLOv5 datasets with Ultralytics YOLODataset and Sema
# SemanticDataset
---
:::ultralytics.yolo.data.dataset.SemanticDataset
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/data/dataset_wrappers.md b/docs/reference/yolo/data/dataset_wrappers.md
index 04e2997b99c..49a24af652b 100644
--- a/docs/reference/yolo/data/dataset_wrappers.md
+++ b/docs/reference/yolo/data/dataset_wrappers.md
@@ -1,8 +1,9 @@
---
description: Create a custom dataset of mixed and oriented rectangular objects with Ultralytics YOLO's MixAndRectDataset.
+keywords: Ultralytics YOLO, MixAndRectDataset, dataset wrapper, image-level annotations, object-level annotations, rectangular object detection
---
# MixAndRectDataset
---
:::ultralytics.yolo.data.dataset_wrappers.MixAndRectDataset
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/data/utils.md b/docs/reference/yolo/data/utils.md
index 3a0e9a4acf7..19bd739a5c2 100644
--- a/docs/reference/yolo/data/utils.md
+++ b/docs/reference/yolo/data/utils.md
@@ -1,5 +1,6 @@
---
description: Efficiently handle data in YOLO with Ultralytics. Utilize HUBDatasetStats and customize dataset with these data utility functions.
+keywords: YOLOv4, Object Detection, Computer Vision, Deep Learning, Convolutional Neural Network, CNN, Ultralytics Docs
---
# HUBDatasetStats
@@ -65,4 +66,4 @@ description: Efficiently handle data in YOLO with Ultralytics. Utilize HUBDatase
# zip_directory
---
:::ultralytics.yolo.data.utils.zip_directory
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/engine/exporter.md b/docs/reference/yolo/engine/exporter.md
index 4b19204243e..aef271b5c65 100644
--- a/docs/reference/yolo/engine/exporter.md
+++ b/docs/reference/yolo/engine/exporter.md
@@ -1,5 +1,6 @@
---
description: Learn how to export your YOLO model in various formats using Ultralytics' exporter package - iOS, GDC, and more.
+keywords: Ultralytics, YOLO, exporter, iOS detect model, gd_outputs, export
---
# Exporter
@@ -30,4 +31,4 @@ description: Learn how to export your YOLO model in various formats using Ultral
# export
---
:::ultralytics.yolo.engine.exporter.export
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/engine/model.md b/docs/reference/yolo/engine/model.md
index d9b90d766c7..be36339923c 100644
--- a/docs/reference/yolo/engine/model.md
+++ b/docs/reference/yolo/engine/model.md
@@ -1,8 +1,9 @@
---
description: Discover the YOLO model of Ultralytics engine to simplify your object detection tasks with state-of-the-art models.
+keywords: YOLO, object detection, model, architecture, usage, customization, Ultralytics Docs
---
# YOLO
---
:::ultralytics.yolo.engine.model.YOLO
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/engine/predictor.md b/docs/reference/yolo/engine/predictor.md
index 52540e738c2..ec17842f51c 100644
--- a/docs/reference/yolo/engine/predictor.md
+++ b/docs/reference/yolo/engine/predictor.md
@@ -1,8 +1,9 @@
---
description: '"The BasePredictor class in Ultralytics YOLO Engine predicts object detection in images and videos. Learn to implement YOLO with ease."'
+keywords: Ultralytics, YOLO, BasePredictor, Object Detection, Computer Vision, Fast Model, Insights
---
# BasePredictor
---
:::ultralytics.yolo.engine.predictor.BasePredictor
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/engine/results.md b/docs/reference/yolo/engine/results.md
index e64e603563f..504864cb147 100644
--- a/docs/reference/yolo/engine/results.md
+++ b/docs/reference/yolo/engine/results.md
@@ -1,5 +1,6 @@
---
description: Learn about BaseTensor & Boxes in Ultralytics YOLO Engine. Check out Ultralytics Docs for quality tutorials and resources on object detection.
+keywords: YOLO, Engine, Results, Masks, Probs, Ultralytics
---
# BaseTensor
@@ -21,3 +22,13 @@ description: Learn about BaseTensor & Boxes in Ultralytics YOLO Engine. Check ou
---
:::ultralytics.yolo.engine.results.Masks
+
+# Keypoints
+---
+:::ultralytics.yolo.engine.results.Keypoints
+
+
+# Probs
+---
+:::ultralytics.yolo.engine.results.Probs
+
\ No newline at end of file
diff --git a/docs/reference/yolo/engine/trainer.md b/docs/reference/yolo/engine/trainer.md
index 1892bbd88ca..fc51c24bc83 100644
--- a/docs/reference/yolo/engine/trainer.md
+++ b/docs/reference/yolo/engine/trainer.md
@@ -1,13 +1,9 @@
---
description: Train faster with mixed precision. Learn how to use BaseTrainer with Advanced Mixed Precision to optimize YOLOv3 and YOLOv4 models.
+keywords: Ultralytics YOLO, BaseTrainer, object detection models, training guide
---
# BaseTrainer
---
:::ultralytics.yolo.engine.trainer.BaseTrainer
-
-
-# check_amp
----
-:::ultralytics.yolo.engine.trainer.check_amp
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/engine/validator.md b/docs/reference/yolo/engine/validator.md
index e499fa78b48..d99b062559f 100644
--- a/docs/reference/yolo/engine/validator.md
+++ b/docs/reference/yolo/engine/validator.md
@@ -1,8 +1,9 @@
---
description: Ensure YOLOv5 models meet constraints and standards with the BaseValidator class. Learn how to use it here.
+keywords: Ultralytics, YOLO, BaseValidator, models, validation, object detection
---
# BaseValidator
---
:::ultralytics.yolo.engine.validator.BaseValidator
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/nas/model.md b/docs/reference/yolo/nas/model.md
new file mode 100644
index 00000000000..c5fe258469b
--- /dev/null
+++ b/docs/reference/yolo/nas/model.md
@@ -0,0 +1,9 @@
+---
+description: Learn about the Neural Architecture Search (NAS) feature available in Ultralytics YOLO. Find out how NAS can improve object detection models and increase accuracy. Get started today!
+keywords: Ultralytics YOLO, object detection, NAS, Neural Architecture Search, model optimization, accuracy improvement
+---
+
+# NAS
+---
+:::ultralytics.yolo.nas.model.NAS
+
\ No newline at end of file
diff --git a/docs/reference/yolo/nas/predict.md b/docs/reference/yolo/nas/predict.md
new file mode 100644
index 00000000000..0b8a62d1fbe
--- /dev/null
+++ b/docs/reference/yolo/nas/predict.md
@@ -0,0 +1,9 @@
+---
+description: Learn how to use NASPredictor in Ultralytics YOLO for deploying efficient CNN models with search algorithms in neural architecture search.
+keywords: Ultralytics YOLO, NASPredictor, neural architecture search, efficient CNN models, search algorithms
+---
+
+# NASPredictor
+---
+:::ultralytics.yolo.nas.predict.NASPredictor
+
\ No newline at end of file
diff --git a/docs/reference/yolo/nas/val.md b/docs/reference/yolo/nas/val.md
new file mode 100644
index 00000000000..6f849a471fb
--- /dev/null
+++ b/docs/reference/yolo/nas/val.md
@@ -0,0 +1,9 @@
+---
+description: Learn about NASValidator in the Ultralytics YOLO Docs. Properly validate YOLO neural architecture search results for optimal performance.
+keywords: NASValidator, YOLO, neural architecture search, validation, performance, Ultralytics
+---
+
+# NASValidator
+---
+:::ultralytics.yolo.nas.val.NASValidator
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/autobatch.md b/docs/reference/yolo/utils/autobatch.md
index a2bf4a3c690..dc7c0a8b8c1 100644
--- a/docs/reference/yolo/utils/autobatch.md
+++ b/docs/reference/yolo/utils/autobatch.md
@@ -1,5 +1,6 @@
---
description: Dynamically adjusts input size to optimize GPU memory usage during training. Learn how to use check_train_batch_size with Ultralytics YOLO.
+keywords: YOLOv5, batch size, training, Ultralytics Autobatch, object detection, model performance
---
# check_train_batch_size
@@ -10,4 +11,4 @@ description: Dynamically adjusts input size to optimize GPU memory usage during
# autobatch
---
:::ultralytics.yolo.utils.autobatch.autobatch
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/benchmarks.md b/docs/reference/yolo/utils/benchmarks.md
index e3abcade64a..39112eac232 100644
--- a/docs/reference/yolo/utils/benchmarks.md
+++ b/docs/reference/yolo/utils/benchmarks.md
@@ -1,5 +1,6 @@
---
description: Improve your YOLO's performance and measure its speed. Benchmark utility for YOLOv5.
+keywords: Ultralytics YOLO, ProfileModels, benchmark, model inference, detection
---
# ProfileModels
@@ -10,4 +11,4 @@ description: Improve your YOLO's performance and measure its speed. Benchmark ut
# benchmark
---
:::ultralytics.yolo.utils.benchmarks.benchmark
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/callbacks/base.md b/docs/reference/yolo/utils/callbacks/base.md
index a448dac8c66..d82f0fd287e 100644
--- a/docs/reference/yolo/utils/callbacks/base.md
+++ b/docs/reference/yolo/utils/callbacks/base.md
@@ -1,5 +1,6 @@
---
description: Learn about YOLO's callback functions from on_train_start to add_integration_callbacks. See how these callbacks modify and save models.
+keywords: YOLO, Ultralytics, callbacks, object detection, training, inference
---
# on_pretrain_routine_start
@@ -135,4 +136,4 @@ description: Learn about YOLO's callback functions from on_train_start to add_in
# add_integration_callbacks
---
:::ultralytics.yolo.utils.callbacks.base.add_integration_callbacks
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/callbacks/clearml.md b/docs/reference/yolo/utils/callbacks/clearml.md
index 8b7bbfce0ea..7dc01a38dfb 100644
--- a/docs/reference/yolo/utils/callbacks/clearml.md
+++ b/docs/reference/yolo/utils/callbacks/clearml.md
@@ -1,5 +1,6 @@
---
description: Improve your YOLOv5 model training with callbacks from ClearML. Learn about log debug samples, pre-training routines, validation and more.
+keywords: Ultralytics YOLO, callbacks, log plots, epoch monitoring, training end events
---
# _log_debug_samples
@@ -35,4 +36,4 @@ description: Improve your YOLOv5 model training with callbacks from ClearML. Lea
# on_train_end
---
:::ultralytics.yolo.utils.callbacks.clearml.on_train_end
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/callbacks/comet.md b/docs/reference/yolo/utils/callbacks/comet.md
index 9e81dfc87b0..2e1bffe8344 100644
--- a/docs/reference/yolo/utils/callbacks/comet.md
+++ b/docs/reference/yolo/utils/callbacks/comet.md
@@ -1,5 +1,6 @@
---
description: Learn about YOLO callbacks using the Comet.ml platform, enhancing object detection training and testing with custom logging and visualizations.
+keywords: Ultralytics, YOLO, callbacks, Comet ML, log images, log predictions, log plots, fetch metadata, fetch annotations, create experiment data, format experiment data
---
# _get_comet_mode
@@ -120,4 +121,4 @@ description: Learn about YOLO callbacks using the Comet.ml platform, enhancing o
# on_train_end
---
:::ultralytics.yolo.utils.callbacks.comet.on_train_end
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/callbacks/dvc.md b/docs/reference/yolo/utils/callbacks/dvc.md
new file mode 100644
index 00000000000..1ca463697dd
--- /dev/null
+++ b/docs/reference/yolo/utils/callbacks/dvc.md
@@ -0,0 +1,54 @@
+---
+description: Explore Ultralytics YOLO Utils DVC Callbacks such as logging images, plots, confusion matrices, and training progress.
+keywords: Ultralytics, YOLO, Utils, DVC, Callbacks, images, plots, confusion matrices, training progress
+---
+
+# _logger_disabled
+---
+:::ultralytics.yolo.utils.callbacks.dvc._logger_disabled
+
+
+# _log_images
+---
+:::ultralytics.yolo.utils.callbacks.dvc._log_images
+
+
+# _log_plots
+---
+:::ultralytics.yolo.utils.callbacks.dvc._log_plots
+
+
+# _log_confusion_matrix
+---
+:::ultralytics.yolo.utils.callbacks.dvc._log_confusion_matrix
+
+
+# on_pretrain_routine_start
+---
+:::ultralytics.yolo.utils.callbacks.dvc.on_pretrain_routine_start
+
+
+# on_pretrain_routine_end
+---
+:::ultralytics.yolo.utils.callbacks.dvc.on_pretrain_routine_end
+
+
+# on_train_start
+---
+:::ultralytics.yolo.utils.callbacks.dvc.on_train_start
+
+
+# on_train_epoch_start
+---
+:::ultralytics.yolo.utils.callbacks.dvc.on_train_epoch_start
+
+
+# on_fit_epoch_end
+---
+:::ultralytics.yolo.utils.callbacks.dvc.on_fit_epoch_end
+
+
+# on_train_end
+---
+:::ultralytics.yolo.utils.callbacks.dvc.on_train_end
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/callbacks/hub.md b/docs/reference/yolo/utils/callbacks/hub.md
index aa751e3909f..7337c86c527 100644
--- a/docs/reference/yolo/utils/callbacks/hub.md
+++ b/docs/reference/yolo/utils/callbacks/hub.md
@@ -1,5 +1,6 @@
---
description: Improve YOLOv5 model training with Ultralytics' on-train callbacks. Boost performance on-pretrain-routine-end, model-save, train/predict start.
+keywords: Ultralytics, YOLO, callbacks, on_pretrain_routine_end, on_fit_epoch_end, on_train_start, on_val_start, on_predict_start, on_export_start
---
# on_pretrain_routine_end
@@ -40,4 +41,4 @@ description: Improve YOLOv5 model training with Ultralytics' on-train callbacks.
# on_export_start
---
:::ultralytics.yolo.utils.callbacks.hub.on_export_start
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/callbacks/mlflow.md b/docs/reference/yolo/utils/callbacks/mlflow.md
index 8c0d717c91a..b6708904981 100644
--- a/docs/reference/yolo/utils/callbacks/mlflow.md
+++ b/docs/reference/yolo/utils/callbacks/mlflow.md
@@ -1,5 +1,6 @@
---
description: Track model performance and metrics with MLflow in YOLOv5. Use callbacks like on_pretrain_routine_end or on_train_end to log information.
+keywords: Ultralytics, YOLO, Utils, MLflow, callbacks, on_pretrain_routine_end, on_train_end, Tracking, Model Management, training
---
# on_pretrain_routine_end
@@ -15,4 +16,4 @@ description: Track model performance and metrics with MLflow in YOLOv5. Use call
# on_train_end
---
:::ultralytics.yolo.utils.callbacks.mlflow.on_train_end
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/callbacks/neptune.md b/docs/reference/yolo/utils/callbacks/neptune.md
index 2b4597875e5..195c9beef9c 100644
--- a/docs/reference/yolo/utils/callbacks/neptune.md
+++ b/docs/reference/yolo/utils/callbacks/neptune.md
@@ -1,5 +1,6 @@
---
description: Improve YOLOv5 training with Neptune, a powerful logging tool. Track metrics like images, plots, and epochs for better model performance.
+keywords: Ultralytics, YOLO, Neptune, Callbacks, log scalars, log images, log plots, training, validation
---
# _log_scalars
@@ -40,4 +41,4 @@ description: Improve YOLOv5 training with Neptune, a powerful logging tool. Trac
# on_train_end
---
:::ultralytics.yolo.utils.callbacks.neptune.on_train_end
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/callbacks/raytune.md b/docs/reference/yolo/utils/callbacks/raytune.md
index d20f4eada7e..fa1267d356d 100644
--- a/docs/reference/yolo/utils/callbacks/raytune.md
+++ b/docs/reference/yolo/utils/callbacks/raytune.md
@@ -1,8 +1,9 @@
---
description: '"Improve YOLO model performance with on_fit_epoch_end callback. Learn to integrate with Ray Tune for hyperparameter tuning. Ultralytics YOLO docs."'
+keywords: on_fit_epoch_end, Ultralytics YOLO, callback function, training, model tuning
---
# on_fit_epoch_end
---
:::ultralytics.yolo.utils.callbacks.raytune.on_fit_epoch_end
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/callbacks/tensorboard.md b/docs/reference/yolo/utils/callbacks/tensorboard.md
index 95291dc6ddf..bf0f4c42cba 100644
--- a/docs/reference/yolo/utils/callbacks/tensorboard.md
+++ b/docs/reference/yolo/utils/callbacks/tensorboard.md
@@ -1,5 +1,6 @@
---
description: Learn how to monitor the training process with Tensorboard using Ultralytics YOLO's "_log_scalars" and "on_batch_end" methods.
+keywords: TensorBoard callbacks, YOLO training, ultralytics YOLO
---
# _log_scalars
@@ -20,4 +21,4 @@ description: Learn how to monitor the training process with Tensorboard using Ul
# on_fit_epoch_end
---
:::ultralytics.yolo.utils.callbacks.tensorboard.on_fit_epoch_end
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/callbacks/wb.md b/docs/reference/yolo/utils/callbacks/wb.md
index 48a6c812c5c..03045e2953b 100644
--- a/docs/reference/yolo/utils/callbacks/wb.md
+++ b/docs/reference/yolo/utils/callbacks/wb.md
@@ -1,7 +1,13 @@
---
description: Learn how to use Ultralytics YOLO's built-in callbacks `on_pretrain_routine_start` and `on_train_epoch_end` for improved training performance.
+keywords: Ultralytics, YOLO, callbacks, weights, biases, training
---
+# _log_plots
+---
+:::ultralytics.yolo.utils.callbacks.wb._log_plots
+
+
# on_pretrain_routine_start
---
:::ultralytics.yolo.utils.callbacks.wb.on_pretrain_routine_start
@@ -20,4 +26,4 @@ description: Learn how to use Ultralytics YOLO's built-in callbacks `on_pretrain
# on_train_end
---
:::ultralytics.yolo.utils.callbacks.wb.on_train_end
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/checks.md b/docs/reference/yolo/utils/checks.md
index 82b661b8256..4995371a5bc 100644
--- a/docs/reference/yolo/utils/checks.md
+++ b/docs/reference/yolo/utils/checks.md
@@ -1,5 +1,6 @@
---
description: 'Check functions for YOLO utils: image size, version, font, requirements, filename suffix, YAML file, YOLO, and Git version.'
+keywords: YOLO, Ultralytics, Utils, Checks, image sizing, version updates, font compatibility, Python requirements, file suffixes, YAML syntax, image showing, AMP
---
# is_ascii
@@ -72,6 +73,11 @@ description: 'Check functions for YOLO utils: image size, version, font, require
:::ultralytics.yolo.utils.checks.check_yolo
+# check_amp
+---
+:::ultralytics.yolo.utils.checks.check_amp
+
+
# git_describe
---
:::ultralytics.yolo.utils.checks.git_describe
@@ -80,4 +86,4 @@ description: 'Check functions for YOLO utils: image size, version, font, require
# print_args
---
:::ultralytics.yolo.utils.checks.print_args
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/dist.md b/docs/reference/yolo/utils/dist.md
index ef70e99c8e6..c5505446eea 100644
--- a/docs/reference/yolo/utils/dist.md
+++ b/docs/reference/yolo/utils/dist.md
@@ -1,5 +1,6 @@
---
description: Learn how to find free network port and generate DDP (Distributed Data Parallel) command in Ultralytics YOLO with easy examples.
+keywords: ultralytics, YOLO, utils, dist, distributed deep learning, DDP file, DDP cleanup
---
# find_free_network_port
@@ -20,4 +21,4 @@ description: Learn how to find free network port and generate DDP (Distributed D
# ddp_cleanup
---
:::ultralytics.yolo.utils.dist.ddp_cleanup
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/downloads.md b/docs/reference/yolo/utils/downloads.md
index 76580f12ca4..5206e02c9a6 100644
--- a/docs/reference/yolo/utils/downloads.md
+++ b/docs/reference/yolo/utils/downloads.md
@@ -1,5 +1,6 @@
---
description: Download and unzip YOLO pretrained models. Ultralytics YOLO docs utils.downloads.unzip_file, checks disk space, downloads and attempts assets.
+keywords: Ultralytics YOLO, downloads, trained models, datasets, weights, deep learning, computer vision
---
# is_url
@@ -30,4 +31,4 @@ description: Download and unzip YOLO pretrained models. Ultralytics YOLO docs ut
# download
---
:::ultralytics.yolo.utils.downloads.download
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/errors.md b/docs/reference/yolo/utils/errors.md
index fced2117fbd..a498db258d2 100644
--- a/docs/reference/yolo/utils/errors.md
+++ b/docs/reference/yolo/utils/errors.md
@@ -1,8 +1,9 @@
---
description: Learn about HUBModelError in Ultralytics YOLO Docs. Resolve the error and get the most out of your YOLO model.
+keywords: HUBModelError, Ultralytics YOLO, YOLO Documentation, Object detection errors, YOLO Errors, HUBModelError Solutions
---
# HUBModelError
---
:::ultralytics.yolo.utils.errors.HUBModelError
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/files.md b/docs/reference/yolo/utils/files.md
index 6ba88554bdd..4ca39c721e8 100644
--- a/docs/reference/yolo/utils/files.md
+++ b/docs/reference/yolo/utils/files.md
@@ -1,5 +1,6 @@
---
description: 'Learn about Ultralytics YOLO files and directory utilities: WorkingDirectory, file_age, file_size, and make_dirs.'
+keywords: YOLO, object detection, file utils, file age, file size, working directory, make directories, Ultralytics Docs
---
# WorkingDirectory
@@ -35,4 +36,4 @@ description: 'Learn about Ultralytics YOLO files and directory utilities: Workin
# make_dirs
---
:::ultralytics.yolo.utils.files.make_dirs
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/instance.md b/docs/reference/yolo/utils/instance.md
index 455669e8d8d..1b32a808827 100644
--- a/docs/reference/yolo/utils/instance.md
+++ b/docs/reference/yolo/utils/instance.md
@@ -1,5 +1,6 @@
---
description: Learn about Bounding Boxes (Bboxes) and _ntuple in Ultralytics YOLO for object detection. Improve accuracy and speed with these powerful tools.
+keywords: Ultralytics, YOLO, Bboxes, _ntuple, object detection, instance segmentation
---
# Bboxes
@@ -15,4 +16,4 @@ description: Learn about Bounding Boxes (Bboxes) and _ntuple in Ultralytics YOLO
# _ntuple
---
:::ultralytics.yolo.utils.instance._ntuple
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/loss.md b/docs/reference/yolo/utils/loss.md
index b01e3efd433..ad4aa688210 100644
--- a/docs/reference/yolo/utils/loss.md
+++ b/docs/reference/yolo/utils/loss.md
@@ -1,5 +1,6 @@
---
description: Learn about Varifocal Loss and Keypoint Loss in Ultralytics YOLO for advanced bounding box and pose estimation. Visit our docs for more.
+keywords: Ultralytics, YOLO, loss functions, object detection, keypoint detection, segmentation, classification
---
# VarifocalLoss
@@ -35,4 +36,4 @@ description: Learn about Varifocal Loss and Keypoint Loss in Ultralytics YOLO fo
# v8ClassificationLoss
---
:::ultralytics.yolo.utils.loss.v8ClassificationLoss
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/metrics.md b/docs/reference/yolo/utils/metrics.md
index 10d4728ca2d..4cb1158f121 100644
--- a/docs/reference/yolo/utils/metrics.md
+++ b/docs/reference/yolo/utils/metrics.md
@@ -1,5 +1,6 @@
---
description: Explore Ultralytics YOLO's FocalLoss, DetMetrics, PoseMetrics, ClassifyMetrics, and more with Ultralytics Metrics documentation.
+keywords: YOLOv5, metrics, losses, confusion matrix, detection metrics, pose metrics, classification metrics, intersection over area, intersection over union, keypoint intersection over union, average precision, per class average precision, Ultralytics Docs
---
# FocalLoss
@@ -95,4 +96,4 @@ description: Explore Ultralytics YOLO's FocalLoss, DetMetrics, PoseMetrics, Clas
# ap_per_class
---
:::ultralytics.yolo.utils.metrics.ap_per_class
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/ops.md b/docs/reference/yolo/utils/ops.md
index b55c6b88499..0a8aa35650b 100644
--- a/docs/reference/yolo/utils/ops.md
+++ b/docs/reference/yolo/utils/ops.md
@@ -1,5 +1,6 @@
---
description: Learn about various utility functions in Ultralytics YOLO, including x, y, width, height conversions, non-max suppression, and more.
+keywords: Ultralytics, YOLO, Utils Ops, Functions, coco80_to_coco91_class, scale_boxes, non_max_suppression, clip_coords, xyxy2xywh, xywhn2xyxy, xyn2xy, xyxy2ltwh, ltwh2xyxy, resample_segments, process_mask_upsample, process_mask_native, masks2segments, clean_str
---
# Profile
@@ -135,4 +136,4 @@ description: Learn about various utility functions in Ultralytics YOLO, includin
# clean_str
---
:::ultralytics.yolo.utils.ops.clean_str
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/plotting.md b/docs/reference/yolo/utils/plotting.md
index 801032ed1bd..f402f4814dc 100644
--- a/docs/reference/yolo/utils/plotting.md
+++ b/docs/reference/yolo/utils/plotting.md
@@ -1,5 +1,6 @@
---
description: 'Discover the power of YOLO''s plotting functions: Colors, Labels and Images. Code examples to output targets and visualize features. Check it now.'
+keywords: YOLO, object detection, plotting, visualization, annotator, save one box, plot results, feature visualization, Ultralytics
---
# Colors
@@ -40,4 +41,4 @@ description: 'Discover the power of YOLO''s plotting functions: Colors, Labels a
# feature_visualization
---
:::ultralytics.yolo.utils.plotting.feature_visualization
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/tal.md b/docs/reference/yolo/utils/tal.md
index b6835073209..1f309322012 100644
--- a/docs/reference/yolo/utils/tal.md
+++ b/docs/reference/yolo/utils/tal.md
@@ -1,5 +1,6 @@
---
description: Improve your YOLO models with Ultralytics' TaskAlignedAssigner, select_highest_overlaps, and dist2bbox utilities. Streamline your workflow today.
+keywords: Ultralytics, YOLO, select_candidates_in_gts, make_anchors, bbox2dist, object detection, tracking
---
# TaskAlignedAssigner
@@ -30,4 +31,4 @@ description: Improve your YOLO models with Ultralytics' TaskAlignedAssigner, sel
# bbox2dist
---
:::ultralytics.yolo.utils.tal.bbox2dist
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/utils/torch_utils.md b/docs/reference/yolo/utils/torch_utils.md
index f8fe445a72a..dbdb78679e4 100644
--- a/docs/reference/yolo/utils/torch_utils.md
+++ b/docs/reference/yolo/utils/torch_utils.md
@@ -1,5 +1,6 @@
---
description: Optimize your PyTorch models with Ultralytics YOLO's torch_utils functions such as ModelEMA, select_device, and is_parallel.
+keywords: Ultralytics YOLO, Torch, Utils, Pytorch, Object Detection
---
# ModelEMA
@@ -130,4 +131,4 @@ description: Optimize your PyTorch models with Ultralytics YOLO's torch_utils fu
# profile
---
:::ultralytics.yolo.utils.torch_utils.profile
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/v8/classify/predict.md b/docs/reference/yolo/v8/classify/predict.md
index d8c637240f4..a99f833eab0 100644
--- a/docs/reference/yolo/v8/classify/predict.md
+++ b/docs/reference/yolo/v8/classify/predict.md
@@ -1,5 +1,6 @@
---
description: Learn how to use ClassificationPredictor in Ultralytics YOLOv8 for object classification tasks in a simple and efficient way.
+keywords: Ultralytics, YOLO, v8, Classify Predictor, object detection, classification, computer vision
---
# ClassificationPredictor
@@ -10,4 +11,4 @@ description: Learn how to use ClassificationPredictor in Ultralytics YOLOv8 for
# predict
---
:::ultralytics.yolo.v8.classify.predict.predict
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/v8/classify/train.md b/docs/reference/yolo/v8/classify/train.md
index 33a3967e7e0..076b277493f 100644
--- a/docs/reference/yolo/v8/classify/train.md
+++ b/docs/reference/yolo/v8/classify/train.md
@@ -1,5 +1,6 @@
---
description: Train a custom image classification model using Ultralytics YOLOv8 with ClassificationTrainer. Boost accuracy and efficiency today.
+keywords: Ultralytics, YOLOv8, object detection, classification, training, API
---
# ClassificationTrainer
@@ -10,4 +11,4 @@ description: Train a custom image classification model using Ultralytics YOLOv8
# train
---
:::ultralytics.yolo.v8.classify.train.train
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/v8/classify/val.md b/docs/reference/yolo/v8/classify/val.md
index 88157b2dc7a..fda08e21e39 100644
--- a/docs/reference/yolo/v8/classify/val.md
+++ b/docs/reference/yolo/v8/classify/val.md
@@ -1,5 +1,6 @@
---
description: Ensure model classification accuracy with Ultralytics YOLO's ClassificationValidator. Validate and improve your model with ease.
+keywords: ClassificationValidator, Ultralytics YOLO, Validation, Data Science, Deep Learning
---
# ClassificationValidator
@@ -10,4 +11,4 @@ description: Ensure model classification accuracy with Ultralytics YOLO's Classi
# val
---
:::ultralytics.yolo.v8.classify.val.val
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/v8/detect/predict.md b/docs/reference/yolo/v8/detect/predict.md
index f51fa1351aa..de49aa0f81d 100644
--- a/docs/reference/yolo/v8/detect/predict.md
+++ b/docs/reference/yolo/v8/detect/predict.md
@@ -1,5 +1,6 @@
---
description: Detect and predict objects in images and videos using the Ultralytics YOLO v8 model with DetectionPredictor.
+keywords: detectionpredictor, ultralytics yolo, object detection, neural network, machine learning
---
# DetectionPredictor
@@ -10,4 +11,4 @@ description: Detect and predict objects in images and videos using the Ultralyti
# predict
---
:::ultralytics.yolo.v8.detect.predict.predict
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/v8/detect/train.md b/docs/reference/yolo/v8/detect/train.md
index 84e47949f20..9ed57ad8752 100644
--- a/docs/reference/yolo/v8/detect/train.md
+++ b/docs/reference/yolo/v8/detect/train.md
@@ -1,5 +1,6 @@
---
description: Train and optimize custom object detection models with Ultralytics DetectionTrainer and train functions. Get started with YOLO v8 today.
+keywords: DetectionTrainer, Ultralytics YOLO, custom object detection, train models, AI applications
---
# DetectionTrainer
@@ -10,4 +11,4 @@ description: Train and optimize custom object detection models with Ultralytics
# train
---
:::ultralytics.yolo.v8.detect.train.train
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/v8/detect/val.md b/docs/reference/yolo/v8/detect/val.md
index 3d3c8afd597..dec0259dfce 100644
--- a/docs/reference/yolo/v8/detect/val.md
+++ b/docs/reference/yolo/v8/detect/val.md
@@ -1,5 +1,6 @@
---
description: Validate YOLOv5 detections using this PyTorch module. Ensure model accuracy with NMS IOU threshold tuning and label mapping.
+keywords: detection, validator, YOLOv5, object detection, model improvement, Ultralytics Docs
---
# DetectionValidator
@@ -10,4 +11,4 @@ description: Validate YOLOv5 detections using this PyTorch module. Ensure model
# val
---
:::ultralytics.yolo.v8.detect.val.val
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/v8/pose/predict.md b/docs/reference/yolo/v8/pose/predict.md
index b635a342e2f..6f333cf7554 100644
--- a/docs/reference/yolo/v8/pose/predict.md
+++ b/docs/reference/yolo/v8/pose/predict.md
@@ -1,5 +1,6 @@
---
description: Predict human pose coordinates and confidence scores using YOLOv5. Use on real-time video streams or static images.
+keywords: Ultralytics, YOLO, v8, documentation, PosePredictor, pose prediction, pose estimation, predict method
---
# PosePredictor
@@ -10,4 +11,4 @@ description: Predict human pose coordinates and confidence scores using YOLOv5.
# predict
---
:::ultralytics.yolo.v8.pose.predict.predict
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/v8/pose/train.md b/docs/reference/yolo/v8/pose/train.md
index 7d3f5863b51..ede822f2208 100644
--- a/docs/reference/yolo/v8/pose/train.md
+++ b/docs/reference/yolo/v8/pose/train.md
@@ -1,5 +1,6 @@
---
description: Boost posture detection using PoseTrainer and train models using train() API. Learn PoseLoss for ultra-fast and accurate pose detection with Ultralytics YOLO.
+keywords: PoseTrainer, human pose models, deep learning, computer vision, Ultralytics YOLO, v8
---
# PoseTrainer
@@ -10,4 +11,4 @@ description: Boost posture detection using PoseTrainer and train models using tr
# train
---
:::ultralytics.yolo.v8.pose.train.train
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/v8/pose/val.md b/docs/reference/yolo/v8/pose/val.md
index 8fef7e35afb..af5f5873c91 100644
--- a/docs/reference/yolo/v8/pose/val.md
+++ b/docs/reference/yolo/v8/pose/val.md
@@ -1,5 +1,6 @@
---
description: Ensure proper human poses in images with YOLOv8 Pose Validation, part of the Ultralytics YOLO v8 suite.
+keywords: PoseValidator, Ultralytics YOLO, object detection, pose analysis, validation
---
# PoseValidator
@@ -10,4 +11,4 @@ description: Ensure proper human poses in images with YOLOv8 Pose Validation, pa
# val
---
:::ultralytics.yolo.v8.pose.val.val
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/v8/segment/predict.md b/docs/reference/yolo/v8/segment/predict.md
index eccd8ec493d..30afdbd0bcf 100644
--- a/docs/reference/yolo/v8/segment/predict.md
+++ b/docs/reference/yolo/v8/segment/predict.md
@@ -1,5 +1,6 @@
---
description: '"Use SegmentationPredictor in YOLOv8 for efficient object detection and segmentation. Explore Ultralytics YOLO Docs for more information."'
+keywords: Ultralytics YOLO, SegmentationPredictor, object detection, segmentation masks, predict
---
# SegmentationPredictor
@@ -10,4 +11,4 @@ description: '"Use SegmentationPredictor in YOLOv8 for efficient object detectio
# predict
---
:::ultralytics.yolo.v8.segment.predict.predict
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/v8/segment/train.md b/docs/reference/yolo/v8/segment/train.md
index ee25fe232ea..7bf27a0b8f6 100644
--- a/docs/reference/yolo/v8/segment/train.md
+++ b/docs/reference/yolo/v8/segment/train.md
@@ -1,5 +1,6 @@
---
description: Learn about SegmentationTrainer and Train in Ultralytics YOLO v8 for efficient object detection models. Improve your training with Ultralytics Docs.
+keywords: SegmentationTrainer, Ultralytics YOLO, object detection, segmentation, train, tutorial, guide, code examples
---
# SegmentationTrainer
@@ -10,4 +11,4 @@ description: Learn about SegmentationTrainer and Train in Ultralytics YOLO v8 fo
# train
---
:::ultralytics.yolo.v8.segment.train.train
-
+
\ No newline at end of file
diff --git a/docs/reference/yolo/v8/segment/val.md b/docs/reference/yolo/v8/segment/val.md
index e9e2c6f732a..382660d2c76 100644
--- a/docs/reference/yolo/v8/segment/val.md
+++ b/docs/reference/yolo/v8/segment/val.md
@@ -1,5 +1,6 @@
---
description: Ensure segmentation quality on large datasets with SegmentationValidator. Review and visualize results with ease. Learn more at Ultralytics Docs.
+keywords: SegmentationValidator, YOLOv8, Ultralytics Docs, segmentation model, validation
---
# SegmentationValidator
@@ -10,4 +11,4 @@ description: Ensure segmentation quality on large datasets with SegmentationVali
# val
---
:::ultralytics.yolo.v8.segment.val.val
-
+
\ No newline at end of file
diff --git a/docs/tasks/classify.md b/docs/tasks/classify.md
index 47c6cb750fd..fe0b939b3a9 100644
--- a/docs/tasks/classify.md
+++ b/docs/tasks/classify.md
@@ -1,6 +1,7 @@
---
comments: true
description: Check YOLO class label with only one class for the whole image, using image classification. Get strategies for training and validation models.
+keywords: YOLOv8n-cls, image classification, pretrained models
---
Image classification is the simplest of the three tasks and involves classifying an entire image into one of a set of
@@ -176,4 +177,4 @@ i.e. `yolo predict model=yolov8n-cls.onnx`. Usage examples are shown for your mo
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-cls_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-cls_paddle_model/` | ✅ | `imgsz` |
-See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
+See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
\ No newline at end of file
diff --git a/docs/tasks/detect.md b/docs/tasks/detect.md
index 6942060f301..35a3d444c5d 100644
--- a/docs/tasks/detect.md
+++ b/docs/tasks/detect.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn how to use YOLOv8, an object detection model pre-trained with COCO and about the different YOLOv8 models and how to train and export them.
+keywords: object detection, YOLOv8 Detect models, COCO dataset, models, train, predict, export
---
Object detection is a task that involves identifying the location and class of objects in an image or video stream.
@@ -167,4 +168,4 @@ Available YOLOv8 export formats are in the table below. You can predict or valid
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` |
-See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
+See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
\ No newline at end of file
diff --git a/docs/tasks/index.md b/docs/tasks/index.md
index 982bb6255fa..23e384bd347 100644
--- a/docs/tasks/index.md
+++ b/docs/tasks/index.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn how Ultralytics YOLOv8 AI framework supports detection, segmentation, classification, and pose/keypoint estimation tasks.
+keywords: YOLOv8, computer vision, detection, segmentation, classification, pose, keypoint detection, image segmentation, medical imaging
---
# Ultralytics YOLOv8 Tasks
diff --git a/docs/tasks/pose.md b/docs/tasks/pose.md
index 68ccd1972e2..094f95b8729 100644
--- a/docs/tasks/pose.md
+++ b/docs/tasks/pose.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn how to use YOLOv8 pose estimation models to identify the position of keypoints on objects in an image, and how to train, validate, predict, and export these models for use with various formats such as ONNX or CoreML.
+keywords: YOLOv8, Pose Models, Keypoint Detection, COCO dataset, COCO val2017, Amazon EC2 P4d, PyTorch
---
Pose estimation is a task that involves identifying the location of specific points in an image, usually referred
@@ -181,4 +182,4 @@ i.e. `yolo predict model=yolov8n-pose.onnx`. Usage examples are shown for your m
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-pose_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-pose_paddle_model/` | ✅ | `imgsz` |
-See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
+See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
\ No newline at end of file
diff --git a/docs/tasks/segment.md b/docs/tasks/segment.md
index 8eb5db8766e..4f9192ffc0b 100644
--- a/docs/tasks/segment.md
+++ b/docs/tasks/segment.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn what Instance segmentation is. Get pretrained YOLOv8 segment models, and how to train and export them to segment masks. Check the performance metrics!
+keywords: instance segmentation, YOLOv8, Ultralytics, pretrained models, train, predict, export, datasets
---
Instance segmentation goes a step further than object detection and involves identifying individual objects in an image
@@ -181,4 +182,4 @@ i.e. `yolo predict model=yolov8n-seg.onnx`. Usage examples are shown for your mo
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-seg_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-seg_paddle_model/` | ✅ | `imgsz` |
-See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
+See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
\ No newline at end of file
diff --git a/docs/usage/callbacks.md b/docs/usage/callbacks.md
index 031d64488a2..7968fafdd04 100644
--- a/docs/usage/callbacks.md
+++ b/docs/usage/callbacks.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn how to leverage callbacks in Ultralytics YOLO framework to perform custom tasks in trainer, validator, predictor and exporter modes.
+keywords: callbacks, Ultralytics framework, Trainer, Validator, Predictor, Exporter, train, val, export, predict, YOLO, Object Detection
---
## Callbacks
diff --git a/docs/usage/cfg.md b/docs/usage/cfg.md
index b7ddb496f21..b027da310e6 100644
--- a/docs/usage/cfg.md
+++ b/docs/usage/cfg.md
@@ -1,6 +1,7 @@
---
comments: true
-description: 'Learn about YOLO settings and modes for different tasks like detection, segmentation etc. Train and predict with custom argparse commands.'
+description: Learn about YOLO settings and modes for different tasks like detection, segmentation etc. Train and predict with custom argparse commands.
+keywords: YOLO settings, hyperparameters, YOLOv8, Ultralytics, YOLO guide, YOLO commands, YOLO tasks, YOLO modes, YOLO training, YOLO detect, YOLO segment, YOLO classify, YOLO pose, YOLO train, YOLO val, YOLO predict, YOLO export, YOLO track, YOLO benchmark
---
YOLO settings and hyperparameters play a critical role in the model's performance, speed, and accuracy. These settings
diff --git a/docs/usage/cli.md b/docs/usage/cli.md
index 1b07b61e187..21879d7e851 100644
--- a/docs/usage/cli.md
+++ b/docs/usage/cli.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn how to use YOLOv8 from the Command Line Interface (CLI) through simple, single-line commands with `yolo` without Python code.
+keywords: YOLO, CLI, command line interface, detect, segment, classify, train, validate, predict, export, Ultralytics Docs
---
# Command Line Interface Usage
diff --git a/docs/usage/engine.md b/docs/usage/engine.md
index 852e850bdf9..8f6444390c5 100644
--- a/docs/usage/engine.md
+++ b/docs/usage/engine.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn how to train and customize your models fast with the Ultralytics YOLO 'DetectionTrainer' and 'CustomTrainer'. Read more here!
+keywords: Ultralytics, YOLO, DetectionTrainer, BaseTrainer, engine components, trainers, customizing, callbacks, validators, predictors
---
Both the Ultralytics YOLO command-line and python interfaces are simply a high-level abstraction on the base engine
@@ -83,4 +84,4 @@ To know more about Callback triggering events and entry point, checkout our [Cal
## Other engine components
There are other components that can be customized similarly like `Validators` and `Predictors`
-See Reference section for more information on these.
+See Reference section for more information on these.
\ No newline at end of file
diff --git a/docs/usage/hyperparameter_tuning.md b/docs/usage/hyperparameter_tuning.md
index f3589a77503..06a38414460 100644
--- a/docs/usage/hyperparameter_tuning.md
+++ b/docs/usage/hyperparameter_tuning.md
@@ -1,6 +1,7 @@
---
comments: true
description: Discover how to integrate hyperparameter tuning with Ray Tune and Ultralytics YOLOv8. Speed up the tuning process and optimize your model's performance.
+keywords: yolov8, ray tune, hyperparameter tuning, hyperparameter optimization, machine learning, computer vision, deep learning, image recognition
---
# Hyperparameter Tuning with Ray Tune and YOLOv8
diff --git a/docs/usage/python.md b/docs/usage/python.md
index 04b813b9fe6..2d8bb4c4e6f 100644
--- a/docs/usage/python.md
+++ b/docs/usage/python.md
@@ -1,6 +1,7 @@
---
comments: true
description: Integrate YOLOv8 in Python. Load, use pretrained models, train, and infer images. Export to ONNX. Track objects in videos.
+keywords: yolov8, python usage, object detection, segmentation, classification, pretrained models, train models, image predictions
---
# Python Usage
diff --git a/docs/yolov5/environments/aws_quickstart_tutorial.md b/docs/yolov5/environments/aws_quickstart_tutorial.md
index dbcfb3a4aa2..3e3ea83cc59 100644
--- a/docs/yolov5/environments/aws_quickstart_tutorial.md
+++ b/docs/yolov5/environments/aws_quickstart_tutorial.md
@@ -1,6 +1,7 @@
---
comments: true
description: Get started with YOLOv5 on AWS. Our comprehensive guide provides everything you need to know to run YOLOv5 on an Amazon Deep Learning instance.
+keywords: YOLOv5, AWS, Deep Learning, Instance, Guide, Quickstart
---
# YOLOv5 🚀 on AWS Deep Learning Instance: A Comprehensive Guide
diff --git a/docs/yolov5/environments/docker_image_quickstart_tutorial.md b/docs/yolov5/environments/docker_image_quickstart_tutorial.md
index 365139d1150..b2b90c9b49d 100644
--- a/docs/yolov5/environments/docker_image_quickstart_tutorial.md
+++ b/docs/yolov5/environments/docker_image_quickstart_tutorial.md
@@ -1,6 +1,7 @@
---
comments: true
description: Get started with YOLOv5 in a Docker container. Learn to set up and run YOLOv5 models and explore other quickstart options. 🚀
+keywords: YOLOv5, Docker, tutorial, setup, training, testing, detection
---
# Get Started with YOLOv5 🚀 in Docker
diff --git a/docs/yolov5/environments/google_cloud_quickstart_tutorial.md b/docs/yolov5/environments/google_cloud_quickstart_tutorial.md
index 47f53b126f3..c834a18a35a 100644
--- a/docs/yolov5/environments/google_cloud_quickstart_tutorial.md
+++ b/docs/yolov5/environments/google_cloud_quickstart_tutorial.md
@@ -1,6 +1,7 @@
---
comments: true
description: Set up YOLOv5 on a Google Cloud Platform (GCP) Deep Learning VM. Train, test, detect, and export YOLOv5 models. Tutorial updated April 2023.
+keywords: YOLOv5, GCP, deep learning, tutorial, Google Cloud Platform, virtual machine, VM, setup, free credit, Colab Notebook, AWS, Docker
---
# Run YOLOv5 🚀 on Google Cloud Platform (GCP) Deep Learning Virtual Machine (VM) ⭐
diff --git a/docs/yolov5/index.md b/docs/yolov5/index.md
index 3f5d6fb406c..8c666a2b9ce 100644
--- a/docs/yolov5/index.md
+++ b/docs/yolov5/index.md
@@ -1,9 +1,10 @@
---
comments: true
-description: Discover the YOLOv5 object detection model designed to deliver fast and accurate real-time results. Let's dive into this documentation to harness its full potential!
+description: Explore the extensive functionalities of the YOLOv5 object detection model, renowned for its speed and precision. Dive into our comprehensive guide for installation, architectural insights, use-cases, and more to unlock the full potential of YOLOv5 for your computer vision applications.
+keywords: ultralytics, yolov5, object detection, deep learning, pytorch, computer vision, tutorial, architecture, documentation, frameworks, real-time, model training, multicore, multithreading
---
-# Ultralytics YOLOv5
+# Comprehensive Guide to Ultralytics YOLOv5
@@ -21,54 +22,48 @@ description: Discover the YOLOv5 object detection model designed to deliver fast
-Welcome to the Ultralytics YOLOv5 🚀 Docs! YOLOv5, or You Only Look Once version 5, is an Ultralytics object detection model designed to deliver fast and accurate real-time results.
+Welcome to the Ultralytics YOLOv8 🚀 Documentation! YOLOv5, the fifth iteration of the revolutionary "You Only Look Once" object detection model, is designed to deliver high-speed, high-accuracy results in real time.
-This powerful deep learning framework is built on the PyTorch platform and has gained immense popularity due to its ease of use, high performance, and versatility. In this documentation, we will guide you through the installation process, explain the model's architecture, showcase various use-cases, and provide detailed tutorials to help you harness the full potential of YOLOv5 for your computer vision projects. Let's dive in!
+Built on PyTorch, this powerful deep learning framework has garnered immense popularity for its versatility, ease of use, and high performance. Our documentation guides you through the installation process, explains the architectural nuances of the model, showcases various use-cases, and provides a series of detailed tutorials. These resources will help you harness the full potential of YOLOv5 for your computer vision projects. Let's get started!
## Tutorials
-* [Train Custom Data](tutorials/train_custom_data.md) 🚀 RECOMMENDED
-* [Tips for Best Training Results](tutorials/tips_for_best_training_results.md) ☘️
-* [Multi-GPU Training](tutorials/multi_gpu_training.md)
-* [PyTorch Hub](tutorials/pytorch_hub_model_loading.md) 🌟 NEW
-* [TFLite, ONNX, CoreML, TensorRT Export](tutorials/model_export.md) 🚀
-* [NVIDIA Jetson platform Deployment](tutorials/running_on_jetson_nano.md) 🌟 NEW
-* [Test-Time Augmentation (TTA)](tutorials/test_time_augmentation.md)
-* [Model Ensembling](tutorials/model_ensembling.md)
-* [Model Pruning/Sparsity](tutorials/model_pruning_and_sparsity.md)
-* [Hyperparameter Evolution](tutorials/hyperparameter_evolution.md)
-* [Transfer Learning with Frozen Layers](tutorials/transfer_learning_with_frozen_layers.md)
-* [Architecture Summary](tutorials/architecture_description.md) 🌟 NEW
-* [Roboflow for Datasets, Labeling, and Active Learning](tutorials/roboflow_datasets_integration.md)
-* [ClearML Logging](tutorials/clearml_logging_integration.md) 🌟 NEW
-* [YOLOv5 with Neural Magic's Deepsparse](tutorials/neural_magic_pruning_quantization.md) 🌟 NEW
-* [Comet Logging](tutorials/comet_logging_integration.md) 🌟 NEW
+Here's a compilation of comprehensive tutorials that will guide you through different aspects of YOLOv5.
+
+* [Train Custom Data](tutorials/train_custom_data.md) 🚀 RECOMMENDED: Learn how to train the YOLOv5 model on your custom dataset.
+* [Tips for Best Training Results](tutorials/tips_for_best_training_results.md) ☘️: Uncover practical tips to optimize your model training process.
+* [Multi-GPU Training](tutorials/multi_gpu_training.md): Understand how to leverage multiple GPUs to expedite your training.
+* [PyTorch Hub](tutorials/pytorch_hub_model_loading.md) 🌟 NEW: Learn to load pre-trained models via PyTorch Hub.
+* [TFLite, ONNX, CoreML, TensorRT Export](tutorials/model_export.md) 🚀: Understand how to export your model to different formats.
+* [NVIDIA Jetson platform Deployment](tutorials/running_on_jetson_nano.md) 🌟 NEW: Learn how to deploy your YOLOv5 model on NVIDIA Jetson platform.
+* [Test-Time Augmentation (TTA)](tutorials/test_time_augmentation.md): Explore how to use TTA to improve your model's prediction accuracy.
+* [Model Ensembling](tutorials/model_ensembling.md): Learn the strategy of combining multiple models for improved performance.
+* [Model Pruning/Sparsity](tutorials/model_pruning_and_sparsity.md): Understand pruning and sparsity concepts, and how to create a more efficient model.
+* [Hyperparameter Evolution](tutorials/hyperparameter_evolution.md): Discover the process of automated hyperparameter tuning for better model performance.
+* [Transfer Learning with Frozen Layers](tutorials/transfer_learning_with_frozen_layers.md): Learn how to implement transfer learning by freezing layers in YOLOv5.
+* [Architecture Summary](tutorials/architecture_description.md) 🌟 NEW: Delve into the structural details of the YOLOv5 model.
+* [Roboflow for Datasets](tutorials/roboflow_datasets_integration.md): Understand how to utilize Roboflow for dataset management, labeling, and active learning.
+* [ClearML Logging](tutorials/clearml_logging_integration.md) 🌟 NEW: Learn how to integrate ClearML for efficient logging during your model training.
+* [YOLOv5 with Neural Magic](tutorials/neural_magic_pruning_quantization.md) 🌟 NEW: Discover how to use Neural Magic's DeepSparse to prune and quantize your YOLOv5 model.
+* [Comet Logging](tutorials/comet_logging_integration.md) 🌟 NEW: Explore how to utilize Comet for improved model training logging.
## Environments
-YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies
-including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/)
-and [PyTorch](https://pytorch.org/) preinstalled):
+YOLOv5 is designed to be run in the following up-to-date, verified environments, with all dependencies (including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/)) pre-installed:
- **Notebooks** with free
GPU:
-- **Google Cloud** Deep Learning VM.
- See [GCP Quickstart Guide](environments/google_cloud_quickstart_tutorial.md)
+- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](environments/google_cloud_quickstart_tutorial.md)
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](environments/aws_quickstart_tutorial.md)
-- **Docker Image**.
- See [Docker Quickstart Guide](environments/docker_image_quickstart_tutorial.md)
+- **Docker Image**. See [Docker Quickstart Guide](environments/docker_image_quickstart_tutorial.md)
## Status
-If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous
-Integration (CI) tests are currently passing. CI tests verify correct operation of
-YOLOv5 [training](https://github.com/ultralytics/yolov5/blob/master/train.py), [validation](https://github.com/ultralytics/yolov5/blob/master/val.py), [inference](https://github.com/ultralytics/yolov5/blob/master/detect.py), [export](https://github.com/ultralytics/yolov5/blob/master/export.py)
-and [benchmarks](https://github.com/ultralytics/yolov5/blob/master/benchmarks.py) on macOS, Windows, and Ubuntu every 24
-hours and on every commit.
+This badge signifies that all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify the correct operation of YOLOv5 [training](https://github.com/ultralytics/yolov5/blob/master/train.py), [validation](https://github.com/ultralytics/yolov5/blob/master/val.py), [inference](https://github.com/ultralytics/yolov5/blob/master/detect.py), [export](https://github.com/ultralytics/yolov5/blob/master/export.py) and [benchmarks](https://github.com/ultralytics/yolov5/blob/master/benchmarks.py) on macOS, Windows, and Ubuntu every 24 hours and with every new commit.
@@ -90,6 +85,6 @@ hours and on every commit.
-
+
\ No newline at end of file
diff --git a/docs/yolov5/quickstart_tutorial.md b/docs/yolov5/quickstart_tutorial.md
index 055a4ab5242..e46091f8330 100644
--- a/docs/yolov5/quickstart_tutorial.md
+++ b/docs/yolov5/quickstart_tutorial.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn how to quickly start using YOLOv5 including installation, inference, and training on this Ultralytics Docs page.
+keywords: YOLOv5, object detection, PyTorch, quickstart, detect.py, training, Ultralytics Docs
---
# YOLOv5 Quickstart
diff --git a/docs/yolov5/tutorials/architecture_description.md b/docs/yolov5/tutorials/architecture_description.md
index 71ef2bbc03a..26a2d911661 100644
--- a/docs/yolov5/tutorials/architecture_description.md
+++ b/docs/yolov5/tutorials/architecture_description.md
@@ -1,27 +1,34 @@
---
comments: true
-description: 'Ultralytics YOLOv5 Docs: Learn model structure, data augmentation & training strategies. Build targets and the losses of object detection.'
+description: Explore the details of Ultralytics YOLOv5 architecture, a comprehensive guide to its model structure, data augmentation techniques, training strategies, and various features. Understand the intricacies of object detection algorithms and improve your skills in the machine learning field.
+keywords: yolov5 architecture, data augmentation, training strategies, object detection, yolo docs, ultralytics
---
+# Ultralytics YOLOv5 Architecture
+
+YOLOv5 (v6.0/6.1) is a powerful object detection algorithm developed by Ultralytics. This article dives deep into the YOLOv5 architecture, data augmentation strategies, training methodologies, and loss computation techniques. This comprehensive understanding will help improve your practical application of object detection in various fields, including surveillance, autonomous vehicles, and image recognition.
+
## 1. Model Structure
-YOLOv5 (v6.0/6.1) consists of:
+YOLOv5's architecture consists of three main parts:
-- **Backbone**: `New CSP-Darknet53`
-- **Neck**: `SPPF`, `New CSP-PAN`
-- **Head**: `YOLOv3 Head`
+- **Backbone**: This is the main body of the network. For YOLOv5, the backbone is designed using the `New CSP-Darknet53` structure, a modification of the Darknet architecture used in previous versions.
+- **Neck**: This part connects the backbone and the head. In YOLOv5, `SPPF` and `New CSP-PAN` structures are utilized.
+- **Head**: This part is responsible for generating the final output. YOLOv5 uses the `YOLOv3 Head` for this purpose.
-Model structure (`yolov5l.yaml`):
+The structure of the model is depicted in the image below; the full layer-by-layer definition can be found in `yolov5l.yaml`.
![yolov5](https://user-images.githubusercontent.com/31005897/172404576-c260dcf9-76bb-4bc8-b6a9-f2d987792583.png)
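+
+As a quick way to explore that definition, the minimal Python sketch below loads the YAML file and lists its top-level sections. The file path and the key names mentioned in the comments are typical for YOLOv5 model configs and are given for illustration only.
+
+```python
+import yaml  # pip install pyyaml
+
+# Load the model definition shipped with the YOLOv5 repository
+# (assumed path: models/yolov5l.yaml)
+with open("models/yolov5l.yaml") as f:
+    cfg = yaml.safe_load(f)
+
+# Typical top-level keys: nc, depth_multiple, width_multiple, anchors, backbone, head
+print(list(cfg.keys()))
+print(f"backbone layers: {len(cfg['backbone'])}, head layers: {len(cfg['head'])}")
+```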
-Some minor changes compared to previous versions:
+YOLOv5 introduces some minor changes compared to its predecessors:
+
+1. The `Focus` structure, found in earlier versions, is replaced with a `6x6 Conv2d` structure. This change boosts efficiency [#4825](https://github.com/ultralytics/yolov5/issues/4825).
+2. The `SPP` structure is replaced with `SPPF`. This alteration more than doubles the speed of processing.
-1. Replace the `Focus` structure with `6x6 Conv2d`(more efficient, refer #4825)
-2. Replace the `SPP` structure with `SPPF`(more than double the speed)
+To test the speed of `SPP` and `SPPF`, the following code can be used:
-test code
+SPP vs SPPF speed profiling example (click to open)
```python
import time
@@ -67,12 +74,12 @@ def main():
t_start = time.time()
for _ in range(100):
spp(input_tensor)
- print(f"spp time: {time.time() - t_start}")
+ print(f"SPP time: {time.time() - t_start}")
t_start = time.time()
for _ in range(100):
sppf(input_tensor)
- print(f"sppf time: {time.time() - t_start}")
+ print(f"SPPF time: {time.time() - t_start}")
if __name__ == '__main__':
@@ -83,63 +90,75 @@ result:
```
True
-spp time: 0.5373051166534424
-sppf time: 0.20780706405639648
+SPP time: 0.5373051166534424
+SPPF time: 0.20780706405639648
```
-## 2. Data Augmentation
+## 2. Data Augmentation Techniques
+
+YOLOv5 employs various data augmentation techniques to improve the model's ability to generalize and reduce overfitting. These techniques include:
+
+- **Mosaic Augmentation**: An image processing technique that combines four training images into one in ways that encourage object detection models to better handle various object scales and translations.
+
+ ![mosaic](https://user-images.githubusercontent.com/31005897/159109235-c7aad8f2-1d4f-41f9-8d5f-b2fde6f2885e.png)
+
+- **Copy-Paste Augmentation**: An innovative data augmentation method that copies random patches from an image and pastes them onto another randomly chosen image, effectively generating a new training sample.
+
+ ![copy-paste](https://user-images.githubusercontent.com/31005897/159116277-91b45033-6bec-4f82-afc4-41138866628e.png)
+
+- **Random Affine Transformations**: This includes random rotation, scaling, translation, and shearing of the images.
-- Mosaic
-
+ ![random-affine](https://user-images.githubusercontent.com/31005897/159109326-45cd5acb-14fa-43e7-9235-0f21b0021c7d.png)
-- Copy paste
-
+- **MixUp Augmentation**: A method that creates composite images by taking a linear combination of two images and their associated labels (see the sketch after this list).
-- Random affine(Rotation, Scale, Translation and Shear)
-
+ ![mixup](https://user-images.githubusercontent.com/31005897/159109361-3b24333b-f481-478b-ae00-df7838f0b5cd.png)
-- MixUp
-
+- **Albumentations**: A powerful library for image augmenting that supports a wide variety of augmentation techniques.
+- **HSV Augmentation**: Random changes to the Hue, Saturation, and Value of the images.
-- Albumentations
-- Augment HSV(Hue, Saturation, Value)
-
+ ![hsv](https://user-images.githubusercontent.com/31005897/159109407-83d100ba-1aba-4f4b-aa03-4f048f815981.png)
-- Random horizontal flip
-
+- **Random Horizontal Flip**: An augmentation method that randomly flips images horizontally.
+
+ ![horizontal-flip](https://user-images.githubusercontent.com/31005897/159109429-0d44619a-a76a-49eb-bfc0-6709860c043e.png)
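+
+To make the MixUp idea above concrete, here is a minimal sketch that blends two images with a Beta-distributed ratio and keeps the labels of both. The function name and parameter values are illustrative assumptions, not the exact YOLOv5 implementation.
+
+```python
+import numpy as np
+
+def mixup(im, labels, im2, labels2, alpha=32.0):
+    """Blend two images and concatenate their detection labels (illustrative sketch)."""
+    r = np.random.beta(alpha, alpha)                 # mixing ratio, concentrated around 0.5
+    im = (im * r + im2 * (1 - r)).astype(im.dtype)   # linear combination of the two images
+    labels = np.concatenate((labels, labels2), 0)    # keep the boxes of both source images
+    return im, labels
+```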
## 3. Training Strategies
-- Multi-scale training(0.5~1.5x)
-- AutoAnchor(For training custom data)
-- Warmup and Cosine LR scheduler
-- EMA(Exponential Moving Average)
-- Mixed precision
-- Evolve hyper-parameters
+YOLOv5 applies several sophisticated training strategies to enhance the model's performance. They include:
+
+- **Multiscale Training**: The input images are randomly rescaled within a range of 0.5 to 1.5 times their original size during the training process.
+- **AutoAnchor**: This strategy optimizes the prior anchor boxes to match the statistical characteristics of the ground truth boxes in your custom data.
+- **Warmup and Cosine LR Scheduler**: The learning rate is increased gradually during a warmup phase and then decayed following a cosine schedule for the remainder of training.
+- **Exponential Moving Average (EMA)**: A strategy that uses the average of parameters over past steps to stabilize the training process and reduce generalization error (see the sketch after this list).
+- **Mixed Precision Training**: A method to perform operations in half-precision format, reducing memory usage and enhancing computational speed.
+- **Hyperparameter Evolution**: A strategy to automatically tune hyperparameters to achieve optimal performance.
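+
+As a minimal sketch of the EMA strategy listed above (assuming a standard PyTorch model; the actual `ModelEMA` utility in the codebase is more elaborate, for example it ramps the decay during early training):
+
+```python
+import copy
+
+import torch
+
+
+class SimpleEMA:
+    """Keep an exponential moving average of a model's parameters (illustrative sketch)."""
+
+    def __init__(self, model, decay=0.9999):
+        self.ema = copy.deepcopy(model).eval()  # shadow model holding the averaged weights
+        self.decay = decay
+        for p in self.ema.parameters():
+            p.requires_grad_(False)
+
+    @torch.no_grad()
+    def update(self, model):
+        for ema_p, p in zip(self.ema.parameters(), model.parameters()):
+            # ema = decay * ema + (1 - decay) * current
+            ema_p.mul_(self.decay).add_(p.detach(), alpha=1 - self.decay)
+```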
-## 4. Others
+## 4. Additional Features
### 4.1 Compute Losses
-The YOLOv5 loss consists of three parts:
+The loss in YOLOv5 is computed as a combination of three individual loss components:
+
+- **Classes Loss (BCE Loss)**: Binary Cross-Entropy loss, measures the error for the classification task.
+- **Objectness Loss (BCE Loss)**: Another Binary Cross-Entropy loss, calculates the error in detecting whether an object is present in a particular grid cell or not.
+- **Location Loss (CIoU Loss)**: Complete IoU loss, measures the error in localizing the object within the grid cell.
-- Classes loss(BCE loss)
-- Objectness loss(BCE loss)
-- Location loss(CIoU loss)
+The overall loss function is depicted by:
![loss](https://latex.codecogs.com/svg.image?Loss=\lambda_1L_{cls}+\lambda_2L_{obj}+\lambda_3L_{loc})
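+
+A minimal sketch of how such a weighted sum can be assembled is shown below; the tensor shapes, helper names and example weights are illustrative assumptions and do not reproduce the exact YOLOv5 loss code.
+
+```python
+import torch
+import torch.nn as nn
+
+bce = nn.BCEWithLogitsLoss()
+
+def total_loss(pred_cls, true_cls, pred_obj, true_obj, ciou, weights=(0.5, 1.0, 0.05)):
+    """Combine classification, objectness and localization terms (illustrative weights)."""
+    l_cls = bce(pred_cls, true_cls)   # classes loss (BCE with logits)
+    l_obj = bce(pred_obj, true_obj)   # objectness loss (BCE with logits)
+    l_loc = (1.0 - ciou).mean()       # location loss from precomputed CIoU values
+    w_cls, w_obj, w_loc = weights
+    return w_cls * l_cls + w_obj * l_obj + w_loc * l_loc
+```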
### 4.2 Balance Losses
-The objectness losses of the three prediction layers(`P3`, `P4`, `P5`) are weighted differently. The balance weights are `[4.0, 1.0, 0.4]` respectively.
+The objectness losses of the three prediction layers (`P3`, `P4`, `P5`) are weighted differently. The balance weights are `[4.0, 1.0, 0.4]` respectively. This approach ensures that the predictions at different scales contribute appropriately to the total loss.
![obj_loss](https://latex.codecogs.com/svg.image?L_{obj}=4.0\cdot&space;L_{obj}^{small}+1.0\cdot&space;L_{obj}^{medium}+0.4\cdot&space;L_{obj}^{large})
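+
+With hypothetical per-layer objectness losses, the balancing can be written as:
+
+```python
+import torch
+
+# Hypothetical objectness losses from the P3, P4 and P5 prediction layers
+layer_losses = [torch.tensor(0.8), torch.tensor(0.5), torch.tensor(0.3)]
+balance = [4.0, 1.0, 0.4]  # balance weights described above
+
+l_obj = sum(w * l for w, l in zip(balance, layer_losses))
+print(l_obj)  # tensor(3.8200) = 4.0*0.8 + 1.0*0.5 + 0.4*0.3
+```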
### 4.3 Eliminate Grid Sensitivity
-In YOLOv2 and YOLOv3, the formula for calculating the predicted target information is:
+The YOLOv5 architecture makes some important changes to the box prediction strategy compared to earlier versions of YOLO. In YOLOv2 and YOLOv3, the box coordinates were directly predicted using the activation of the last layer.
![b_x](https://latex.codecogs.com/svg.image?b_x=\sigma(t_x)+c_x)
![b_y](https://latex.codecogs.com/svg.image?b_y=\sigma(t_y)+c_y)
@@ -148,9 +167,9 @@ In YOLOv2 and YOLOv3, the formula for calculating the predicted target informati
+However, in YOLOv5, the formula for predicting the box coordinates has been updated to reduce grid sensitivity and prevent the model from predicting unbounded box dimensions.
-
-In YOLOv5, the formula is:
+The revised formulas for calculating the predicted bounding box are as follows:
![bx](https://latex.codecogs.com/svg.image?b_x=(2\cdot\sigma(t_x)-0.5)+c_x)
![by](https://latex.codecogs.com/svg.image?b_y=(2\cdot\sigma(t_y)-0.5)+c_y)
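+
+The effect of this change on the center offset range can be seen in the small comparison below (the function names are hypothetical):
+
+```python
+import torch
+
+def old_xy_offset(t):
+    # YOLOv2/YOLOv3: offset = sigmoid(t), confined to (0, 1)
+    return torch.sigmoid(t)
+
+def new_xy_offset(t):
+    # YOLOv5: offset = 2 * sigmoid(t) - 0.5, spanning (-0.5, 1.5)
+    return 2.0 * torch.sigmoid(t) - 0.5
+
+t = torch.linspace(-6.0, 6.0, steps=5)
+print(old_xy_offset(t))  # values never quite reach 0 or 1, so grid-cell borders are hard to hit
+print(new_xy_offset(t))  # values cover the cell borders and reach into neighbouring cells
+```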
@@ -168,9 +187,11 @@ Compare the height and width scaling ratio(relative to anchor) before and after
### 4.4 Build Targets
-Match positive samples:
+The build target process in YOLOv5 is critical for training efficiency and model accuracy. It involves assigning ground truth boxes to the appropriate grid cells in the output map and matching them with the appropriate anchor boxes.
+
+This process follows these steps:
-- Calculate the aspect ratio of GT and Anchor Templates
+- Calculate the ratio of the ground truth box dimensions and the dimensions of each anchor template.
![rw](https://latex.codecogs.com/svg.image?r_w=w_{gt}/w_{at})
@@ -186,10 +207,18 @@ Match positive samples:
-- Assign the successfully matched Anchor Templates to the corresponding cells
+- If the calculated ratio is within the threshold, match the ground truth box with the corresponding anchor.
-- Because the center point offset range is adjusted from (0, 1) to (-0.5, 1.5). GT Box can be assigned to more anchors.
+- Assign the matched anchor to the appropriate cells, keeping in mind that because the center point offset range is adjusted from (0, 1) to (-0.5, 1.5), a ground truth box can be assigned to more than one anchor.
+
+
+
+This way, the build targets process ensures that each ground truth object is properly assigned and matched during the training process, allowing YOLOv5 to learn the task of object detection more effectively.
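+
+A compact sketch of the ratio-based matching step described above follows; the threshold of 4.0 and the example boxes are illustrative assumptions.
+
+```python
+import torch
+
+def match_anchors(gt_wh, anchor_wh, threshold=4.0):
+    """Return a (num_gt, num_anchors) mask of compatible ground-truth/anchor pairs."""
+    r = gt_wh[:, None, :] / anchor_wh[None, :, :]     # per-pair width and height ratios
+    worst = torch.max(r, 1.0 / r).max(dim=2).values   # worst-case ratio for each pair
+    return worst < threshold                          # True where GT box and anchor match
+
+gt = torch.tensor([[30.0, 60.0], [200.0, 50.0]])                      # ground-truth widths/heights
+anchors = torch.tensor([[10.0, 13.0], [62.0, 45.0], [156.0, 198.0]])  # anchor templates
+print(match_anchors(gt, anchors))
+```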
+
+## Conclusion
+
+In conclusion, YOLOv5 represents a significant step forward in the development of real-time object detection models. By incorporating various new features, enhancements, and training strategies, it surpasses previous versions of the YOLO family in performance and efficiency.
-
\ No newline at end of file
+The primary enhancements in YOLOv5 include a dynamic architecture, an extensive range of data augmentation techniques, innovative training strategies, and important adjustments to loss computation and the target-building process. All of these innovations significantly improve the accuracy and efficiency of object detection while retaining the high speed that is the trademark of YOLO models.
\ No newline at end of file
diff --git a/docs/yolov5/tutorials/clearml_logging_integration.md b/docs/yolov5/tutorials/clearml_logging_integration.md
index f0843cfb29c..3d8672d09cc 100644
--- a/docs/yolov5/tutorials/clearml_logging_integration.md
+++ b/docs/yolov5/tutorials/clearml_logging_integration.md
@@ -1,6 +1,7 @@
---
comments: true
description: Integrate ClearML with YOLOv5 to track experiments and manage data versions. Optimize hyperparameters and remotely monitor your runs.
+keywords: YOLOv5, ClearML, experiment manager, remotely train, monitor, hyperparameter optimization, data versioning tool, HPO, data version management, optimization locally, agent, training progress, custom YOLOv5, AI development, model building
---
# ClearML Integration
diff --git a/docs/yolov5/tutorials/comet_logging_integration.md b/docs/yolov5/tutorials/comet_logging_integration.md
index e1716c957a5..263f1468989 100644
--- a/docs/yolov5/tutorials/comet_logging_integration.md
+++ b/docs/yolov5/tutorials/comet_logging_integration.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn how to use YOLOv5 with Comet, a tool for logging and visualizing machine learning model metrics in real-time. Install, log and analyze seamlessly.
+keywords: object detection, YOLOv5, Comet, model metrics, deep learning, image classification, Colab notebook, machine learning, datasets, hyperparameters tracking, training script, checkpoint
---
diff --git a/docs/yolov5/tutorials/hyperparameter_evolution.md b/docs/yolov5/tutorials/hyperparameter_evolution.md
index eebb554bb3a..8134b36e476 100644
--- a/docs/yolov5/tutorials/hyperparameter_evolution.md
+++ b/docs/yolov5/tutorials/hyperparameter_evolution.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn to find optimum YOLOv5 hyperparameters via **evolution**. A guide to learn hyperparameter tuning with Genetic Algorithms.
+keywords: YOLOv5, Hyperparameter Evolution, Genetic Algorithm, Hyperparameter Optimization, Fitness, Evolve, Visualize
---
📚 This guide explains **hyperparameter evolution** for YOLOv5 🚀. Hyperparameter evolution is a method of [Hyperparameter Optimization](https://en.wikipedia.org/wiki/Hyperparameter_optimization) using a [Genetic Algorithm](https://en.wikipedia.org/wiki/Genetic_algorithm) (GA) for optimization. UPDATED 25 September 2022.
@@ -151,7 +152,7 @@ We recommend a minimum of 300 generations of evolution for best results. Note th
## Environments
-YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
+YOLOv5 is designed to be run in the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
- **Notebooks** with free GPU:
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/)
diff --git a/docs/yolov5/tutorials/model_ensembling.md b/docs/yolov5/tutorials/model_ensembling.md
index a76996dee09..3e13435048e 100644
--- a/docs/yolov5/tutorials/model_ensembling.md
+++ b/docs/yolov5/tutorials/model_ensembling.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn how to ensemble YOLOv5 models for improved mAP and Recall! Clone the repo, install requirements, and start testing and inference.
+keywords: YOLOv5, object detection, ensemble learning, mAP, Recall
---
📚 This guide explains how to use YOLOv5 🚀 **model ensembling** during testing and inference for improved mAP and Recall.
@@ -132,7 +133,7 @@ Done. (0.223s)
## Environments
-YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
+YOLOv5 is designed to be run in the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
- **Notebooks** with free GPU:
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/)
diff --git a/docs/yolov5/tutorials/model_export.md b/docs/yolov5/tutorials/model_export.md
index 09e72685457..53afd015203 100644
--- a/docs/yolov5/tutorials/model_export.md
+++ b/docs/yolov5/tutorials/model_export.md
@@ -1,6 +1,7 @@
---
comments: true
description: Export YOLOv5 models to TFLite, ONNX, CoreML, and TensorRT formats. Achieve up to 5x GPU speedup using TensorRT. Benchmarks included.
+keywords: YOLOv5, object detection, export, ONNX, CoreML, TensorFlow, TensorRT, OpenVINO
---
# TFLite, ONNX, CoreML, TensorRT Export
@@ -231,7 +232,7 @@ YOLOv5 OpenVINO C++ inference examples:
## Environments
-YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
+YOLOv5 is designed to be run in the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
- **Notebooks** with free GPU:
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/)
diff --git a/docs/yolov5/tutorials/model_pruning_and_sparsity.md b/docs/yolov5/tutorials/model_pruning_and_sparsity.md
index 0793f662efc..25e4f8c300a 100644
--- a/docs/yolov5/tutorials/model_pruning_and_sparsity.md
+++ b/docs/yolov5/tutorials/model_pruning_and_sparsity.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn how to apply pruning to your YOLOv5 models. See the before and after performance with an explanation of sparsity and more.
+keywords: YOLOv5, ultralytics, pruning, deep learning, computer vision, object detection, AI, tutorial
---
📚 This guide explains how to apply **pruning** to YOLOv5 🚀 models.
@@ -95,7 +96,7 @@ In the results we can observe that we have achieved a **sparsity of 30%** in our
## Environments
-YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
+YOLOv5 is designed to be run in the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
- **Notebooks** with free GPU:
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/)
diff --git a/docs/yolov5/tutorials/multi_gpu_training.md b/docs/yolov5/tutorials/multi_gpu_training.md
index d002d05c120..24221db783a 100644
--- a/docs/yolov5/tutorials/multi_gpu_training.md
+++ b/docs/yolov5/tutorials/multi_gpu_training.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn how to train your dataset on single or multiple machines using YOLOv5 on multiple GPUs. Use simple commands with DDP mode for faster performance.
+keywords: ultralytics, yolo, yolov5, multi-gpu, training, dataset, dataloader, data parallel, distributed data parallel, docker, pytorch
---
📚 This guide explains how to properly use **multiple** GPUs to train a dataset with YOLOv5 🚀 on single or multiple machine(s).
@@ -172,7 +173,7 @@ If you went through all the above, feel free to raise an Issue by giving as much
## Environments
-YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
+YOLOv5 is designed to be run in the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
- **Notebooks** with free GPU:
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/)
diff --git a/docs/yolov5/tutorials/neural_magic_pruning_quantization.md b/docs/yolov5/tutorials/neural_magic_pruning_quantization.md
index 532ced7a7a7..839f7ef9b7e 100644
--- a/docs/yolov5/tutorials/neural_magic_pruning_quantization.md
+++ b/docs/yolov5/tutorials/neural_magic_pruning_quantization.md
@@ -1,6 +1,7 @@
---
comments: true
description: Learn how to deploy YOLOv5 with DeepSparse to achieve exceptional CPU performance close to GPUs, using pruning, and quantization.
+keywords: YOLOv5, DeepSparse, Neural Magic, CPU, Production, Performance, Deployments, APIs, SparseZoo, Ultralytics, Model Sparsity, Inference, Open-source, ONNX, Server
---