Commit 75ea2fa (1 parent: 41507fa) - showing 79 changed files with 17,816 additions and 0 deletions.
examples/Notebook Tutorials/2. Using the Tensorflow TensorRT Integration.ipynb - 664 additions & 0 deletions (large diff not rendered)
examples/Notebook Tutorials/3. Using Tensorflow 2 through ONNX.ipynb - 1,275 additions & 0 deletions (large diff not rendered)
examples/Notebook Tutorials/4. Using PyTorch through ONNX.ipynb - 992 additions & 0 deletions (large diff not rendered)
examples/Notebook Tutorials/5. Understanding TensorRT Runtimes.ipynb - 107 additions & 0 deletions
@@ -0,0 +1,107 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Runtimes: What are my options? How do I choose?\n",
"\n",
"Remember that TensorRT consists of two main components - __1. A series of parsers and integrations__ to convert your model to an optimized engine and __2. A series of TensorRT runtime APIs__ with several associated tools for deployment.\n",
"\n",
"In this notebook, we will focus on the latter - the various runtime options for TensorRT engines.\n",
"\n",
"The runtimes have different use cases for running TRT engines. \n", | ||
"\n", | ||
"### Considerations when picking a runtime:\n", | ||
"\n", | ||
"Generally speaking, there are a few major considerations when picking a runtime:\n", | ||
"- __Framework__ - Some options, like TF-TRT, are only relevant to Tensorflow\n", | ||
"- __Time-to-solution__ - TF-TRT is much more likely to work 'out-of-the-box' if a quick solution is required and ONNX fails\n", | ||
"- __Serving needs__ - TF-TRT can use TF Serving to serve models over HTTP as a simple solution. For other frameworks (or for more advanced features) TRITON is framework agnostic, allows for concurrent model execution or multiple copies within a GPU to reduce latency, and can accept engines created through both the ONNX and TF-TRT paths\n", | ||
"- __Performance__ - Different TensorRT runtimes offer varying levels of performance. For example, TF-TRT is generally going to be slower than using ONNX or the C++ API directly.\n", | ||
"\n", | ||
"### Python API:\n", | ||
"\n", | ||
"__Use this when:__\n", | ||
"- You can accept some performance overhead, and\n", | ||
"- You are most familiar with Python, or\n", | ||
"- You are performing initial debugging and testing with TRT\n", | ||
"\n", | ||
"__More info:__\n", | ||
"\n", | ||
" \n", | ||
"The [TensorRT Python API](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#perform_inference_python) gives you fine-grained control over the execution of your engine using a Python interface. It makes memory allocation, kernel execution, and copies to and from the GPU explicit - which can make integration into high performance applications easier. It is also great for testing models in a Python environment - such as in a Jupyter notebook.\n", | ||
" \n", | ||
"The [ONNX notebook for Tensorflow](./3.%20Using%20Tensorflow%202%20through%20ONNX.ipynb) and [for PyTorch](./4.%20Using%20PyTorch%20through%20ONNX.ipynb) are good examples of using TensorRT to get great performance while staying in Python\n", | ||
"\n", | ||
"### C++ API: \n", | ||
"\n", | ||
"__Use this when:__\n", | ||
"- You want the least amount of overhead possible to maximize the performance of your models and achieve better latency\n", | ||
"- You are not using TF-TRT (though TF-TRT graph conversions that only generate a single engine can still be exported to C++)\n", | ||
"- You are most familiar with C++\n", | ||
"- You want to optimize your inference pipeline as much as possible\n", | ||
"\n", | ||
"__More info:__\n", | ||
"\n", | ||
"The [TensorRT C++ API](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#perform_inference_c) gives you fine-grained control over the execution of your engine using a C++ interface. It makes memory allocation, kernel execution, and copies to and from the GPU explicit - which can make integration into high performance C++ applications easier. The C++ API is generally the most performant option for running TensorRT engines, with the least overhead.\n", | ||
"\n", | ||
"[This NVIDIA Developer blog](https://developer.nvidia.com/blog/speed-up-inference-tensorrt/) is a good example of taking an ONNX model and running it with dynamic batch size support using the C++ API.\n", | ||
"\n", | ||
"\n", | ||
"### Tensorflow/TF-TRT Runtime: (Tensorflow Only) \n", | ||
" \n", | ||
"__Use this when:__\n", | ||
" \n", | ||
"- You are using TF-TRT, and\n", | ||
"- Your model converts to more than one TensorRT engine\n", | ||
"\n", | ||
"__More info:__\n", | ||
"\n", | ||
"\n", | ||
"TF-TRT is the standard runtime used with models that were converted in TF-TRT. It works by taking groups of nodes at once in the Tensorflow graph, and replacing them with a singular optimized engine that calls the TensorRT Python API behind the scenes. This optimized engine is in the form of a Tensorflow operation - which means that your graph is still in Tensorflow and will essentially function like any other Tensorflow model. For example, it can be a useful exercise to take a look at your model in Tensorboard to validate which nodes TensorRT was able to optimize.\n", | ||
"\n", | ||
"If your graph entirely converts to a single TF-TRT engine, it can be more efficient to export the engine node and run it using one of the other APIs. You can find instructions to do this in the [TF-TRT documentation](https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#tensorrt-plan).\n", | ||
"\n", | ||
"As an example, the TF-TRT notebooks included with this guide use the TF-TRT runtime.\n", | ||
"\n", | ||
"### TRITON Inference Server\n", | ||
"\n", | ||
"__Use this when:__\n", | ||
"- You want to serve your models over HTTP or gRPC\n", | ||
"- You want to load balance across multiple models or copies of models across GPUs to minimze latency and make better use of the GPU\n", | ||
"- You want to have multiple models running efficiently on a single GPU at the same time\n", | ||
"- You want to serve a variety of models converted using a variety of converters and frameworks (including TF-TRT and ONNX) through a uniform interface\n", | ||
"- You need serving support but are using PyTorch, another framework, or the ONNX path in general\n", | ||
"\n", | ||
"__More info:__\n", | ||
"\n", | ||
"\n", | ||
"TRITON is an open source inference serving software that lets teams deploy trained AI models from any framework (TensorFlow, TensorRT, PyTorch, ONNX Runtime, or a custom framework), from local storage or Google Cloud Platform or AWS S3 on any GPU- or CPU-based infrastructure (cloud, data center, or edge). It is a flexible project with several unique features - such as concurrent model execution of both heterogeneous models and multiple copies of the same model (multiple model copies can reduce latency further) as well as load balancing and model analysis. It is a good option if you need to serve your models over HTTP - such as in a cloud inferencing solution.\n", | ||
" \n", | ||
"You can find the TRITON home page [here](https://developer.nvidia.com/nvidia-triton-inference-server), and the documentation [here](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/)." | ||
] | ||
} | ||
], | ||
"metadata": { | ||
"kernelspec": { | ||
"display_name": "Python 3", | ||
"language": "python", | ||
"name": "python3" | ||
}, | ||
"language_info": { | ||
"codemirror_mode": { | ||
"name": "ipython", | ||
"version": 3 | ||
}, | ||
"file_extension": ".py", | ||
"mimetype": "text/x-python", | ||
"name": "python", | ||
"nbconvert_exporter": "python", | ||
"pygments_lexer": "ipython3", | ||
"version": "3.6.9" | ||
} | ||
}, | ||
"nbformat": 4, | ||
"nbformat_minor": 4 | ||
} |
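
As a companion to the notebook's "Python API" section, here is a minimal sketch of explicit engine execution using the TensorRT Python API together with PyCUDA. It is illustrative only: the engine path `model.engine`, the static shapes, and the assumption of exactly one input binding followed by one output binding are all placeholders, and it targets the TensorRT 8.x bindings API.

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 - creates and activates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize a prebuilt engine ("model.engine" is a placeholder path for an
# engine serialized earlier, e.g. by trtexec or a builder script)
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Explicitly allocate pinned host buffers and device buffers for every binding
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    size = trt.volume(engine.get_binding_shape(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Explicit copy in -> kernel execution -> copy out, all on one CUDA stream
host_bufs[0][:] = np.random.random(host_bufs[0].shape).astype(host_bufs[0].dtype)
stream = cuda.Stream()
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
cuda.memcpy_dtoh_async(host_bufs[1], dev_bufs[1], stream)
stream.synchronize()
print("first output values:", host_bufs[1][:5])
```

Every allocation and copy is visible here, which is exactly the fine-grained control (and the integration burden) the section describes.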
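The workflow described under "Tensorflow/TF-TRT Runtime" can likewise be sketched in a few lines. The SavedModel directories and the input shape below are hypothetical, and conversion options (precision mode, workspace size) are left at their defaults because their exact keyword arguments vary between Tensorflow versions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert a SavedModel: supported subgraphs are replaced with optimized
# TensorRT engine nodes, while the result remains an ordinary SavedModel
converter = trt.TrtGraphConverterV2(input_saved_model_dir="resnet_saved_model")
converter.convert()
converter.save("resnet_saved_model_tftrt")

# Because the converted graph is still Tensorflow, the TF runtime executes it
# like any other model (and TF Serving can serve it unchanged)
loaded = tf.saved_model.load("resnet_saved_model_tftrt")
infer = loaded.signatures["serving_default"]
dummy = tf.constant(np.zeros((1, 224, 224, 3), dtype=np.float32))  # hypothetical input shape
print(infer(dummy))
```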
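Finally, for the TRITON section, a hedged sketch of a client-side request over HTTP using the `tritonclient` package. The server address, the model name `my_model`, and the tensor names `input` and `output` are assumptions; in practice they come from the model's TRITON configuration.

```python
import numpy as np
import tritonclient.http as httpclient

# Assumes a TRITON server on localhost:8000 serving a model named "my_model"
# whose config declares an FP32 input "input" and an output "output"
client = httpclient.InferenceServerClient(url="localhost:8000")

data = np.random.random((1, 3, 224, 224)).astype(np.float32)
inp = httpclient.InferInput("input", list(data.shape), "FP32")
inp.set_data_from_numpy(data)
out = httpclient.InferRequestedOutput("output")

result = client.infer(model_name="my_model", inputs=[inp], outputs=[out])
print(result.as_numpy("output").shape)
```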
examples/Notebook Tutorials/EfficientDet-TensorRT8.ipynb - 665 additions & 0 deletions (large diff not rendered)
@@ -0,0 +1,210 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "PN1cAxdvd61e"
},
"source": [
"<div align=\"center\">\n",
"\n",
" <a href=\"https://ultralytics.com/yolo\" target=\"_blank\">\n",
" <img width=\"1024\" src=\"https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/banner-yolov8.png\"></a>\n",
"\n",
" [中文](https://docs.ultralytics.com/zh/) | [한국어](https://docs.ultralytics.com/ko/) | [日本語](https://docs.ultralytics.com/ja/) | [Русский](https://docs.ultralytics.com/ru/) | [Deutsch](https://docs.ultralytics.com/de/) | [Français](https://docs.ultralytics.com/fr/) | [Español](https://docs.ultralytics.com/es/) | [Português](https://docs.ultralytics.com/pt/) | [Türkçe](https://docs.ultralytics.com/tr/) | [Tiếng Việt](https://docs.ultralytics.com/vi/) | [العربية](https://docs.ultralytics.com/ar/)\n",
"\n",
" <a href=\"https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml\"><img src=\"https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg\" alt=\"Ultralytics CI\"></a>\n",
" <a href=\"https://console.paperspace.com/github/ultralytics/ultralytics\"><img src=\"https://assets.paperspace.io/img/gradient-badge.svg\" alt=\"Run on Gradient\"/></a>\n",
" <a href=\"https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/object_counting.ipynb\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"></a>\n",
" <a href=\"https://www.kaggle.com/ultralytics/yolov8\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" alt=\"Open In Kaggle\"></a>\n",
" <a href=\"https://ultralytics.com/discord\"><img alt=\"Discord\" src=\"https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue\"></a>\n",
"\n",
"Welcome to the Ultralytics YOLOv8 🚀 notebook! <a href=\"https://github.com/ultralytics/ultralytics\">YOLOv8</a> is the latest version of the YOLO (You Only Look Once) AI models developed by <a href=\"https://ultralytics.com\">Ultralytics</a>. This notebook serves as the starting point for exploring the various resources available to help you get started with YOLOv8 and understand its features and capabilities.\n",
"\n",
"YOLOv8 models are fast, accurate, and easy to use, making them ideal for various object detection and image segmentation tasks. They can be trained on large datasets and run on diverse hardware platforms, from CPUs to GPUs.\n",
"\n",
"We hope that the resources in this notebook will help you get the most out of YOLOv8. Please browse the YOLOv8 <a href=\"https://docs.ultralytics.com/guides/object-counting/\">Object Counting Docs</a> for details, raise an issue on <a href=\"https://github.com/ultralytics/ultralytics\">GitHub</a> for support, and join our <a href=\"https://ultralytics.com/discord\">Discord</a> community for questions and discussions!\n",
"\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "o68Sg1oOeZm2"
},
"source": [
"# Setup\n",
"\n",
"Pip install the `ultralytics` package and its [dependencies](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml), then check software and hardware.\n",
"\n",
"[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "9dSwz_uOReMI",
"outputId": "fd3bab88-2f25-46c0-cae9-04d2beedc0c1"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ultralytics YOLOv8.2.18 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (Tesla T4, 15102MiB)\n",
"Setup complete ✅ (2 CPUs, 12.7 GB RAM, 29.8/78.2 GB disk)\n"
]
}
],
"source": [
"%pip install ultralytics\n",
"import ultralytics\n",
"\n",
"ultralytics.checks()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "m7VkxQ2aeg7k"
},
"source": [
"# Object Counting using Ultralytics YOLOv8 🚀\n",
"\n",
"## What is Object Counting?\n",
"\n",
"Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) involves accurately identifying and counting specific objects in videos and camera streams. YOLOv8 excels in real-time applications, providing efficient and precise object counting for scenarios like crowd analysis and surveillance, thanks to its state-of-the-art algorithms and deep learning capabilities.\n",
"\n",
"## What are the Advantages of Object Counting?\n",
"\n",
"- **Resource Optimization:** Object counting facilitates efficient resource management by providing accurate counts and optimizing resource allocation in applications like inventory management.\n",
"- **Enhanced Security:** Object counting enhances security and surveillance by accurately tracking and counting entities, aiding in proactive threat detection.\n",
"- **Informed Decision-Making:** Object counting offers valuable insights for decision-making, optimizing processes in retail, traffic management, and various other domains.\n",
"\n",
"## Real-World Applications\n",
"\n",
"| Logistics | Aquaculture |\n",
"|:-------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------:|\n",
"| ![Conveyor Belt Packets Counting Using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/70e2d106-510c-4c6c-a57a-d34a765aa757) | ![Fish Counting in Sea using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/c60d047b-3837-435f-8d29-bb9fc95d2191) |\n",
"| Conveyor Belt Packets Counting Using Ultralytics YOLOv8 | Fish Counting in Sea using Ultralytics YOLOv8 |\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Cx-u59HQdu2o"
},
"outputs": [],
"source": [
"import cv2\n",
"\n",
"from ultralytics import YOLO, solutions\n",
"\n",
"# Load the pre-trained YOLOv8 model\n",
"model = YOLO(\"yolov8n.pt\")\n",
"\n",
"# Open the video file\n",
"cap = cv2.VideoCapture(\"path/to/video/file.mp4\")\n",
"assert cap.isOpened(), \"Error reading video file\"\n",
"\n",
"# Get video properties: width, height, and frames per second (fps)\n",
"w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))\n",
"\n",
"# Define points for a line or region of interest in the video frame\n",
"line_points = [(20, 400), (1080, 400)]  # Line coordinates\n",
"\n",
"# Specify classes to count, for example: person (0) and car (2)\n",
"classes_to_count = [0, 2]  # Class IDs for person and car\n",
"\n",
"# Initialize the video writer to save the output video\n",
"video_writer = cv2.VideoWriter(\"object_counting_output.avi\", cv2.VideoWriter_fourcc(*\"mp4v\"), fps, (w, h))\n",
"\n",
"# Initialize the Object Counter with visualization options and other parameters\n",
"counter = solutions.ObjectCounter(\n",
"    view_img=True,  # Display the image during processing\n",
"    reg_pts=line_points,  # Region of interest points\n",
"    names=model.names,  # Class names from the YOLO model\n",
"    draw_tracks=True,  # Draw tracking lines for objects\n",
"    line_thickness=2,  # Thickness of the lines drawn\n",
")\n",
"\n",
"# Process video frames in a loop\n",
"while cap.isOpened():\n",
"    success, im0 = cap.read()\n",
"    if not success:\n",
"        print(\"Video frame is empty or video processing has been successfully completed.\")\n",
"        break\n",
"\n",
"    # Perform object tracking on the current frame, filtering by specified classes\n",
"    tracks = model.track(im0, persist=True, show=False, classes=classes_to_count)\n",
"\n",
"    # Use the Object Counter to count objects in the frame and get the annotated image\n",
"    im0 = counter.start_counting(im0, tracks)\n",
"\n",
"    # Write the annotated frame to the output video\n",
"    video_writer.write(im0)\n",
"\n",
"# Release the video capture and writer objects\n",
"cap.release()\n",
"video_writer.release()\n",
"\n",
"# Close all OpenCV windows\n",
"cv2.destroyAllWindows()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "QrlKg-y3fEyD"
},
"source": [
"# Additional Resources\n",
"\n",
"## Community Support\n",
"\n",
"For more information on counting objects with Ultralytics, you can explore the comprehensive [Ultralytics Object Counting Docs](https://docs.ultralytics.com/guides/object-counting/). This guide covers everything from basic concepts to advanced techniques, ensuring you get the most out of counting and visualization.\n",
"\n",
"## Ultralytics ⚡ Resources\n",
"\n",
"At Ultralytics, we are committed to providing cutting-edge AI solutions. Here are some key resources to learn more about our company and get involved with our community:\n",
"\n",
"- [Ultralytics HUB](https://ultralytics.com/hub): Simplify your AI projects with Ultralytics HUB, our no-code tool for effortless YOLO training and deployment.\n",
"- [Ultralytics Licensing](https://ultralytics.com/license): Review our licensing terms to understand how you can use our software in your projects.\n",
"- [About Us](https://ultralytics.com/about): Discover our mission, vision, and the story behind Ultralytics.\n",
"- [Join Our Team](https://ultralytics.com/work): Explore career opportunities and join our team of talented professionals.\n",
"\n",
"## YOLOv8 🚀 Resources\n",
"\n",
"YOLOv8 is the latest evolution in the YOLO series, offering state-of-the-art performance in object detection and image segmentation. Here are some essential resources to help you get started with YOLOv8:\n",
"\n",
"- [GitHub](https://github.com/ultralytics/ultralytics): Access the YOLOv8 repository on GitHub, where you can find the source code, contribute to the project, and report issues.\n",
"- [Docs](https://docs.ultralytics.com/): Explore the official documentation for YOLOv8, including installation guides, tutorials, and detailed API references.\n",
"- [Discord](https://ultralytics.com/discord): Join our Discord community to connect with other users, share your projects, and get help from the Ultralytics team.\n",
"\n",
"These resources are designed to help you leverage the full potential of Ultralytics' offerings and YOLOv8. Whether you're a beginner or an experienced developer, you'll find the information and support you need to succeed."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}