NVIDIA DeepStream SDK 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 configuration for YOLO models
- DeepStream tutorials
- Dynamic batch-size
- Updated INT8 calibration
- Support for segmentation models
- Support for classification models
- Support for INT8 calibration
- Support for non square models
- Models benchmarks
- Support for Darknet YOLO models (YOLOv4, etc) using cfg and weights conversion with GPU post-processing
- Support for YOLO-NAS, PPYOLOE+, PPYOLOE, DAMO-YOLO, YOLOX, YOLOR, YOLOv8, YOLOv7, YOLOv6 and YOLOv5 using ONNX conversion with GPU post-processing
- Add GPU bbox parser (it is slightly slower than CPU bbox parser on V100 GPU tests)
- Requirements
- Supported models
- Benchmarks
- dGPU installation
- Basic usage
- Docker usage
- NMS configuration
- INT8 calibration
- YOLOv5 usage
- YOLOv6 usage
- YOLOv7 usage
- YOLOv8 usage
- YOLOR usage
- YOLOX usage
- DAMO-YOLO usage
- PP-YOLOE / PP-YOLOE+ usage
- YOLO-NAS usage
- Using your custom model
- Multiple YOLO GIEs
- Ubuntu 20.04
- CUDA 11.8
- TensorRT 8.5 GA Update 1 (8.5.2.2)
- NVIDIA Driver 525.85.12 (Data center / Tesla series) / 525.105.17 (TITAN, GeForce RTX / GTX series and RTX / Quadro series)
- NVIDIA DeepStream SDK 6.2
- GStreamer 1.16.3
- DeepStream-Yolo
- Ubuntu 20.04
- CUDA 11.7 Update 1
- TensorRT 8.4 GA (8.4.1.5)
- NVIDIA Driver 515.65.01
- NVIDIA DeepStream SDK 6.1.1
- GStreamer 1.16.2
- DeepStream-Yolo
- Ubuntu 20.04
- CUDA 11.6 Update 1
- TensorRT 8.2 GA Update 4 (8.2.5.1)
- NVIDIA Driver 510.47.03
- NVIDIA DeepStream SDK 6.1
- GStreamer 1.16.2
- DeepStream-Yolo
- Ubuntu 18.04
- CUDA 11.4 Update 1
- TensorRT 8.0 GA (8.0.1)
- NVIDIA Driver 470.63.01
- NVIDIA DeepStream SDK 6.0.1 / 6.0
- GStreamer 1.14.5
- DeepStream-Yolo
- Darknet YOLO
- MobileNet-YOLO
- YOLO-Fastest
- YOLOv5
- YOLOv6
- YOLOv7
- YOLOv8
- YOLOR
- YOLOX
- DAMO-YOLO
- PP-YOLOE / PP-YOLOE+
- YOLO-NAS
board = NVIDIA Tesla V100 16GB (AWS: p3.2xlarge)
batch-size = 1
eval = val2017 (COCO)
sample = 1920x1080 video
NOTE: Used maintain-aspect-ratio=1 in the config_infer file for Darknet (with letter_box=1) and PyTorch models.
- Eval
nms-iou-threshold = 0.6 (Darknet) / 0.65 (YOLOv5, YOLOv6, YOLOv7, YOLOR and YOLOX) / 0.7 (Paddle, YOLO-NAS, DAMO-YOLO, YOLOv8 and YOLOv7-u6)
pre-cluster-threshold = 0.001
topk = 300
- Test
nms-iou-threshold = 0.45
pre-cluster-threshold = 0.25
topk = 300
NOTE: * = PyTorch.
NOTE: ** = The YOLOv4 model is trained with the trainvalno5k set, so its mAP is high on the val2017 eval.
NOTE: star = DAMO-YOLO model trained with distillation.
NOTE: The V100 GPU decoder maxes out at 625-635 FPS on DeepStream, even with lighter models.
NOTE: The GPU bbox parser is a bit slower than CPU bbox parser on V100 GPU tests.
Model | Precision | Resolution | IoU=0.5:0.95 | IoU=0.5 | IoU=0.75 | FPS (without display) |
---|---|---|---|---|---|---|
YOLO-NAS L | FP16 | 640 | 0.484 | 0.658 | 0.532 | 235.27 |
YOLO-NAS M | FP16 | 640 | 0.480 | 0.651 | 0.524 | 287.39 |
YOLO-NAS S | FP16 | 640 | 0.442 | 0.614 | 0.485 | 478.52 |
PP-YOLOE+_x | FP16 | 640 | 0.528 | 0.705 | 0.579 | 121.17 |
PP-YOLOE+_l | FP16 | 640 | 0.511 | 0.686 | 0.557 | 191.82 |
PP-YOLOE+_m | FP16 | 640 | 0.483 | 0.658 | 0.528 | 264.39 |
PP-YOLOE+_s | FP16 | 640 | 0.424 | 0.594 | 0.464 | 476.13 |
PP-YOLOE-s (400) | FP16 | 640 | 0.423 | 0.589 | 0.463 | 461.23 |
DAMO-YOLO-L star | FP16 | 640 | 0.502 | 0.674 | 0.551 | 176.93 |
DAMO-YOLO-M star | FP16 | 640 | 0.485 | 0.656 | 0.530 | 242.24 |
DAMO-YOLO-S star | FP16 | 640 | 0.460 | 0.631 | 0.502 | 385.09 |
DAMO-YOLO-S | FP16 | 640 | 0.445 | 0.611 | 0.486 | 378.68 |
DAMO-YOLO-T star | FP16 | 640 | 0.419 | 0.586 | 0.455 | 492.24 |
DAMO-YOLO-Nl | FP16 | 416 | 0.392 | 0.559 | 0.423 | 483.73 |
DAMO-YOLO-Nm | FP16 | 416 | 0.371 | 0.532 | 0.402 | 555.94 |
DAMO-YOLO-Ns | FP16 | 416 | 0.312 | 0.460 | 0.335 | 627.67 |
YOLOX-x | FP16 | 640 | 0.447 | 0.616 | 0.483 | 125.40 |
YOLOX-l | FP16 | 640 | 0.430 | 0.598 | 0.466 | 193.10 |
YOLOX-m | FP16 | 640 | 0.397 | 0.566 | 0.431 | 298.61 |
YOLOX-s | FP16 | 640 | 0.335 | 0.502 | 0.365 | 522.05 |
YOLOX-s legacy | FP16 | 640 | 0.375 | 0.569 | 0.407 | 518.52 |
YOLOX-Darknet | FP16 | 640 | 0.414 | 0.595 | 0.453 | 212.88 |
YOLOX-Tiny | FP16 | 640 | 0.274 | 0.427 | 0.292 | 633.95 |
YOLOX-Nano | FP16 | 640 | 0.212 | 0.342 | 0.222 | 633.04 |
YOLOv8x | FP16 | 640 | 0.499 | 0.669 | 0.545 | 130.49 |
YOLOv8l | FP16 | 640 | 0.491 | 0.660 | 0.535 | 180.75 |
YOLOv8m | FP16 | 640 | 0.468 | 0.637 | 0.510 | 278.08 |
YOLOv8s | FP16 | 640 | 0.415 | 0.578 | 0.453 | 493.45 |
YOLOv8n | FP16 | 640 | 0.343 | 0.492 | 0.373 | 627.43 |
YOLOv7-u6 | FP16 | 640 | 0.484 | 0.652 | 0.530 | 193.54 |
YOLOv7x* | FP16 | 640 | 0.496 | 0.679 | 0.536 | 155.07 |
YOLOv7* | FP16 | 640 | 0.476 | 0.660 | 0.518 | 226.01 |
YOLOv7-Tiny Leaky* | FP16 | 640 | 0.345 | 0.516 | 0.372 | 626.23 |
YOLOv7-Tiny Leaky* | FP16 | 416 | 0.328 | 0.493 | 0.349 | 633.90 |
YOLOv6-L 4.0 | FP16 | 640 | 0.490 | 0.671 | 0.535 | 178.41 |
YOLOv6-M 4.0 | FP16 | 640 | 0.460 | 0.635 | 0.502 | 293.39 |
YOLOv6-S 4.0 | FP16 | 640 | 0.416 | 0.585 | 0.453 | 513.90 |
YOLOv6-N 4.0 | FP16 | 640 | 0.349 | 0.503 | 0.378 | 633.37 |
YOLOv5x 7.0 | FP16 | 640 | 0.471 | 0.652 | 0.513 | 149.93 |
YOLOv5l 7.0 | FP16 | 640 | 0.455 | 0.637 | 0.497 | 235.55 |
YOLOv5m 7.0 | FP16 | 640 | 0.421 | 0.604 | 0.459 | 351.69 |
YOLOv5s 7.0 | FP16 | 640 | 0.344 | 0.529 | 0.372 | 618.13 |
YOLOv5n 7.0 | FP16 | 640 | 0.247 | 0.414 | 0.257 | 629.66 |
To install DeepStream on dGPU (x86 platform) without Docker, a few steps are needed to prepare the computer.
DeepStream 6.2
sudo apt-get update
sudo apt-get install gcc make git libtool autoconf autogen pkg-config cmake
sudo apt-get install python3 python3-dev python3-pip
sudo apt-get install dkms
sudo apt install libssl1.1 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev libjsoncpp-dev protobuf-compiler
sudo apt-get install linux-headers-$(uname -r)
NOTE: Purge all NVIDIA drivers, CUDA, etc. (replace $CUDA_PATH with your CUDA path)
sudo nvidia-uninstall
sudo $CUDA_PATH/bin/cuda-uninstaller
sudo apt-get remove --purge '*nvidia*'
sudo apt-get remove --purge '*cuda*'
sudo apt-get remove --purge '*cudnn*'
sudo apt-get remove --purge '*tensorrt*'
sudo apt autoremove --purge && sudo apt autoclean && sudo apt clean
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
TITAN, GeForce RTX / GTX series and RTX / Quadro series
Download
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/525.105.17/NVIDIA-Linux-x86_64-525.105.17.run
Laptop
Run
sudo sh NVIDIA-Linux-x86_64-525.105.17.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd
NOTE: This step will disable the nouveau drivers.
Reboot
sudo reboot
Install
sudo sh NVIDIA-Linux-x86_64-525.105.17.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd
NOTE: If you are using a laptop with NVIDIA Optimus, run

sudo apt-get install nvidia-prime
sudo prime-select nvidia
Desktop
Run
sudo sh NVIDIA-Linux-x86_64-525.105.17.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
NOTE: This step will disable the nouveau drivers.
Reboot
sudo reboot
Install
sudo sh NVIDIA-Linux-x86_64-525.105.17.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
Data center / Tesla series
Download
wget https://us.download.nvidia.com/tesla/525.85.12/NVIDIA-Linux-x86_64-525.85.12.run
Run
sudo sh NVIDIA-Linux-x86_64-525.85.12.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
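Whichever series you installed, a quick way to confirm the driver is loaded (the same check applies to the other DeepStream versions below) is to query the GPU with nvidia-smi, which ships with the driver:

nvidia-smi
# should list the GPU(s) and report a 525.xx driver version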
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
sudo sh cuda_11.8.0_520.61.05_linux.run --silent --toolkit
- Export environment variables
echo $'export PATH=/usr/local/cuda-11.8/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
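To confirm the toolkit is on your PATH after sourcing ~/.bashrc, you can print the compiler version:

nvcc --version
# should report release 11.8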
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
sudo apt-get update
sudo apt-get install libnvinfer8=8.5.2-1+cuda11.8 libnvinfer-plugin8=8.5.2-1+cuda11.8 libnvparsers8=8.5.2-1+cuda11.8 libnvonnxparsers8=8.5.2-1+cuda11.8 libnvinfer-bin=8.5.2-1+cuda11.8 libnvinfer-dev=8.5.2-1+cuda11.8 libnvinfer-plugin-dev=8.5.2-1+cuda11.8 libnvparsers-dev=8.5.2-1+cuda11.8 libnvonnxparsers-dev=8.5.2-1+cuda11.8 libnvinfer-samples=8.5.2-1+cuda11.8 libcudnn8=8.7.0.84-1+cuda11.8 libcudnn8-dev=8.7.0.84-1+cuda11.8 python3-libnvinfer=8.5.2-1+cuda11.8 python3-libnvinfer-dev=8.5.2-1+cuda11.8
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* python3-libnvinfer* tensorrt
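Optionally, verify that the TensorRT and cuDNN packages were installed and held at the expected versions:

dpkg -l | grep -E 'nvinfer|cudnn'
# the libnvinfer* packages should show 8.5.2-1+cuda11.8 and libcudnn8 should show 8.7.0.84-1+cuda11.8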
7. Download the DeepStream SDK from the NVIDIA website and install it
DeepStream 6.2 for Servers and Workstations (.deb)
sudo apt-get install ./deepstream-6.2_6.2.0-1_amd64.deb
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-11.8 /usr/local/cuda
sudo reboot
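After the reboot, you can check that the SDK was installed correctly (the same check applies to the other DeepStream versions below):

deepstream-app --version-all
# should report the DeepStream SDK version along with the CUDA, cuDNN and TensorRT versions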
DeepStream 6.1.1
sudo apt-get update
sudo apt-get install gcc make git libtool autoconf autogen pkg-config cmake
sudo apt-get install python3 python3-dev python3-pip
sudo apt-get install dkms
sudo apt-get install libssl1.1 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev
sudo apt-get install linux-headers-$(uname -r)
NOTE: Purge all NVIDIA drivers, CUDA, etc. (replace $CUDA_PATH with your CUDA path)
sudo nvidia-uninstall
sudo $CUDA_PATH/bin/cuda-uninstaller
sudo apt-get remove --purge '*nvidia*'
sudo apt-get remove --purge '*cuda*'
sudo apt-get remove --purge '*cudnn*'
sudo apt-get remove --purge '*tensorrt*'
sudo apt autoremove --purge && sudo apt autoclean && sudo apt clean
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
TITAN, GeForce RTX / GTX series and RTX / Quadro series
Download
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/515.65.01/NVIDIA-Linux-x86_64-515.65.01.run
Laptop
Run
sudo sh NVIDIA-Linux-x86_64-515.65.01.run --silent --disable-nouveau --dkms --install-libglvnd
NOTE: This step will disable the nouveau drivers.
Reboot
sudo reboot
Install
sudo sh NVIDIA-Linux-x86_64-515.65.01.run --silent --disable-nouveau --dkms --install-libglvnd
NOTE: If you are using a laptop with NVIDIA Optimus, run

sudo apt-get install nvidia-prime
sudo prime-select nvidia
Desktop
Run
sudo sh NVIDIA-Linux-x86_64-515.65.01.run --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
NOTE: This step will disable the nouveau drivers.
Reboot
sudo reboot
Install
sudo sh NVIDIA-Linux-x86_64-515.65.01.run --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
Data center / Tesla series
Download
wget https://us.download.nvidia.com/tesla/515.65.01/NVIDIA-Linux-x86_64-515.65.01.run
Run
sudo sh NVIDIA-Linux-x86_64-515.65.01.run --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
wget https://developer.download.nvidia.com/compute/cuda/11.7.1/local_installers/cuda_11.7.1_515.65.01_linux.run
sudo sh cuda_11.7.1_515.65.01_linux.run --silent --toolkit
- Export environment variables
echo $'export PATH=/usr/local/cuda-11.7/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-11.7/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
6. Download TensorRT from the NVIDIA website and install it
TensorRT 8.4 GA for Ubuntu 20.04 and CUDA 11.0, 11.1, 11.2, 11.3, 11.4, 11.5, 11.6 and 11.7 DEB local repo Package
sudo dpkg -i nv-tensorrt-repo-ubuntu2004-cuda11.6-trt8.4.1.5-ga-20220604_1-1_amd64.deb
sudo apt-key add /var/nv-tensorrt-repo-ubuntu2004-cuda11.6-trt8.4.1.5-ga-20220604/9a60d8bf.pub
sudo apt-get update
sudo apt-get install libnvinfer8=8.4.1-1+cuda11.6 libnvinfer-plugin8=8.4.1-1+cuda11.6 libnvparsers8=8.4.1-1+cuda11.6 libnvonnxparsers8=8.4.1-1+cuda11.6 libnvinfer-bin=8.4.1-1+cuda11.6 libnvinfer-dev=8.4.1-1+cuda11.6 libnvinfer-plugin-dev=8.4.1-1+cuda11.6 libnvparsers-dev=8.4.1-1+cuda11.6 libnvonnxparsers-dev=8.4.1-1+cuda11.6 libnvinfer-samples=8.4.1-1+cuda11.6 libcudnn8=8.4.1.50-1+cuda11.6 libcudnn8-dev=8.4.1.50-1+cuda11.6 python3-libnvinfer=8.4.1-1+cuda11.6 python3-libnvinfer-dev=8.4.1-1+cuda11.6
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* tensorrt
7. Download the DeepStream SDK from the NVIDIA website and install it
DeepStream 6.1.1 for Servers and Workstations (.deb)
sudo apt-get install ./deepstream-6.1_6.1.1-1_amd64.deb
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-11.7 /usr/local/cuda
sudo reboot
DeepStream 6.1
sudo apt-get update
sudo apt-get install gcc make git libtool autoconf autogen pkg-config cmake
sudo apt-get install python3 python3-dev python3-pip
sudo apt-get install dkms
sudo apt-get install libssl1.1 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev
sudo apt-get install linux-headers-$(uname -r)
NOTE: Purge all NVIDIA drivers, CUDA, etc. (replace $CUDA_PATH with your CUDA path)
sudo nvidia-uninstall
sudo $CUDA_PATH/bin/cuda-uninstaller
sudo apt-get remove --purge '*nvidia*'
sudo apt-get remove --purge '*cuda*'
sudo apt-get remove --purge '*cudnn*'
sudo apt-get remove --purge '*tensorrt*'
sudo apt autoremove --purge && sudo apt autoclean && sudo apt clean
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
TITAN, GeForce RTX / GTX series and RTX / Quadro series
Download
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/510.47.03/NVIDIA-Linux-x86_64-510.47.03.run
Laptop
Run
sudo sh NVIDIA-Linux-x86_64-510.47.03.run --silent --disable-nouveau --dkms --install-libglvnd
NOTE: This step will disable the nouveau drivers.
Reboot
sudo reboot
Install
sudo sh NVIDIA-Linux-x86_64-510.47.03.run --silent --disable-nouveau --dkms --install-libglvnd
NOTE: If you are using a laptop with NVIDIA Optimus, run

sudo apt-get install nvidia-prime
sudo prime-select nvidia
Desktop
Run
sudo sh NVIDIA-Linux-x86_64-510.47.03.run --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
NOTE: This step will disable the nouveau drivers.
Reboot
sudo reboot
Install
sudo sh NVIDIA-Linux-x86_64-510.47.03.run --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
Data center / Tesla series
Download
wget https://us.download.nvidia.com/tesla/510.47.03/NVIDIA-Linux-x86_64-510.47.03.run
Run
sudo sh NVIDIA-Linux-x86_64-510.47.03.run --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
wget https://developer.download.nvidia.com/compute/cuda/11.6.1/local_installers/cuda_11.6.1_510.47.03_linux.run
sudo sh cuda_11.6.1_510.47.03_linux.run --silent --toolkit
- Export environment variables
echo $'export PATH=/usr/local/cuda-11.6/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-11.6/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
6. Download TensorRT from the NVIDIA website and install it
TensorRT 8.2 GA Update 4 for Ubuntu 20.04 and CUDA 11.0, 11.1, 11.2, 11.3, 11.4 and 11.5 DEB local repo Package
sudo dpkg -i nv-tensorrt-repo-ubuntu2004-cuda11.4-trt8.2.5.1-ga-20220505_1-1_amd64.deb
sudo apt-key add /var/nv-tensorrt-repo-ubuntu2004-cuda11.4-trt8.2.5.1-ga-20220505/82307095.pub
sudo apt-get update
sudo apt-get install libnvinfer8=8.2.5-1+cuda11.4 libnvinfer-plugin8=8.2.5-1+cuda11.4 libnvparsers8=8.2.5-1+cuda11.4 libnvonnxparsers8=8.2.5-1+cuda11.4 libnvinfer-bin=8.2.5-1+cuda11.4 libnvinfer-dev=8.2.5-1+cuda11.4 libnvinfer-plugin-dev=8.2.5-1+cuda11.4 libnvparsers-dev=8.2.5-1+cuda11.4 libnvonnxparsers-dev=8.2.5-1+cuda11.4 libnvinfer-samples=8.2.5-1+cuda11.4 libnvinfer-doc=8.2.5-1+cuda11.4 libcudnn8-dev=8.4.0.27-1+cuda11.6 libcudnn8=8.4.0.27-1+cuda11.6
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* tensorrt
7. Download the DeepStream SDK from the NVIDIA website and install it
DeepStream 6.1 for Servers and Workstations (.deb)
sudo apt-get install ./deepstream-6.1_6.1.0-1_amd64.deb
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-11.6 /usr/local/cuda
sudo reboot
DeepStream 6.0.1 / 6.0
If you are using a laptop with a newer Intel/AMD processor and the Graphics entry in the Settings -> Details -> About tab shows llvmpipe, update the kernel.
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.11/amd64/linux-headers-5.11.0-051100_5.11.0-051100.202102142330_all.deb
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.11/amd64/linux-headers-5.11.0-051100-generic_5.11.0-051100.202102142330_amd64.deb
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.11/amd64/linux-image-unsigned-5.11.0-051100-generic_5.11.0-051100.202102142330_amd64.deb
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.11/amd64/linux-modules-5.11.0-051100-generic_5.11.0-051100.202102142330_amd64.deb
sudo dpkg -i *.deb
sudo reboot
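After the reboot, confirm the new kernel is active:

uname -r
# should print 5.11.0-051100-generic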
sudo apt-get update
sudo apt-get install gcc make git libtool autoconf autogen pkg-config cmake
sudo apt-get install python3 python3-dev python3-pip
sudo apt-get install libssl1.0.0 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstrtspserver-1.0-0 libjansson4
sudo apt-get install linux-headers-$(uname -r)
NOTE: Install DKMS only if you are using the default Ubuntu kernel
sudo apt-get install dkms
NOTE: Purge all NVIDIA drivers, CUDA, etc. (replace $CUDA_PATH with your CUDA path)
sudo nvidia-uninstall
sudo $CUDA_PATH/bin/cuda-uninstaller
sudo apt-get remove --purge '*nvidia*'
sudo apt-get remove --purge '*cuda*'
sudo apt-get remove --purge '*cudnn*'
sudo apt-get remove --purge '*tensorrt*'
sudo apt autoremove --purge && sudo apt autoclean && sudo apt clean
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
TITAN, GeForce RTX / GTX series and RTX / Quadro series
Download
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/470.129.06/NVIDIA-Linux-x86_64-470.129.06.run
Laptop
Run
sudo sh NVIDIA-Linux-x86_64-470.129.06.run --silent --disable-nouveau --dkms --install-libglvnd
NOTE: This step will disable the nouveau drivers.
NOTE: Remove --dkms flag if you installed the 5.11.0 kernel.
Reboot
sudo reboot
Install
sudo sh NVIDIA-Linux-x86_64-470.129.06.run --silent --disable-nouveau --dkms --install-libglvnd
NOTE: Remove --dkms flag if you installed the 5.11.0 kernel.
NOTE: If you are using a laptop with NVIDIA Optimus, run

sudo apt-get install nvidia-prime
sudo prime-select nvidia
Desktop
Run
sudo sh NVIDIA-Linux-x86_64-470.129.06.run --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
NOTE: This step will disable the nouveau drivers.
NOTE: Remove --dkms flag if you installed the 5.11.0 kernel.
Reboot
sudo reboot
Install
sudo sh NVIDIA-Linux-x86_64-470.129.06.run --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
NOTE: Remove --dkms flag if you installed the 5.11.0 kernel.
Data center / Tesla series
Download
wget https://us.download.nvidia.com/tesla/470.129.06/NVIDIA-Linux-x86_64-470.129.06.run
Run
sudo sh NVIDIA-Linux-x86_64-470.129.06.run --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
NOTE: Remove --dkms flag if you installed the 5.11.0 kernel.
wget https://developer.download.nvidia.com/compute/cuda/11.4.1/local_installers/cuda_11.4.1_470.57.02_linux.run
sudo sh cuda_11.4.1_470.57.02_linux.run --silent --toolkit
- Export environment variables
echo $'export PATH=/usr/local/cuda-11.4/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
6. Download TensorRT from the NVIDIA website and install it
TensorRT 8.0.1 GA for Ubuntu 18.04 and CUDA 11.3 DEB local repo package
sudo dpkg -i nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626_1-1_amd64.deb
sudo apt-key add /var/nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626/7fa2af80.pub
sudo apt-get update
sudo apt-get install libnvinfer8=8.0.1-1+cuda11.3 libnvinfer-plugin8=8.0.1-1+cuda11.3 libnvparsers8=8.0.1-1+cuda11.3 libnvonnxparsers8=8.0.1-1+cuda11.3 libnvinfer-bin=8.0.1-1+cuda11.3 libnvinfer-dev=8.0.1-1+cuda11.3 libnvinfer-plugin-dev=8.0.1-1+cuda11.3 libnvparsers-dev=8.0.1-1+cuda11.3 libnvonnxparsers-dev=8.0.1-1+cuda11.3 libnvinfer-samples=8.0.1-1+cuda11.3 libnvinfer-doc=8.0.1-1+cuda11.3 libcudnn8-dev=8.2.1.32-1+cuda11.3 libcudnn8=8.2.1.32-1+cuda11.3
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* tensorrt
7. Download the DeepStream SDK from the NVIDIA website and install it

- DeepStream 6.0.1 for Servers and Workstations (.deb)

sudo apt-get install ./deepstream-6.0_6.0.1-1_amd64.deb

- DeepStream 6.0 for Servers and Workstations (.deb)

sudo apt-get install ./deepstream-6.0_6.0.0-1_amd64.deb

- Run

rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-11.4 /usr/local/cuda
sudo reboot
git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
cd DeepStream-Yolo
2. Download the cfg and weights files from the Darknet repo to the DeepStream-Yolo folder
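For example, for YOLOv4 the files can be fetched directly from the AlexeyAB/darknet repository and its releases page (URLs valid at the time of writing; check the Darknet repo if they have moved):

wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg
wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights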
3. Compile the lib

- DeepStream 6.2 on x86 platform

CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo

- DeepStream 6.1.1 on x86 platform

CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo

- DeepStream 6.1 on x86 platform

CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo

- DeepStream 6.0.1 / 6.0 on x86 platform

CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo

- DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo

- DeepStream 6.0.1 / 6.0 on Jetson platform

CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
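Whichever platform you build for, the make step should produce the custom parser library that the config_infer files load (a quick sanity check; the path assumes the repository layout above):

ls nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
# the shared object must exist; otherwise check the make output for errors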
4. Edit the config_infer_primary.txt file according to your model

[property]
...
custom-network-config=yolov4.cfg
model-file=yolov4.weights
...
deepstream-app -c deepstream_app_config.txt
NOTE: The TensorRT engine file may take a very long time to generate (sometimes more than 10 minutes).
NOTE: If you want to use YOLOv2 or YOLOv2-Tiny models, change the deepstream_app_config.txt file before running it
...
[primary-gie]
...
config-file=config_infer_primary_yoloV2.txt
...
- x86 platform

nvcr.io/nvidia/deepstream:6.2-devel
nvcr.io/nvidia/deepstream:6.2-triton
- Jetson platform

nvcr.io/nvidia/deepstream-l4t:6.2-samples
nvcr.io/nvidia/deepstream-l4t:6.2-triton
NOTE: To compile the nvdsinfer_custom_impl_Yolo lib, you need to install g++ inside the container

apt-get install build-essential
NOTE: With DeepStream 6.2, the Docker containers do not package the libraries necessary for certain multimedia operations like audio data parsing, CPU decode, and CPU encode. This change could affect the processing of certain video streams/files, such as mp4 files that include an audio track. Please run the script below inside the Docker images to install the additional packages that might be necessary to use all of the DeepStream SDK features:
/opt/nvidia/deepstream/deepstream/user_additional_install.sh
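To start one of the containers listed above on an x86 host, a typical interactive run looks like the sketch below. It assumes the NVIDIA Container Toolkit is installed and X11 display forwarding; adjust the image tag and mounts to your setup (on Jetson, use --runtime nvidia instead of --gpus all):

# sketch: run the devel container with GPU access, host networking and the current folder mounted
docker run --gpus all -it --rm --net=host \
    -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v $(pwd):/workspace \
    nvcr.io/nvidia/deepstream:6.2-devel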
To change the nms-iou-threshold, pre-cluster-threshold and topk values, modify the config_infer file
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
NOTE: Make sure to set cluster-mode=2 in the config_infer file.
NOTE: For now, this only applies to Darknet YOLO models.
1. Install OpenCV

sudo apt-get install libopencv-dev
2. Compile/recompile the nvdsinfer_custom_impl_Yolo lib with OpenCV support

- DeepStream 6.2 on x86 platform

CUDA_VER=11.8 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo

- DeepStream 6.1.1 on x86 platform

CUDA_VER=11.7 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo

- DeepStream 6.1 on x86 platform

CUDA_VER=11.6 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo

- DeepStream 6.0.1 / 6.0 on x86 platform

CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo

- DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo

- DeepStream 6.0.1 / 6.0 on Jetson platform

CUDA_VER=10.2 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
3. For the COCO dataset, download val2017 (http://images.cocodataset.org/zips/val2017.zip), extract it, and move it to the DeepStream-Yolo folder
- Select 1000 random images from the COCO dataset to run calibration
mkdir calibration
for jpg in $(ls -1 val2017/*.jpg | sort -R | head -1000); do \
    cp ${jpg} calibration/; \
done
- Create the calibration.txt file with all selected images

realpath calibration/*jpg > calibration.txt
- Set environment variables

export INT8_CALIB_IMG_PATH=calibration.txt
export INT8_CALIB_BATCH_SIZE=1
- Edit the config_infer file

...
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
...
network-mode=0
...

To

...
model-engine-file=model_b1_gpu0_int8.engine
int8-calib-file=calib.table
...
network-mode=1
...
- Run

deepstream-app -c deepstream_app_config.txt
NOTE: NVIDIA recommends at least 500 images to get good accuracy. In this example, I recommend using 1000 images to get better accuracy (more images = more accuracy). Higher INT8_CALIB_BATCH_SIZE values will result in better accuracy and faster calibration. Set it according to your GPU memory. This process may take a long time.
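Before building the INT8 engine, it can help to confirm that calibration.txt really lists the selected images:

wc -l calibration.txt
# should print 1000 (the number of selected images)
head -n 3 calibration.txt
# should print absolute paths to .jpg files inside the calibration folder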
You can get metadata from DeepStream using Python and C/C++. For C/C++, you can edit the deepstream-app or deepstream-test codes. For Python, you can install and edit deepstream_python_apps.

Basically, you need to manipulate the NvDsObjectMeta (Python / C/C++) and NvDsFrameMeta (Python / C/C++) to get the label, position, etc. of the bboxes.
My projects: https://www.youtube.com/MarcosLucianoTV