
feat: adding YOLONAS to ml backend examples #316

Open · wants to merge 2 commits into base: `master`
12 changes: 12 additions & 0 deletions label_studio_ml/examples/yolonas/.env
@@ -0,0 +1,12 @@
CHECKPOINT_FILE="/home/testuser/app/model.pth"
PORT=9090
YOLO_LABELS=/home/testuser/app/labels.txt
IOU_THRESHOLD=0.25
SCORE_THRESHOLD=0.4
IMG_SIZE=1280
DEVICE=cpu
ENDPOINT_URL=<specify minio address http://myminio:9000>
AWS_ACCESS_KEY_ID=minio
AWS_SECRET_ACCESS_KEY=<specify minio password>
LABEL_STUDIO_HOSTNAME=<specify label studio address with port like http://mylabelstudio.com:8080 >
YOLO_MODEL_TYPE=yolo_nas_m
5 changes: 5 additions & 0 deletions label_studio_ml/examples/yolonas/Dockerfile
@@ -0,0 +1,5 @@
FROM bodbe/yolonas
USER testuser
WORKDIR /home/testuser/app
COPY _wsgi.py ./
COPY model.py ./
63 changes: 63 additions & 0 deletions label_studio_ml/examples/yolonas/Dockerfile.full
@@ -0,0 +1,63 @@
FROM nvidia/cuda:11.2.2-cudnn8-devel-ubuntu18.04

ENV TZ 'Europe/Moscow'
RUN echo $TZ > /etc/timezone

RUN apt-get update && apt-get install -y locales sudo
RUN sed -i -e 's/# ru_RU.UTF-8 UTF-8/ru_RU.UTF-8 UTF-8/' /etc/locale.gen && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=ru_RU.UTF-8
ENV LANG ru_RU.UTF-8
ENV LANGUAGE ru_RU
ENV LC_ALL ru_RU.UTF-8

RUN apt-get update && apt-get install -y python3.8 python3.8-dev python3-pip git
#RUN apt-get upgrade python3-pip
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1

RUN apt-get install -y ffmpeg libsm6 libxext6 libgl1-mesa-glx libgl1
RUN apt-get install -y wget nano sudo
RUN wget https://bootstrap.pypa.io/get-pip.py && python3.8 get-pip.py
RUN apt-get clean

RUN useradd -ms /bin/bash testuser && \
echo "testuser:testuser" | chpasswd && \
usermod -aG sudo testuser && \
chmod 777 -R /root

WORKDIR /home/testuser

USER testuser
ENV PATH="/home/testuser/.local/bin:${PATH}"

RUN mkdir -p /home/testuser/.jupyter/lab/user-settings/@jupyterlab/terminal-extension
COPY --chown=testuser plugin.jupyterlab-settings /home/testuser/.jupyter/lab/user-settings/@jupyterlab/terminal-extension/
COPY --chown=testuser jupyter_lab_config.py /home/testuser/.jupyter/

RUN openssl req -x509 -sha256 -nodes -days 3650 -newkey rsa:4096 -keyout .jupyter/jupyter.key -out .jupyter/jupyter.pem \
-subj "/C=RU/ST=Uranopolis/L=Karyes/O=Space/OU=DS/CN=agion.oros"

RUN python3.8 -m pip install nvidia-pyindex
RUN python3.8 -m pip install pytorch-quantization==2.1.2
COPY requirements.txt .
RUN python3.8 -m pip install -U -r requirements.txt
RUN python3.8 -m pip install super-gradients==3.1.2

############
## add-ons for late package installs
############
#RUN python3.8 -m pip install redis rq label-studio-ml

# setting up token string
# you must run the build with command
# docker build --build-arg token_string=<whatever your token is> . -t torch_custom
ARG token_string
ENV JUPYTER_TOKEN=${token_string}

ENV SHELL="/bin/bash"

RUN mkdir /home/testuser/.clearml && mkdir /home/testuser/.ssh
RUN ln -s /usr/bin/python3 ~/.local/bin/python
WORKDIR /home/testuser

CMD ["jupyter", "lab"]
46 changes: 46 additions & 0 deletions label_studio_ml/examples/yolonas/README.md
@@ -0,0 +1,46 @@
# YOLONAS ML Backend for Label Studio

## Intro
Use Deci AI [YOLONAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) model with Label Studio.

## Setup
### 0. Important things to note
- This ML backend is designed to run in **Docker**. You can run it on the host, but this guide does not cover that.
- There is no easy way to run the ML backend with GPU support under the default gunicorn command: you get the error `RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method` (see step 6 for a workaround).
- Single-image inference on CPU takes about a second.
- The base Docker image runs `jupyter lab` by default, so you can comment out the `command` in `docker-compose.yml`, uncomment the `8888:8888` port mapping, and use the container as a Jupyter Lab server over HTTPS with password `change-me`.
- The main tested scenario is S3 cloud storage with a custom endpoint URL; other storage options are not guaranteed to work.
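Label Studio's object-detection results use rectangle coordinates expressed as percentages of the image size, so a backend like this one converts the model's pixel-space boxes before returning predictions. A minimal sketch of that conversion (the `to_label_studio_box` helper and the `from_name`/`to_name` values are illustrative assumptions, not code from `model.py`):

```python
def to_label_studio_box(x1, y1, x2, y2, img_w, img_h, label, score):
    """Convert a pixel-space (x1, y1, x2, y2) box to Label Studio's percent-based rectangle."""
    return {
        "from_name": "label",   # control tag names assumed from a typical labeling config
        "to_name": "image",
        "type": "rectanglelabels",
        "value": {
            "rectanglelabels": [label],
            "x": x1 / img_w * 100,               # left edge, percent of image width
            "y": y1 / img_h * 100,               # top edge, percent of image height
            "width": (x2 - x1) / img_w * 100,
            "height": (y2 - y1) / img_h * 100,
        },
        "score": score,
    }

# A 640x360 box in the middle of a 1280x720 image becomes x=25, y=25, width=50, height=50
result = to_label_studio_box(320, 180, 960, 540, 1280, 720, "airplane", 0.87)
```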
### 1. Clone this repo
### 2. Get model weights
Place your trained YOLO-NAS checkpoint in the repository root as `model.pth` and your labels file as `labels.txt`; `docker-compose.yml` mounts both into the container at the paths configured in `.env`.
### 3. Adjust variables
Adjust these variables in the `.env` file.
```
CHECKPOINT_FILE="/home/testuser/app/model.pth"
PORT=9090
YOLO_LABELS=/home/testuser/app/labels.txt
IOU_THRESHOLD=0.25
SCORE_THRESHOLD=0.4
IMG_SIZE=1280
DEVICE=cpu
ENDPOINT_URL=<specify minio address http://myminio:9000>
AWS_ACCESS_KEY_ID=minio
AWS_SECRET_ACCESS_KEY=<specify minio password>
LABEL_STUDIO_HOSTNAME=<specify label studio address with port like http://mylabelstudio.com:8080 >
YOLO_MODEL_TYPE=yolo_nas_m
```
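`SCORE_THRESHOLD` drops low-confidence detections and `IOU_THRESHOLD` controls non-maximum suppression. A rough sketch of how such thresholds are typically applied (illustrative only; YOLO-NAS applies them internally during post-processing):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def filter_detections(dets, score_thr=0.4, iou_thr=0.25):
    """Keep confident boxes, then greedily suppress overlapping ones (NMS)."""
    dets = sorted((d for d in dets if d["score"] >= score_thr),
                  key=lambda d: d["score"], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d["box"], k["box"]) < iou_thr for k in kept):
            kept.append(d)
    return kept

dets = [
    {"box": (0, 0, 100, 100), "score": 0.9},
    {"box": (5, 5, 105, 105), "score": 0.8},    # heavily overlaps the first box
    {"box": (200, 200, 300, 300), "score": 0.5},
    {"box": (0, 0, 50, 50), "score": 0.1},      # below SCORE_THRESHOLD
]
print(len(filter_detections(dets)))  # → 2
```

Raising `IOU_THRESHOLD` keeps more overlapping boxes; raising `SCORE_THRESHOLD` keeps only more confident ones.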

`YOLO_LABELS=/home/testuser/app/labels.txt` points to a file with labels, one label per line.
The labels should be the same in the labeling interface and in this file.
If the YOLO labels differ, provide a `LABELS_FILE` variable with a JSON mapping from Label Studio label to YOLO label, e.g. `{"airplane": "Boeing"}`.
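As a sketch, such a mapping can be inverted to translate a model prediction back to a Label Studio label (the label values here are hypothetical examples):

```python
import json

# Hypothetical example: labels.txt contents and a LABELS_FILE mapping
yolo_labels = ["Boeing", "Airbus"]  # one label per line in labels.txt
ls_to_yolo = json.loads('{"airplane": "Boeing", "jet": "Airbus"}')

# Invert the Label Studio -> YOLO mapping to translate predictions back
yolo_to_ls = {v: k for k, v in ls_to_yolo.items()}

predicted = yolo_labels[0]                     # the model predicted class index 0
ls_label = yolo_to_ls.get(predicted, predicted)
print(ls_label)  # → airplane
```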

### 4. Build docker image
Run `docker compose build` to build the image.
The base image `bodbe/yolonas` is built with [Dockerfile.full](Dockerfile.full).

### 5. Run ML Backend
Run `docker compose up -d`

### 6. How to run on GPU
- Update the `DEVICE` variable in the `.env` file to `cuda:0`
- Uncomment the `deploy` section in `docker-compose.yml`
- Change the `command` in `docker-compose.yml` to `bash -c "python _wsgi.py"`
126 changes: 126 additions & 0 deletions label_studio_ml/examples/yolonas/_wsgi.py
@@ -0,0 +1,126 @@
import os
import argparse
import json
import logging
import logging.config

logging.config.dictConfig({
    "version": 1,
    "formatters": {
        "standard": {
            "format": "[%(asctime)s] [%(levelname)s] [%(name)s::%(funcName)s::%(lineno)d] %(message)s"
        }
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "stream": "ext://sys.stdout",
            "formatter": "standard"
        }
    },
    "root": {
        "level": "ERROR",
        "handlers": [
            "console"
        ],
        "propagate": True
    }
})

from label_studio_ml.api import init_app
from model import ObjectDetectorModel


_DEFAULT_CONFIG_PATH = os.path.join(os.path.dirname(__file__), 'config.json')


def get_kwargs_from_config(config_path=_DEFAULT_CONFIG_PATH):
    if not os.path.exists(config_path):
        return dict()
    with open(config_path) as f:
        config = json.load(f)
    assert isinstance(config, dict)
    return config


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Label studio')
    parser.add_argument(
        '-p', '--port', dest='port', type=int, default=9090,
        help='Server port')
    parser.add_argument(
        '--host', dest='host', type=str, default='0.0.0.0',
        help='Server host')
    parser.add_argument(
        '--kwargs', '--with', dest='kwargs', metavar='KEY=VAL', nargs='+', type=lambda kv: kv.split('='),
        help='Additional LabelStudioMLBase model initialization kwargs')
    parser.add_argument(
        '-d', '--debug', dest='debug', action='store_true',
        help='Switch debug mode')
    parser.add_argument(
        '--log-level', dest='log_level', choices=['DEBUG', 'INFO', 'WARNING', 'ERROR'], default=None,
        help='Logging level')
    parser.add_argument(
        '--model-dir', dest='model_dir', default=os.path.dirname(__file__),
        help='Directory where models are stored (relative to the project directory)')
    parser.add_argument(
        '--check', dest='check', action='store_true',
        help='Validate model instance before launching server')

    args = parser.parse_args()

    # setup logging level
    if args.log_level:
        logging.root.setLevel(args.log_level)

    def isfloat(value):
        try:
            float(value)
            return True
        except ValueError:
            return False

    def parse_kwargs():
        param = dict()
        for k, v in args.kwargs:
            if v.isdigit():
                param[k] = int(v)
            elif v == 'True' or v == 'true':
                param[k] = True
            elif v == 'False' or v == 'false':
                param[k] = False
            elif isfloat(v):
                param[k] = float(v)
            else:
                param[k] = v
        return param

    kwargs = get_kwargs_from_config()

    if args.kwargs:
        kwargs.update(parse_kwargs())

    if args.check:
        print('Check "' + ObjectDetectorModel.__name__ + '" instance creation..')
        model = ObjectDetectorModel(**kwargs)

    app = init_app(
        model_class=ObjectDetectorModel,
        model_dir=os.environ.get('MODEL_DIR', args.model_dir),
        redis_queue=os.environ.get('RQ_QUEUE_NAME', 'default'),
        redis_host=os.environ.get('REDIS_HOST', 'localhost'),
        redis_port=os.environ.get('REDIS_PORT', 6379),
        **kwargs
    )

    app.run(host=args.host, port=args.port, debug=args.debug)

else:
    # for uWSGI use
    app = init_app(
        model_class=ObjectDetectorModel,
        model_dir=os.environ.get('MODEL_DIR', os.path.dirname(__file__)),
        redis_queue=os.environ.get('RQ_QUEUE_NAME', 'default'),
        redis_host=os.environ.get('REDIS_HOST', 'localhost'),
        redis_port=os.environ.get('REDIS_PORT', 6379)
    )
57 changes: 57 additions & 0 deletions label_studio_ml/examples/yolonas/docker-compose.yml
@@ -0,0 +1,57 @@
version: "3.8"

services:
  redis:
    image: redis:alpine
    container_name: redis
    hostname: redis
    volumes:
      - "./data/redis:/data"
    expose:
      - 6379
  server:
    build: .
    container_name: server
    environment:
      # file with the YOLO-NAS checkpoint
      - CHECKPOINT_FILE=${CHECKPOINT_FILE:-/home/testuser/app/model.pth}
      # file with labels - each label on a new line
      # labels should be the same in the labeling interface and in this file
      # otherwise provide a LABELS_FILE variable with a mapping from
      # Label Studio label to YOLO label like {"airplane": "Boeing"}
      - YOLO_LABELS=${YOLO_LABELS:-/home/testuser/app/labels.txt}
      - LABELS_FILE=${LABELS_FILE:-}
      - IOU_THRESHOLD=${IOU_THRESHOLD:-0.25}
      - SCORE_THRESHOLD=${SCORE_THRESHOLD:-0.45}
      # target resolution to feed to YOLO
      - IMG_SIZE=${IMG_SIZE:-1280}
      - DEVICE=${DEVICE:-cpu}
      # custom endpoint url for the s3 storage that holds files in Label Studio
      - ENDPOINT_URL=${ENDPOINT_URL:-}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID:-}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - LABEL_STUDIO_HOSTNAME=${LABEL_STUDIO_HOSTNAME}
      # yolo model specification
      - YOLO_MODEL_TYPE=${YOLO_MODEL_TYPE:-yolo_nas_m}
      - RQ_QUEUE_NAME=default
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - MODEL_DIR=./
    ports:
      # uncomment to use jupyter lab
      # - 8888:8888
      - ${PORT}:${PORT:-9090}
    restart: always
    volumes:
      - ./model.pth:${CHECKPOINT_FILE}
      - ./labels.txt:${YOLO_LABELS}
    command: bash -c "exec gunicorn --preload --bind :${PORT} --workers 1 --threads 8 --timeout 0 _wsgi:app"
    # uncomment if you want GPU access inside docker
    # command: bash -c "python _wsgi.py"
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           device_ids: ['0']
    #           capabilities: [gpu]