+
+Here are the issues and why it is difficult to achieve the same score as the official implementation:
+
+The first one:
+
+1. Altered EfficientNet the wrong way: strides were changed to adapt to the BiFPN, but we should be aware that EfficientNet's great performance comes from its specific parameter combination. Any slight alteration could lead to worse performance.
+
+The second one:
+
+1. PyTorch's BatchNorm is slightly different from TensorFlow's: momentum_pytorch = 1 - momentum_tensorflow. I would not have noticed this trap had I paid less attention. signatrix/efficientdet inherited the parameter value from TensorFlow, so its BN performs badly because the running mean and the running variance are dominated by each new input.
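+
+    A minimal sketch of the conversion (0.99 is just an example TF value):
+
+    ```python
+    import torch.nn as nn
+
+    # TF update:      running = m * running + (1 - m) * batch_stat
+    # PyTorch update: running = (1 - m) * running + m * batch_stat
+    tf_momentum = 0.99
+    bn = nn.BatchNorm2d(64, momentum=1 - tf_momentum)  # i.e. momentum=0.01 in PyTorch terms
+    ```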
+
+2. Mis-implemented Depthwise-Separable Conv2D. A Depthwise-Separable Conv2D is a Depthwise-Conv2D followed by a Pointwise-Conv2D and a BiasAdd: there is only one BiasAdd, after the two convolutions, while signatrix/efficientdet has an extra BiasAdd on the Depthwise-Conv2D.
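+
+    A minimal sketch of the correct layout (not the repo's exact module):
+
+    ```python
+    import torch.nn as nn
+
+    class SeparableConv2d(nn.Module):
+        def __init__(self, in_ch, out_ch, kernel_size=3):
+            super().__init__()
+            # depthwise conv: no bias here
+            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, padding=kernel_size // 2,
+                                       groups=in_ch, bias=False)
+            # pointwise conv: the single BiasAdd lives here
+            self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=True)
+
+        def forward(self, x):
+            return self.pointwise(self.depthwise(x))
+    ```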
+
+3. Misunderstood the first parameter of MaxPooling2D: the first parameter is kernel_size, not stride.
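+
+    In PyTorch:
+
+    ```python
+    import torch.nn as nn
+
+    # nn.MaxPool2d(kernel_size, stride=None): the first positional argument is the
+    # kernel size, and stride defaults to kernel_size when omitted.
+    pool = nn.MaxPool2d(3, 2)  # a 3x3 window with stride 2, not the other way around
+    ```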
+
+4. Missing BN after down-channeling the features of the EfficientNet output.
+
+5. Using the wrong output features of EfficientNet. This is a big one. It takes whatever output has a conv stride of 2, but that's wrong. It should take the feature map whose next conv has stride 2, or the final output of EfficientNet.
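+
+    A toy sketch of the rule, with plain convs standing in for EfficientNet's blocks:
+
+    ```python
+    import torch
+    import torch.nn as nn
+
+    blocks = nn.ModuleList([
+        nn.Conv2d(3, 16, 3, stride=1, padding=1),
+        nn.Conv2d(16, 32, 3, stride=2, padding=1),
+        nn.Conv2d(32, 64, 3, stride=1, padding=1),
+        nn.Conv2d(64, 128, 3, stride=2, padding=1),
+    ])
+
+    x = torch.randn(1, 3, 64, 64)
+    features, last = [], x
+    for block in blocks:
+        y = block(last)
+        if block.stride[0] == 2:
+            features.append(last)  # the map from *before* the resolution drops
+        last = y
+    features.append(last)          # plus the final backbone output
+    ```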
+
+6. Does not apply same padding on Conv2D and Pooling.
+
+7. Missing swish activation after several operations.
+
+8. Missing Conv/BN operations in the BiFPN, the Regressor and the Classifier. This one is very tricky: if you don't dig deep into the official implementation, you will miss that some seemingly identical operations have their own separate weights.
+
+    Illustration of a minimal BiFPN unit:
+ P7_0 -------------------------> P7_2 -------->
+ |-------------| ↑
+ ↓ |
+ P6_0 ---------> P6_1 ---------> P6_2 -------->
+ |-------------|--------------↑ ↑
+ ↓ |
+ P5_0 ---------> P5_1 ---------> P5_2 -------->
+ |-------------|--------------↑ ↑
+ ↓ |
+ P4_0 ---------> P4_1 ---------> P4_2 -------->
+ |-------------|--------------↑ ↑
+ |--------------↓ |
+ P3_0 -------------------------> P3_2 -------->
+
+    For example, P4 will be down-channeled to P4_0, which then goes to P4_1.
+    One may take it for granted that P4_0 also goes directly to P4_2, right?
+
+    That's where they went wrong:
+    P4 should be down-channeled again, with different weights, to P4_0_another,
+    which then goes to P4_2.
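+
+    A minimal sketch of that detail (example channel counts; the real model also puts BN after each down-channel conv):
+
+    ```python
+    import torch
+    import torch.nn as nn
+
+    c_in, c_bifpn = 112, 64  # example channel counts
+    p4 = torch.randn(1, c_in, 32, 32)
+
+    # two independent 1x1 down-channel convs with their own weights, not one shared conv
+    down_channel_1 = nn.Conv2d(c_in, c_bifpn, 1)
+    down_channel_2 = nn.Conv2d(c_in, c_bifpn, 1)
+
+    p4_0 = down_channel_1(p4)          # feeds P4_1
+    p4_0_another = down_channel_2(p4)  # feeds P4_2
+    ```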
+
+And finally, some common issues: their anchor decoders and encoders are different from the original ones, but that's not the main reason they perform badly.
+
+Also, Conv2dStaticSamePadding from [EfficientNet-PyTorch](https://github.com/lukemelas/EfficientNet-PyTorch) does not behave like TensorFlow's; the padding strategy is different. So I implemented real TensorFlow-style [Conv2dStaticSamePadding](efficientnet/utils_extra.py#L9) and [MaxPool2dStaticSamePadding](efficientnet/utils_extra.py#L55) myself.
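+
+A minimal sketch of the TF-style strategy (ignoring dilation; not the exact code in utils_extra.py): TF computes the total padding needed to keep `ceil(size / stride)` outputs and, when that total is odd, puts the extra pixel on the right/bottom, while PyTorch's built-in padding is always symmetric.
+
+```python
+import math
+
+import torch.nn as nn
+import torch.nn.functional as F
+
+
+class TfStyleSamePaddingConv2d(nn.Module):
+    def __init__(self, in_ch, out_ch, kernel_size, stride=1, **kwargs):
+        super().__init__()
+        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, **kwargs)
+
+    def forward(self, x):
+        h, w = x.shape[-2:]
+        (sh, sw), (kh, kw) = self.conv.stride, self.conv.kernel_size
+        # total padding that keeps ceil(size / stride) output elements
+        pad_h = max((math.ceil(h / sh) - 1) * sh + kh - h, 0)
+        pad_w = max((math.ceil(w / sw) - 1) * sw + kw - w, 0)
+        # TF puts the extra pixel on the right/bottom when the total is odd
+        x = F.pad(x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2])
+        return self.conv(x)
+```
+
+MaxPool2dStaticSamePadding follows the same recipe with a pooling op in place of the conv.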
+
+Despite the above issues, they are great repositories that enlightened me, hence this repository.
+
+This repository is mainly based on [efficientdet](https://github.com/signatrix/efficientdet), with changes that make sure it performs as close to the paper as possible.
+
+Btw, debugging static-graph TensorFlow v1 is really painful. Don't try to export it with automation tools like tf-onnx or mmdnn; they will only cause more problems because of its custom/complex operations.
+
+And even if you succeed, like I did, you will have to deal with crazy, messed-up machine-generated code crammed into a single class, which takes more time to refactor than translating the model from scratch.
+
+**Q3: What should I do when I find a bug?**
+
+A3: Check the update log to see whether it has already been fixed, then pull the latest code and try again. If that doesn't help, create a new issue and describe it in detail.
+
+## Known issues
+
+1. The official EfficientDet uses TensorFlow bilinear interpolation to resize image inputs, which differs from many other implementations (OpenCV/PyTorch), so the output is bound to be slightly different from the official one.
+
+## Visual Comparison
+
+Conclusion: they provide almost the same precision. Tip: set `force_input_size=1920`. The official repo uses the original image size while this repo uses the default network input size. If you try to compare the two repos, you must make sure the input size is consistent.
+
+### This Repo
+
+
+
+### Official EfficientDet
+
+
+
+## References
+
+Appreciate the great work from the following repositories:
+
+- [google/automl](https://github.com/google/automl)
+- [lukemelas/EfficientNet-PyTorch](https://github.com/lukemelas/EfficientNet-PyTorch)
+- [signatrix/efficientdet](https://github.com/signatrix/efficientdet)
+- [vacancy/Synchronized-BatchNorm-PyTorch](https://github.com/vacancy/Synchronized-BatchNorm-PyTorch)
+
+## Donation
+
+If you like this repository, or if you'd like to support the author for any reason, you can donate to the author. Feel free to send me your name or introduction page, and I will make sure your name(s) appear on the sponsors list.
+
+
+
+## Sponsors
+
+Sincere thanks for your generosity.
+
+[cndylan](https://github.com/cndylan)
+[claire-s11](https://github.com/claire-s11)
diff --git a/efficient_det_ros/train.py b/efficient_det_ros/train.py
new file mode 100644
index 0000000..04133cb
--- /dev/null
+++ b/efficient_det_ros/train.py
@@ -0,0 +1,326 @@
+# original author: signatrix
+# adapted from https://github.com/signatrix/efficientdet/blob/master/train.py
+# modified by Zylo117
+
+import argparse
+import datetime
+import os
+import traceback
+
+import numpy as np
+import torch
+import yaml
+from tensorboardX import SummaryWriter
+from torch import nn
+from torch.utils.data import DataLoader
+from torchvision import transforms
+from tqdm.autonotebook import tqdm
+
+from backbone import EfficientDetBackbone
+from efficientdet.dataset import CocoDataset, Resizer, Normalizer, Augmenter, collater
+from efficientdet.loss import FocalLoss
+from utils.sync_batchnorm import patch_replication_callback
+from utils.utils import replace_w_sync_bn, CustomDataParallel, get_last_weights, init_weights, boolean_string
+
+
+class Params:
+ def __init__(self, project_file):
+ self.params = yaml.safe_load(open(project_file).read())
+
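+    # note: missing keys resolve to None instead of raising AttributeError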
+ def __getattr__(self, item):
+ return self.params.get(item, None)
+
+
+def get_args():
+ parser = argparse.ArgumentParser('Yet Another EfficientDet Pytorch: SOTA object detection network - Zylo117')
+ parser.add_argument('-p', '--project', type=str, default='coco', help='project file that contains parameters')
+ parser.add_argument('-c', '--compound_coef', type=int, default=0, help='coefficients of efficientdet')
+ parser.add_argument('-n', '--num_workers', type=int, default=12, help='num_workers of dataloader')
+ parser.add_argument('--batch_size', type=int, default=12, help='The number of images per batch among all devices')
+    parser.add_argument('--head_only', type=boolean_string, default=False,
+                        help='whether to finetune only the regressor and the classifier, '
+                             'useful for early-stage convergence or a small/easy dataset')
+ parser.add_argument('--lr', type=float, default=1e-4)
+    parser.add_argument('--optim', type=str, default='adamw', help='select optimizer for training, '
+                                                                   'suggest using \'adamw\' until the '
+                                                                   'very final stage, then switch to \'sgd\'')
+ parser.add_argument('--num_epochs', type=int, default=500)
+    parser.add_argument('--val_interval', type=int, default=1, help='Number of epochs between validation phases')
+ parser.add_argument('--save_interval', type=int, default=500, help='Number of steps between saving')
+ parser.add_argument('--es_min_delta', type=float, default=0.0,
+ help='Early stopping\'s parameter: minimum change loss to qualify as an improvement')
+ parser.add_argument('--es_patience', type=int, default=0,
+ help='Early stopping\'s parameter: number of epochs with no improvement after which training will be stopped. Set to 0 to disable this technique.')
+ parser.add_argument('--data_path', type=str, default='datasets/', help='the root folder of dataset')
+ parser.add_argument('--log_path', type=str, default='logs/')
+ parser.add_argument('-w', '--load_weights', type=str, default=None,
+ help='whether to load weights from a checkpoint, set None to initialize, set \'last\' to load last checkpoint')
+ parser.add_argument('--saved_path', type=str, default='logs/')
+ parser.add_argument('--debug', type=boolean_string, default=False,
+ help='whether visualize the predicted boxes of training, '
+ 'the output images will be in test/')
+
+ args = parser.parse_args()
+ return args
+
+
+class ModelWithLoss(nn.Module):
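+    # computing the loss inside forward lets DataParallel spread loss computation
+    # across gpus instead of gathering everything on gpu0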
+ def __init__(self, model, debug=False):
+ super().__init__()
+ self.criterion = FocalLoss()
+ self.model = model
+ self.debug = debug
+
+ def forward(self, imgs, annotations, obj_list=None):
+ _, regression, classification, anchors = self.model(imgs)
+ if self.debug:
+ cls_loss, reg_loss = self.criterion(classification, regression, anchors, annotations,
+ imgs=imgs, obj_list=obj_list)
+ else:
+ cls_loss, reg_loss = self.criterion(classification, regression, anchors, annotations)
+ return cls_loss, reg_loss
+
+
+def train(opt):
+ params = Params(f'projects/{opt.project}.yml')
+
+ if params.num_gpus == 0:
+ os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
+
+ if torch.cuda.is_available():
+ torch.cuda.manual_seed(42)
+ else:
+ torch.manual_seed(42)
+
+ opt.saved_path = opt.saved_path + f'/{params.project_name}/'
+ opt.log_path = opt.log_path + f'/{params.project_name}/tensorboard/'
+ os.makedirs(opt.log_path, exist_ok=True)
+ os.makedirs(opt.saved_path, exist_ok=True)
+
+ training_params = {'batch_size': opt.batch_size,
+ 'shuffle': True,
+ 'drop_last': True,
+ 'collate_fn': collater,
+ 'num_workers': opt.num_workers}
+
+ val_params = {'batch_size': opt.batch_size,
+ 'shuffle': False,
+ 'drop_last': True,
+ 'collate_fn': collater,
+ 'num_workers': opt.num_workers}
+
+ input_sizes = [512, 640, 768, 896, 1024, 1280, 1280, 1536, 1536]
+ training_set = CocoDataset(root_dir=os.path.join(opt.data_path, params.project_name), set=params.train_set,
+ transform=transforms.Compose([Normalizer(mean=params.mean, std=params.std),
+ Augmenter(),
+ Resizer(input_sizes[opt.compound_coef])]))
+ training_generator = DataLoader(training_set, **training_params)
+
+ val_set = CocoDataset(root_dir=os.path.join(opt.data_path, params.project_name), set=params.val_set,
+ transform=transforms.Compose([Normalizer(mean=params.mean, std=params.std),
+ Resizer(input_sizes[opt.compound_coef])]))
+ val_generator = DataLoader(val_set, **val_params)
+
+ model = EfficientDetBackbone(num_classes=len(params.obj_list), compound_coef=opt.compound_coef,
+ ratios=eval(params.anchors_ratios), scales=eval(params.anchors_scales))
+
+ # load last weights
+ if opt.load_weights is not None:
+ if opt.load_weights.endswith('.pth'):
+ weights_path = opt.load_weights
+ else:
+ weights_path = get_last_weights(opt.saved_path)
+ try:
+ last_step = int(os.path.basename(weights_path).split('_')[-1].split('.')[0])
+        except ValueError:
+ last_step = 0
+
+ try:
+ ret = model.load_state_dict(torch.load(weights_path), strict=False)
+ except RuntimeError as e:
+ print(f'[Warning] Ignoring {e}')
+            print(
+                '[Warning] Don\'t panic if you see this, this might be because you loaded pretrained weights with a different number of classes. The rest of the weights should already be loaded.')
+
+ print(f'[Info] loaded weights: {os.path.basename(weights_path)}, resuming checkpoint from step: {last_step}')
+ else:
+ last_step = 0
+ print('[Info] initializing weights...')
+ init_weights(model)
+
+ # freeze backbone if train head_only
+ if opt.head_only:
+ def freeze_backbone(m):
+ classname = m.__class__.__name__
+ for ntl in ['EfficientNet', 'BiFPN']:
+ if ntl in classname:
+ for param in m.parameters():
+ param.requires_grad = False
+
+ model.apply(freeze_backbone)
+        print('[Info] froze backbone')
+
+ # https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
+    # apply sync_bn when using multiple gpus and the batch_size per gpu is lower than 4,
+    # useful when gpu memory is limited.
+    # when bn is effectively disabled, training becomes very unstable or slow to converge;
+    # applying sync_bn solves it
+    # by packing the mini-batches across all gpus into one batch for normalization, then sending the result back to each gpu.
+    # but it also slows down training a little bit.
+ if params.num_gpus > 1 and opt.batch_size // params.num_gpus < 4:
+ model.apply(replace_w_sync_bn)
+ use_sync_bn = True
+ else:
+ use_sync_bn = False
+
+ writer = SummaryWriter(opt.log_path + f'/{datetime.datetime.now().strftime("%Y%m%d-%H%M%S")}/')
+
+    # wrap the model with the loss function, to reduce the memory usage on gpu0 and speed up
+ model = ModelWithLoss(model, debug=opt.debug)
+
+ if params.num_gpus > 0:
+ model = model.cuda()
+ if params.num_gpus > 1:
+ model = CustomDataParallel(model, params.num_gpus)
+ if use_sync_bn:
+ patch_replication_callback(model)
+
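+    # adamw converges faster early on; the --optim help above suggests switching
+    # to sgd only for the very final stage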
+ if opt.optim == 'adamw':
+ optimizer = torch.optim.AdamW(model.parameters(), opt.lr)
+ else:
+ optimizer = torch.optim.SGD(model.parameters(), opt.lr, momentum=0.9, nesterov=True)
+
+ scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=3, verbose=True)
+
+ epoch = 0
+ best_loss = 1e5
+ best_epoch = 0
+ step = max(0, last_step)
+ model.train()
+
+ num_iter_per_epoch = len(training_generator)
+
+ try:
+ for epoch in range(opt.num_epochs):
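+            # when resuming from a checkpoint, skip whole epochs that the saved step already covers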
+ last_epoch = step // num_iter_per_epoch
+ if epoch < last_epoch:
+ continue
+
+ epoch_loss = []
+ progress_bar = tqdm(training_generator)
+ for iter, data in enumerate(progress_bar):
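+                # likewise, fast-forward iterations already trained within the resumed epoch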
+ if iter < step - last_epoch * num_iter_per_epoch:
+ progress_bar.update()
+ continue
+ try:
+ imgs = data['img']
+ annot = data['annot']
+
+ if params.num_gpus == 1:
+ # if only one gpu, just send it to cuda:0
+ # elif multiple gpus, send it to multiple gpus in CustomDataParallel, not here
+ imgs = imgs.cuda()
+ annot = annot.cuda()
+
+ optimizer.zero_grad()
+ cls_loss, reg_loss = model(imgs, annot, obj_list=params.obj_list)
+ cls_loss = cls_loss.mean()
+ reg_loss = reg_loss.mean()
+
+ loss = cls_loss + reg_loss
+ if loss == 0 or not torch.isfinite(loss):
+ continue
+
+ loss.backward()
+ # torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
+ optimizer.step()
+
+ epoch_loss.append(float(loss))
+
+ progress_bar.set_description(
+ 'Step: {}. Epoch: {}/{}. Iteration: {}/{}. Cls loss: {:.5f}. Reg loss: {:.5f}. Total loss: {:.5f}'.format(
+ step, epoch, opt.num_epochs, iter + 1, num_iter_per_epoch, cls_loss.item(),
+ reg_loss.item(), loss.item()))
+ writer.add_scalars('Loss', {'train': loss}, step)
+ writer.add_scalars('Regression_loss', {'train': reg_loss}, step)
+                writer.add_scalars('Classification_loss', {'train': cls_loss}, step)
+
+ # log learning_rate
+ current_lr = optimizer.param_groups[0]['lr']
+ writer.add_scalar('learning_rate', current_lr, step)
+
+ step += 1
+
+ if step % opt.save_interval == 0 and step > 0:
+ save_checkpoint(model, f'efficientdet-d{opt.compound_coef}_{epoch}_{step}.pth')
+ print('checkpoint...')
+
+ except Exception as e:
+ print('[Error]', traceback.format_exc())
+ print(e)
+ continue
+ scheduler.step(np.mean(epoch_loss))
+
+ if epoch % opt.val_interval == 0:
+ model.eval()
+ loss_regression_ls = []
+ loss_classification_ls = []
+ for iter, data in enumerate(val_generator):
+ with torch.no_grad():
+ imgs = data['img']
+ annot = data['annot']
+
+ if params.num_gpus == 1:
+ imgs = imgs.cuda()
+ annot = annot.cuda()
+
+ cls_loss, reg_loss = model(imgs, annot, obj_list=params.obj_list)
+ cls_loss = cls_loss.mean()
+ reg_loss = reg_loss.mean()
+
+ loss = cls_loss + reg_loss
+ if loss == 0 or not torch.isfinite(loss):
+ continue
+
+ loss_classification_ls.append(cls_loss.item())
+ loss_regression_ls.append(reg_loss.item())
+
+ cls_loss = np.mean(loss_classification_ls)
+ reg_loss = np.mean(loss_regression_ls)
+ loss = cls_loss + reg_loss
+
+ print(
+ 'Val. Epoch: {}/{}. Classification loss: {:1.5f}. Regression loss: {:1.5f}. Total loss: {:1.5f}'.format(
+ epoch, opt.num_epochs, cls_loss, reg_loss, loss))
+ writer.add_scalars('Loss', {'val': loss}, step)
+ writer.add_scalars('Regression_loss', {'val': reg_loss}, step)
+                writer.add_scalars('Classification_loss', {'val': cls_loss}, step)
+
+ if loss + opt.es_min_delta < best_loss:
+ best_loss = loss
+ best_epoch = epoch
+
+ save_checkpoint(model, f'efficientdet-d{opt.compound_coef}_{epoch}_{step}.pth')
+
+ model.train()
+
+ # Early stopping
+ if epoch - best_epoch > opt.es_patience > 0:
+ print('[Info] Stop training at epoch {}. The lowest loss achieved is {}'.format(epoch, best_loss))
+ break
+ except KeyboardInterrupt:
+ save_checkpoint(model, f'efficientdet-d{opt.compound_coef}_{epoch}_{step}.pth')
+ writer.close()
+ writer.close()
+
+
+def save_checkpoint(model, name):
+ if isinstance(model, CustomDataParallel):
+ torch.save(model.module.model.state_dict(), os.path.join(opt.saved_path, name))
+ else:
+ torch.save(model.model.state_dict(), os.path.join(opt.saved_path, name))
+
+
+if __name__ == '__main__':
+ opt = get_args()
+ train(opt)
diff --git a/efficient_det_ros/tutorial/train_shape.ipynb b/efficient_det_ros/tutorial/train_shape.ipynb
new file mode 100644
index 0000000..362d50b
--- /dev/null
+++ b/efficient_det_ros/tutorial/train_shape.ipynb
@@ -0,0 +1,643 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 2
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython2",
+ "version": "2.7.6"
+ },
+ "colab": {
+ "name": "train_shape.ipynb",
+ "provenance": []
+ },
+ "accelerator": "GPU"
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": true,
+ "pycharm": {
+ "name": "#%% md\n"
+ },
+ "id": "GI9KZ3F8TLSK",
+ "colab_type": "text"
+ },
+ "source": [
+ "# EfficientDet Training On A Custom Dataset\n",
+ "\n",
+ "\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false,
+ "pycharm": {
+ "name": "#%% md\n"
+ },
+ "id": "67-3S5_VTLSL",
+ "colab_type": "text"
+ },
+ "source": [
+ "## This tutorial will show you how to train a custom dataset.\n",
+ "\n",
+ "## For the sake of simplicity, I generated a dataset of different shapes, like rectangles, triangles, circles.\n",
+ "\n",
+ "## Please enable GPU support to accelerate on notebook setting if you are using colab.\n",
+ "\n",
+ "### 0. Install Requirements"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ },
+ "id": "90laRz20TLSN",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 952
+ },
+ "outputId": "cdd3d988-ed26-429a-c1aa-e5b8c6ee534c"
+ },
+ "source": [
+ "!pip install pycocotools numpy==1.16.0 opencv-python tqdm tensorboard tensorboardX pyyaml webcolors matplotlib\n",
+ "!pip install torch==1.4.0\n",
+ "!pip install torchvision==0.5.0"
+ ],
+ "execution_count": 1,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "text": [
+ "Requirement already satisfied: pycocotools in /usr/local/lib/python3.6/dist-packages (2.0.0)\n",
+ "Collecting numpy==1.16.0\n",
+ "\u001b[?25l Downloading https://files.pythonhosted.org/packages/7b/74/54c5f9bb9bd4dae27a61ec1b39076a39d359b3fb7ba15da79ef23858a9d8/numpy-1.16.0-cp36-cp36m-manylinux1_x86_64.whl (17.3MB)\n",
+ "\u001b[K |████████████████████████████████| 17.3MB 215kB/s \n",
+ "\u001b[?25hRequirement already satisfied: opencv-python in /usr/local/lib/python3.6/dist-packages (4.1.2.30)\n",
+ "Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (4.41.1)\n",
+ "Requirement already satisfied: tensorboard in /usr/local/lib/python3.6/dist-packages (2.2.2)\n",
+ "Collecting tensorboardX\n",
+ "\u001b[?25l Downloading https://files.pythonhosted.org/packages/35/f1/5843425495765c8c2dd0784a851a93ef204d314fc87bcc2bbb9f662a3ad1/tensorboardX-2.0-py2.py3-none-any.whl (195kB)\n",
+ "\u001b[K |████████████████████████████████| 204kB 38.5MB/s \n",
+ "\u001b[?25hRequirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (3.13)\n",
+ "Collecting webcolors\n",
+ " Downloading https://files.pythonhosted.org/packages/12/05/3350559de9714b202e443a9e6312937341bd5f79f4e4f625744295e7dd17/webcolors-1.11.1-py3-none-any.whl\n",
+ "Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (3.2.1)\n",
+ "Requirement already satisfied: wheel>=0.26; python_version >= \"3\" in /usr/local/lib/python3.6/dist-packages (from tensorboard) (0.34.2)\n",
+ "Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard) (2.23.0)\n",
+ "Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard) (1.6.0.post3)\n",
+ "Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard) (47.1.1)\n",
+ "Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard) (1.12.0)\n",
+ "Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard) (0.4.1)\n",
+ "Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard) (3.10.0)\n",
+ "Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard) (1.0.1)\n",
+ "Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.6/dist-packages (from tensorboard) (0.9.0)\n",
+ "Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard) (3.2.2)\n",
+ "Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard) (1.29.0)\n",
+ "Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard) (1.17.2)\n",
+ "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (0.10.0)\n",
+ "Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (2.4.7)\n",
+ "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (1.2.0)\n",
+ "Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (2.8.1)\n",
+ "Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard) (2.9)\n",
+ "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard) (2020.4.5.2)\n",
+ "Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard) (1.24.3)\n",
+ "Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard) (3.0.4)\n",
+ "Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard) (1.3.0)\n",
+ "Requirement already satisfied: importlib-metadata; python_version < \"3.8\" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard) (1.6.1)\n",
+ "Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard) (0.2.8)\n",
+ "Requirement already satisfied: rsa<5,>=3.1.4; python_version >= \"3\" in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard) (4.6)\n",
+ "Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard) (4.1.0)\n",
+ "Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard) (3.1.0)\n",
+ "Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < \"3.8\"->markdown>=2.6.8->tensorboard) (3.1.0)\n",
+ "Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.6/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard) (0.4.8)\n",
+ "\u001b[31mERROR: umap-learn 0.4.4 has requirement numpy>=1.17, but you'll have numpy 1.16.0 which is incompatible.\u001b[0m\n",
+ "\u001b[31mERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.\u001b[0m\n",
+ "\u001b[31mERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.\u001b[0m\n",
+ "Installing collected packages: numpy, tensorboardX, webcolors\n",
+ " Found existing installation: numpy 1.18.5\n",
+ " Uninstalling numpy-1.18.5:\n",
+ " Successfully uninstalled numpy-1.18.5\n",
+ "Successfully installed numpy-1.16.0 tensorboardX-2.0 webcolors-1.11.1\n"
+ ],
+ "name": "stdout"
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.colab-display-data+json": {
+ "pip_warning": {
+ "packages": [
+ "numpy"
+ ]
+ }
+ }
+ },
+ "metadata": {
+ "tags": []
+ }
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false,
+ "pycharm": {
+ "name": "#%% md\n"
+ },
+ "id": "-R5C4DaETLSS",
+ "colab_type": "text"
+ },
+ "source": [
+ "### 1. Prepare Custom Dataset/Pretrained Weights (Skip this part if you already have datasets and weights of your own)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ },
+ "id": "JmCQj3rhTLSS",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 921
+ },
+ "outputId": "0842b828-a6bf-4bd9-82ac-cc97d12aa9dd"
+ },
+ "source": [
+ "import os\n",
+ "import sys\n",
+ "if \"projects\" not in os.getcwd():\n",
+ " !git clone --depth 1 https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch\n",
+ " os.chdir('Yet-Another-EfficientDet-Pytorch')\n",
+ " sys.path.append('.')\n",
+ "else:\n",
+ " !git pull\n",
+ "\n",
+ "# download and unzip dataset\n",
+ "! mkdir datasets\n",
+ "! wget https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch/releases/download/1.1/dataset_shape.tar.gz\n",
+ "! tar xzf dataset_shape.tar.gz\n",
+ "\n",
+ "# download pretrained weights\n",
+ "! mkdir weights\n",
+ "! wget https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch/releases/download/1.0/efficientdet-d0.pth -O weights/efficientdet-d0.pth\n",
+ "\n",
+ "# prepare project file projects/shape.yml\n",
+ "# showing its contents here\n",
+ "! cat projects/shape.yml"
+ ],
+ "execution_count": 2,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "text": [
+ "Cloning into 'Yet-Another-EfficientDet-Pytorch'...\n",
+ "remote: Enumerating objects: 43, done.\u001b[K\n",
+ "remote: Counting objects: 100% (43/43), done.\u001b[K\n",
+ "remote: Compressing objects: 100% (39/39), done.\u001b[K\n",
+ "remote: Total 43 (delta 3), reused 22 (delta 1), pack-reused 0\u001b[K\n",
+ "Unpacking objects: 100% (43/43), done.\n",
+ "--2020-06-18 02:41:28-- https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch/releases/download/1.1/dataset_shape.tar.gz\n",
+ "Resolving github.com (github.com)... 140.82.118.4\n",
+ "Connecting to github.com (github.com)|140.82.118.4|:443... connected.\n",
+ "HTTP request sent, awaiting response... 302 Found\n",
+ "Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/253385242/b4de2a00-7e55-11ea-89ac-50cd8071e6ce?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20200618%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200618T024128Z&X-Amz-Expires=300&X-Amz-Signature=2584a0dac8cf892da56cdf5d4845131e4346c765c3b6afae35879931b65f4e4e&X-Amz-SignedHeaders=host&actor_id=0&repo_id=253385242&response-content-disposition=attachment%3B%20filename%3Ddataset_shape.tar.gz&response-content-type=application%2Foctet-stream [following]\n",
+ "--2020-06-18 02:41:28-- https://github-production-release-asset-2e65be.s3.amazonaws.com/253385242/b4de2a00-7e55-11ea-89ac-50cd8071e6ce?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20200618%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200618T024128Z&X-Amz-Expires=300&X-Amz-Signature=2584a0dac8cf892da56cdf5d4845131e4346c765c3b6afae35879931b65f4e4e&X-Amz-SignedHeaders=host&actor_id=0&repo_id=253385242&response-content-disposition=attachment%3B%20filename%3Ddataset_shape.tar.gz&response-content-type=application%2Foctet-stream\n",
+ "Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.217.37.92\n",
+ "Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.217.37.92|:443... connected.\n",
+ "HTTP request sent, awaiting response... 200 OK\n",
+ "Length: 5770263 (5.5M) [application/octet-stream]\n",
+ "Saving to: ‘dataset_shape.tar.gz’\n",
+ "\n",
+ "dataset_shape.tar.g 100%[===================>] 5.50M 7.61MB/s in 0.7s \n",
+ "\n",
+ "2020-06-18 02:41:29 (7.61 MB/s) - ‘dataset_shape.tar.gz’ saved [5770263/5770263]\n",
+ "\n",
+ "--2020-06-18 02:41:34-- https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch/releases/download/1.0/efficientdet-d0.pth\n",
+ "Resolving github.com (github.com)... 140.82.118.4\n",
+ "Connecting to github.com (github.com)|140.82.118.4|:443... connected.\n",
+ "HTTP request sent, awaiting response... 302 Found\n",
+ "Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/253385242/9b9d2100-791d-11ea-80b2-d35899cf95fe?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20200618%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200618T024135Z&X-Amz-Expires=300&X-Amz-Signature=c4d613ce694cbb959c9b5bec39f9e7ae9e57e90262ffee0f8d7c8c847fa1f4e5&X-Amz-SignedHeaders=host&actor_id=0&repo_id=253385242&response-content-disposition=attachment%3B%20filename%3Defficientdet-d0.pth&response-content-type=application%2Foctet-stream [following]\n",
+ "--2020-06-18 02:41:35-- https://github-production-release-asset-2e65be.s3.amazonaws.com/253385242/9b9d2100-791d-11ea-80b2-d35899cf95fe?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20200618%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200618T024135Z&X-Amz-Expires=300&X-Amz-Signature=c4d613ce694cbb959c9b5bec39f9e7ae9e57e90262ffee0f8d7c8c847fa1f4e5&X-Amz-SignedHeaders=host&actor_id=0&repo_id=253385242&response-content-disposition=attachment%3B%20filename%3Defficientdet-d0.pth&response-content-type=application%2Foctet-stream\n",
+ "Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.82.216\n",
+ "Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.82.216|:443... connected.\n",
+ "HTTP request sent, awaiting response... 200 OK\n",
+ "Length: 15862583 (15M) [application/octet-stream]\n",
+ "Saving to: ‘weights/efficientdet-d0.pth’\n",
+ "\n",
+ "weights/efficientde 100%[===================>] 15.13M 15.1MB/s in 1.0s \n",
+ "\n",
+ "2020-06-18 02:41:36 (15.1 MB/s) - ‘weights/efficientdet-d0.pth’ saved [15862583/15862583]\n",
+ "\n",
+ "project_name: shape # also the folder name of the dataset that under data_path folder\n",
+ "train_set: train\n",
+ "val_set: val\n",
+ "num_gpus: 1\n",
+ "\n",
+ "# mean and std in RGB order, actually this part should remain unchanged as long as your dataset is similar to coco.\n",
+ "mean: [0.485, 0.456, 0.406]\n",
+ "std: [0.229, 0.224, 0.225]\n",
+ "\n",
+ "# this anchor is adapted to the dataset\n",
+ "anchors_scales: '[2 ** 0, 2 ** (1.0 / 3.0), 2 ** (2.0 / 3.0)]'\n",
+ "anchors_ratios: '[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)]'\n",
+ "\n",
+ "obj_list: ['rectangle', 'circle']"
+ ],
+ "name": "stdout"
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false,
+ "id": "7Q2onXNZTLSV",
+ "colab_type": "text"
+ },
+ "source": [
+ "### 2. Training"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ },
+ "id": "a-eznEu5TLSW",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 1000
+ },
+ "outputId": "0a2b3a08-39e1-45c0-8c4c-6a48c22611ba"
+ },
+ "source": [
+ "# consider this is a simple dataset, train head will be enough.\n",
+ "! python train.py -c 0 -p shape --head_only True --lr 1e-3 --batch_size 32 --load_weights weights/efficientdet-d0.pth --num_epochs 50 --save_interval 100\n",
+ "\n",
+ "# the loss will be high at first\n",
+ "# don't panic, be patient,\n",
+ "# just wait for a little bit longer"
+ ],
+ "execution_count": 3,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "text": [
+ "loading annotations into memory...\n",
+ "Done (t=0.02s)\n",
+ "creating index...\n",
+ "index created!\n",
+ "loading annotations into memory...\n",
+ "Done (t=0.00s)\n",
+ "creating index...\n",
+ "index created!\n",
+ "[Warning] Ignoring Error(s) in loading state_dict for EfficientDetBackbone:\n",
+ "\tsize mismatch for classifier.header.pointwise_conv.conv.weight: copying a param with shape torch.Size([810, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([18, 64, 1, 1]).\n",
+ "\tsize mismatch for classifier.header.pointwise_conv.conv.bias: copying a param with shape torch.Size([810]) from checkpoint, the shape in current model is torch.Size([18]).\n",
+ "[Warning] Don't panic if you see this, this might be because you load a pretrained weights with different number of classes. The rest of the weights should be loaded already.\n",
+ "[Info] loaded weights: efficientdet-d0.pth, resuming checkpoint from step: 0\n",
+ "[Info] freezed backbone\n",
+ "Step: 27. Epoch: 0/50. Iteration: 28/28. Cls loss: 26.29772. Reg loss: 0.01289. Total loss: 26.31061: 100% 28/28 [00:46<00:00, 1.66s/it]\n",
+ "Val. Epoch: 0/50. Classification loss: 12.20426. Regression loss: 0.01610. Total loss: 12.22037\n",
+ "Step: 55. Epoch: 1/50. Iteration: 28/28. Cls loss: 3.66639. Reg loss: 0.01443. Total loss: 3.68082: 100% 28/28 [00:46<00:00, 1.65s/it]\n",
+ "Val. Epoch: 1/50. Classification loss: 3.10739. Regression loss: 0.01396. Total loss: 3.12135\n",
+ "Step: 83. Epoch: 2/50. Iteration: 28/28. Cls loss: 2.61804. Reg loss: 0.01078. Total loss: 2.62881: 100% 28/28 [00:46<00:00, 1.66s/it]\n",
+ "Val. Epoch: 2/50. Classification loss: 1.99466. Regression loss: 0.01278. Total loss: 2.00744\n",
+ "Step: 111. Epoch: 3/50. Iteration: 28/28. Cls loss: 1.44927. Reg loss: 0.01206. Total loss: 1.46133: 100% 28/28 [00:46<00:00, 1.66s/it]\n",
+ "Val. Epoch: 3/50. Classification loss: 1.42343. Regression loss: 0.01165. Total loss: 1.43508\n",
+ "Step: 139. Epoch: 4/50. Iteration: 28/28. Cls loss: 1.44247. Reg loss: 0.01195. Total loss: 1.45442: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 4/50. Classification loss: 1.15894. Regression loss: 0.01040. Total loss: 1.16934\n",
+ "Step: 167. Epoch: 5/50. Iteration: 28/28. Cls loss: 0.96989. Reg loss: 0.01074. Total loss: 0.98064: 100% 28/28 [00:46<00:00, 1.66s/it]\n",
+ "Val. Epoch: 5/50. Classification loss: 0.94637. Regression loss: 0.00966. Total loss: 0.95603\n",
+ "Step: 195. Epoch: 6/50. Iteration: 28/28. Cls loss: 0.90316. Reg loss: 0.00981. Total loss: 0.91297: 100% 28/28 [00:46<00:00, 1.66s/it]\n",
+ "Val. Epoch: 6/50. Classification loss: 0.80626. Regression loss: 0.00944. Total loss: 0.81570\n",
+ "Step: 223. Epoch: 7/50. Iteration: 28/28. Cls loss: 0.83105. Reg loss: 0.01052. Total loss: 0.84157: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 7/50. Classification loss: 0.69999. Regression loss: 0.00907. Total loss: 0.70907\n",
+ "Step: 251. Epoch: 8/50. Iteration: 28/28. Cls loss: 0.68107. Reg loss: 0.01090. Total loss: 0.69197: 100% 28/28 [00:46<00:00, 1.68s/it]\n",
+ "Val. Epoch: 8/50. Classification loss: 0.62273. Regression loss: 0.00883. Total loss: 0.63156\n",
+ "Step: 279. Epoch: 9/50. Iteration: 28/28. Cls loss: 0.63515. Reg loss: 0.01228. Total loss: 0.64743: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 9/50. Classification loss: 0.55948. Regression loss: 0.00851. Total loss: 0.56798\n",
+ "Step: 307. Epoch: 10/50. Iteration: 28/28. Cls loss: 0.50954. Reg loss: 0.01053. Total loss: 0.52007: 100% 28/28 [00:47<00:00, 1.68s/it]\n",
+ "Val. Epoch: 10/50. Classification loss: 0.50945. Regression loss: 0.00836. Total loss: 0.51781\n",
+ "Step: 335. Epoch: 11/50. Iteration: 28/28. Cls loss: 0.52033. Reg loss: 0.00733. Total loss: 0.52766: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 11/50. Classification loss: 0.46788. Regression loss: 0.00800. Total loss: 0.47587\n",
+ "Step: 363. Epoch: 12/50. Iteration: 28/28. Cls loss: 0.49584. Reg loss: 0.00927. Total loss: 0.50511: 100% 28/28 [00:47<00:00, 1.68s/it]\n",
+ "Val. Epoch: 12/50. Classification loss: 0.43143. Regression loss: 0.00792. Total loss: 0.43935\n",
+ "Step: 391. Epoch: 13/50. Iteration: 28/28. Cls loss: 0.45326. Reg loss: 0.00893. Total loss: 0.46219: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 13/50. Classification loss: 0.40211. Regression loss: 0.00764. Total loss: 0.40974\n",
+ "Step: 419. Epoch: 14/50. Iteration: 28/28. Cls loss: 0.40421. Reg loss: 0.00882. Total loss: 0.41303: 100% 28/28 [00:47<00:00, 1.68s/it]\n",
+ "Val. Epoch: 14/50. Classification loss: 0.37800. Regression loss: 0.00736. Total loss: 0.38537\n",
+ "Step: 447. Epoch: 15/50. Iteration: 28/28. Cls loss: 0.38576. Reg loss: 0.00615. Total loss: 0.39191: 100% 28/28 [00:47<00:00, 1.68s/it]\n",
+ "Val. Epoch: 15/50. Classification loss: 0.35435. Regression loss: 0.00746. Total loss: 0.36181\n",
+ "Step: 475. Epoch: 16/50. Iteration: 28/28. Cls loss: 0.38551. Reg loss: 0.01182. Total loss: 0.39733: 100% 28/28 [00:46<00:00, 1.68s/it]\n",
+ "Val. Epoch: 16/50. Classification loss: 0.33601. Regression loss: 0.00737. Total loss: 0.34338\n",
+ "Step: 499. Epoch: 17/50. Iteration: 24/28. Cls loss: 0.35644. Reg loss: 0.00668. Total loss: 0.36312: 82% 23/28 [00:41<00:05, 1.15s/it]checkpoint...\n",
+ "Step: 503. Epoch: 17/50. Iteration: 28/28. Cls loss: 0.35166. Reg loss: 0.00812. Total loss: 0.35978: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 17/50. Classification loss: 0.31798. Regression loss: 0.00725. Total loss: 0.32523\n",
+ "Step: 531. Epoch: 18/50. Iteration: 28/28. Cls loss: 0.35137. Reg loss: 0.01101. Total loss: 0.36238: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 18/50. Classification loss: 0.30364. Regression loss: 0.00718. Total loss: 0.31082\n",
+ "Step: 559. Epoch: 19/50. Iteration: 28/28. Cls loss: 0.29872. Reg loss: 0.00653. Total loss: 0.30525: 100% 28/28 [00:46<00:00, 1.68s/it]\n",
+ "Val. Epoch: 19/50. Classification loss: 0.29044. Regression loss: 0.00733. Total loss: 0.29776\n",
+ "Step: 587. Epoch: 20/50. Iteration: 28/28. Cls loss: 0.30086. Reg loss: 0.00810. Total loss: 0.30896: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 20/50. Classification loss: 0.27783. Regression loss: 0.00728. Total loss: 0.28511\n",
+ "Step: 615. Epoch: 21/50. Iteration: 28/28. Cls loss: 0.34610. Reg loss: 0.00809. Total loss: 0.35419: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 21/50. Classification loss: 0.26462. Regression loss: 0.00711. Total loss: 0.27173\n",
+ "Step: 643. Epoch: 22/50. Iteration: 28/28. Cls loss: 0.28175. Reg loss: 0.00807. Total loss: 0.28981: 100% 28/28 [00:46<00:00, 1.68s/it]\n",
+ "Val. Epoch: 22/50. Classification loss: 0.25356. Regression loss: 0.00716. Total loss: 0.26071\n",
+ "Step: 671. Epoch: 23/50. Iteration: 28/28. Cls loss: 0.27373. Reg loss: 0.00875. Total loss: 0.28248: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 23/50. Classification loss: 0.24350. Regression loss: 0.00737. Total loss: 0.25087\n",
+ "Step: 699. Epoch: 24/50. Iteration: 28/28. Cls loss: 0.25727. Reg loss: 0.00815. Total loss: 0.26542: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 24/50. Classification loss: 0.23465. Regression loss: 0.00712. Total loss: 0.24177\n",
+ "Step: 727. Epoch: 25/50. Iteration: 28/28. Cls loss: 0.23017. Reg loss: 0.01109. Total loss: 0.24125: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 25/50. Classification loss: 0.22561. Regression loss: 0.00716. Total loss: 0.23277\n",
+ "Step: 755. Epoch: 26/50. Iteration: 28/28. Cls loss: 0.22237. Reg loss: 0.00591. Total loss: 0.22828: 100% 28/28 [00:46<00:00, 1.68s/it]\n",
+ "Val. Epoch: 26/50. Classification loss: 0.21848. Regression loss: 0.00694. Total loss: 0.22542\n",
+ "Step: 783. Epoch: 27/50. Iteration: 28/28. Cls loss: 0.25054. Reg loss: 0.00917. Total loss: 0.25971: 100% 28/28 [00:46<00:00, 1.68s/it]\n",
+ "Val. Epoch: 27/50. Classification loss: 0.21120. Regression loss: 0.00699. Total loss: 0.21819\n",
+ "Step: 811. Epoch: 28/50. Iteration: 28/28. Cls loss: 0.22907. Reg loss: 0.00829. Total loss: 0.23737: 100% 28/28 [00:47<00:00, 1.68s/it]\n",
+ "Val. Epoch: 28/50. Classification loss: 0.20494. Regression loss: 0.00701. Total loss: 0.21195\n",
+ "Step: 839. Epoch: 29/50. Iteration: 28/28. Cls loss: 0.26674. Reg loss: 0.00852. Total loss: 0.27526: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 29/50. Classification loss: 0.19854. Regression loss: 0.00670. Total loss: 0.20523\n",
+ "Step: 867. Epoch: 30/50. Iteration: 28/28. Cls loss: 0.19063. Reg loss: 0.00593. Total loss: 0.19656: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 30/50. Classification loss: 0.19303. Regression loss: 0.00679. Total loss: 0.19982\n",
+ "Step: 895. Epoch: 31/50. Iteration: 28/28. Cls loss: 0.23191. Reg loss: 0.00678. Total loss: 0.23869: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 31/50. Classification loss: 0.18698. Regression loss: 0.00675. Total loss: 0.19373\n",
+ "Step: 923. Epoch: 32/50. Iteration: 28/28. Cls loss: 0.18452. Reg loss: 0.00685. Total loss: 0.19137: 100% 28/28 [00:46<00:00, 1.68s/it]\n",
+ "Val. Epoch: 32/50. Classification loss: 0.18236. Regression loss: 0.00679. Total loss: 0.18915\n",
+ "Step: 951. Epoch: 33/50. Iteration: 28/28. Cls loss: 0.20275. Reg loss: 0.00758. Total loss: 0.21033: 100% 28/28 [00:47<00:00, 1.68s/it]\n",
+ "Val. Epoch: 33/50. Classification loss: 0.17713. Regression loss: 0.00692. Total loss: 0.18405\n",
+ "Step: 979. Epoch: 34/50. Iteration: 28/28. Cls loss: 0.18318. Reg loss: 0.00577. Total loss: 0.18895: 100% 28/28 [00:47<00:00, 1.68s/it]\n",
+ "Val. Epoch: 34/50. Classification loss: 0.17203. Regression loss: 0.00657. Total loss: 0.17860\n",
+ "Step: 999. Epoch: 35/50. Iteration: 20/28. Cls loss: 0.18499. Reg loss: 0.00838. Total loss: 0.19337: 68% 19/28 [00:37<00:10, 1.17s/it]checkpoint...\n",
+ "Step: 1007. Epoch: 35/50. Iteration: 28/28. Cls loss: 0.18154. Reg loss: 0.00630. Total loss: 0.18784: 100% 28/28 [00:46<00:00, 1.68s/it]\n",
+ "Val. Epoch: 35/50. Classification loss: 0.16700. Regression loss: 0.00666. Total loss: 0.17366\n",
+ "Step: 1035. Epoch: 36/50. Iteration: 28/28. Cls loss: 0.18250. Reg loss: 0.00611. Total loss: 0.18861: 100% 28/28 [00:47<00:00, 1.68s/it]\n",
+ "Val. Epoch: 36/50. Classification loss: 0.16309. Regression loss: 0.00679. Total loss: 0.16989\n",
+ "Step: 1063. Epoch: 37/50. Iteration: 28/28. Cls loss: 0.15622. Reg loss: 0.00623. Total loss: 0.16245: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 37/50. Classification loss: 0.15933. Regression loss: 0.00666. Total loss: 0.16599\n",
+ "Step: 1091. Epoch: 38/50. Iteration: 28/28. Cls loss: 0.14960. Reg loss: 0.00556. Total loss: 0.15515: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 38/50. Classification loss: 0.15517. Regression loss: 0.00683. Total loss: 0.16201\n",
+ "Step: 1119. Epoch: 39/50. Iteration: 28/28. Cls loss: 0.17928. Reg loss: 0.00657. Total loss: 0.18585: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 39/50. Classification loss: 0.15171. Regression loss: 0.00657. Total loss: 0.15828\n",
+ "Step: 1147. Epoch: 40/50. Iteration: 28/28. Cls loss: 0.17436. Reg loss: 0.00468. Total loss: 0.17904: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 40/50. Classification loss: 0.14942. Regression loss: 0.00667. Total loss: 0.15609\n",
+ "Step: 1175. Epoch: 41/50. Iteration: 28/28. Cls loss: 0.16362. Reg loss: 0.00781. Total loss: 0.17143: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 41/50. Classification loss: 0.14597. Regression loss: 0.00686. Total loss: 0.15283\n",
+ "Step: 1203. Epoch: 42/50. Iteration: 28/28. Cls loss: 0.17241. Reg loss: 0.00837. Total loss: 0.18078: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 42/50. Classification loss: 0.14308. Regression loss: 0.00662. Total loss: 0.14969\n",
+ "Step: 1231. Epoch: 43/50. Iteration: 28/28. Cls loss: 0.17507. Reg loss: 0.00802. Total loss: 0.18309: 100% 28/28 [00:46<00:00, 1.68s/it]\n",
+ "Val. Epoch: 43/50. Classification loss: 0.13933. Regression loss: 0.00666. Total loss: 0.14599\n",
+ "Step: 1259. Epoch: 44/50. Iteration: 28/28. Cls loss: 0.17234. Reg loss: 0.00580. Total loss: 0.17814: 100% 28/28 [00:46<00:00, 1.68s/it]\n",
+ "Val. Epoch: 44/50. Classification loss: 0.13601. Regression loss: 0.00647. Total loss: 0.14247\n",
+ "Step: 1287. Epoch: 45/50. Iteration: 28/28. Cls loss: 0.16627. Reg loss: 0.00595. Total loss: 0.17222: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 45/50. Classification loss: 0.13402. Regression loss: 0.00653. Total loss: 0.14055\n",
+ "Step: 1315. Epoch: 46/50. Iteration: 28/28. Cls loss: 0.17035. Reg loss: 0.00682. Total loss: 0.17717: 100% 28/28 [00:46<00:00, 1.68s/it]\n",
+ "Val. Epoch: 46/50. Classification loss: 0.13196. Regression loss: 0.00638. Total loss: 0.13834\n",
+ "Step: 1343. Epoch: 47/50. Iteration: 28/28. Cls loss: 0.12934. Reg loss: 0.00527. Total loss: 0.13461: 100% 28/28 [00:46<00:00, 1.68s/it]\n",
+ "Val. Epoch: 47/50. Classification loss: 0.12878. Regression loss: 0.00664. Total loss: 0.13542\n",
+ "Step: 1371. Epoch: 48/50. Iteration: 28/28. Cls loss: 0.12199. Reg loss: 0.00390. Total loss: 0.12589: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 48/50. Classification loss: 0.12630. Regression loss: 0.00681. Total loss: 0.13311\n",
+ "Step: 1399. Epoch: 49/50. Iteration: 28/28. Cls loss: 0.13337. Reg loss: 0.00523. Total loss: 0.13859: 100% 28/28 [00:46<00:00, 1.67s/it]\n",
+ "Val. Epoch: 49/50. Classification loss: 0.12423. Regression loss: 0.00635. Total loss: 0.13058\n"
+ ],
+ "name": "stdout"
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false,
+ "id": "05mjrGRETLSZ",
+ "colab_type": "text"
+ },
+ "source": [
+ "### 3. Evaluation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ },
+ "id": "9yzNyaSxTLSZ",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 493
+ },
+ "outputId": "f38ef0e7-bf30-428a-cf93-43e43a60fdae"
+ },
+ "source": [
+ "#get latest weight file\n",
+ "%cd logs/shape\n",
+ "weight_file = !ls -Art | grep efficientdet\n",
+ "%cd ../..\n",
+ "\n",
+ "#uncomment the next line to specify a weight file\n",
+ "#weight_file[-1] = 'efficientdet-d0_49_1400.pth'\n",
+ "\n",
+ "! python coco_eval.py -c 0 -p shape -w \"logs/shape/{weight_file[-1]}\""
+ ],
+ "execution_count": 4,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "text": [
+ "running coco-style evaluation on project shape, weights logs/shape/efficientdet-d0_49_1400.pth...\n",
+ "loading annotations into memory...\n",
+ "Done (t=0.00s)\n",
+ "creating index...\n",
+ "index created!\n",
+ "100% 100/100 [00:08<00:00, 11.80it/s]\n",
+ "Loading and preparing results...\n",
+ "DONE (t=0.63s)\n",
+ "creating index...\n",
+ "index created!\n",
+ "BBox\n",
+ "Running per image evaluation...\n",
+ "Evaluate annotation type *bbox*\n",
+ "DONE (t=1.46s).\n",
+ "Accumulating evaluation results...\n",
+ "DONE (t=0.14s).\n",
+ " Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.781\n",
+ " Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.947\n",
+ " Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.868\n",
+ " Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000\n",
+ " Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.794\n",
+ " Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.740\n",
+ " Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.470\n",
+ " Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.841\n",
+ " Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.843\n",
+ " Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000\n",
+ " Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.850\n",
+ " Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.819\n"
+ ],
+ "name": "stdout"
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false,
+ "pycharm": {
+ "name": "#%% md\n"
+ },
+ "id": "zhV3bNF3TLSc",
+ "colab_type": "text"
+ },
+ "source": [
+ "### 4. Visualize"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ },
+ "id": "uEDHMAIJTLSc",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 269
+ },
+ "outputId": "cbeedcbd-cd4f-41a6-e0d6-875398081cd8"
+ },
+ "source": [
+ "import torch\n",
+ "from torch.backends import cudnn\n",
+ "\n",
+ "from backbone import EfficientDetBackbone\n",
+ "import cv2\n",
+ "import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "\n",
+ "from efficientdet.utils import BBoxTransform, ClipBoxes\n",
+ "from utils.utils import preprocess, invert_affine, postprocess\n",
+ "\n",
+ "compound_coef = 0\n",
+ "force_input_size = None # set None to use default size\n",
+ "img_path = 'datasets/shape/val/999.jpg'\n",
+ "\n",
+ "threshold = 0.2\n",
+ "iou_threshold = 0.2\n",
+ "\n",
+ "use_cuda = True\n",
+ "use_float16 = False\n",
+ "cudnn.fastest = True\n",
+ "cudnn.benchmark = True\n",
+ "\n",
+ "obj_list = ['rectangle', 'circle']\n",
+ "\n",
+ "# tf bilinear interpolation is different from any other's, just make do\n",
+ "input_sizes = [512, 640, 768, 896, 1024, 1280, 1280, 1536]\n",
+ "input_size = input_sizes[compound_coef] if force_input_size is None else force_input_size\n",
+ "ori_imgs, framed_imgs, framed_metas = preprocess(img_path, max_size=input_size)\n",
+ "\n",
+ "if use_cuda:\n",
+ " x = torch.stack([torch.from_numpy(fi).cuda() for fi in framed_imgs], 0)\n",
+ "else:\n",
+ " x = torch.stack([torch.from_numpy(fi) for fi in framed_imgs], 0)\n",
+ "\n",
+ "x = x.to(torch.float32 if not use_float16 else torch.float16).permute(0, 3, 1, 2)\n",
+ "\n",
+ "model = EfficientDetBackbone(compound_coef=compound_coef, num_classes=len(obj_list),\n",
+ "\n",
+ " # replace this part with your project's anchor config\n",
+ " ratios=[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)],\n",
+ " scales=[2 ** 0, 2 ** (1.0 / 3.0), 2 ** (2.0 / 3.0)])\n",
+ "\n",
+ "model.load_state_dict(torch.load('logs/shape/'+weight_file[-1]))\n",
+ "model.requires_grad_(False)\n",
+ "model.eval()\n",
+ "\n",
+ "if use_cuda:\n",
+ " model = model.cuda()\n",
+ "if use_float16:\n",
+ " model = model.half()\n",
+ "\n",
+ "with torch.no_grad():\n",
+ " features, regression, classification, anchors = model(x)\n",
+ "\n",
+ " regressBoxes = BBoxTransform()\n",
+ " clipBoxes = ClipBoxes()\n",
+ "\n",
+ " out = postprocess(x,\n",
+ " anchors, regression, classification,\n",
+ " regressBoxes, clipBoxes,\n",
+ " threshold, iou_threshold)\n",
+ "\n",
+ "out = invert_affine(framed_metas, out)\n",
+ "\n",
+ "for i in range(len(ori_imgs)):\n",
+ " if len(out[i]['rois']) == 0:\n",
+ " continue\n",
+ " ori_imgs[i] = ori_imgs[i].copy()\n",
+ " for j in range(len(out[i]['rois'])):\n",
+ " (x1, y1, x2, y2) = out[i]['rois'][j].astype(np.int)\n",
+ " cv2.rectangle(ori_imgs[i], (x1, y1), (x2, y2), (255, 255, 0), 2)\n",
+ " obj = obj_list[out[i]['class_ids'][j]]\n",
+ " score = float(out[i]['scores'][j])\n",
+ "\n",
+ " cv2.putText(ori_imgs[i], '{}, {:.3f}'.format(obj, score),\n",
+ " (x1, y1 + 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5,\n",
+ " (255, 255, 0), 1)\n",
+ "\n",
+ " plt.imshow(ori_imgs[i])\n",
+ "\n"
+ ],
+ "execution_count": 5,
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAQYAAAD8CAYAAACVSwr3AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nO2deZQd1X3nP796S+/dUqtb+w5CIGxAIINkwMZgMOAF2/EaEmMfcsjYztiO45PByZwkM5M5J8nMxIknjj0cO2OcOAYb2wPYJJgADthmExYIkAC19r1braVb6uW9V/WbP+5971V3taRuqV/3a/r30Xnqqlu3qn5vud+6y+/+rqgqhmEYcYLJNsAwjOrDhMEwjAQmDIZhJDBhMAwjgQmDYRgJTBgMw0hQEWEQkRtF5DUR6RCROytxD8MwKoeMtx+DiKSA14HrgT3Ac8DHVXXTuN7IMIyKUYkaw+VAh6puU9UccA9wSwXuYxhGhUhX4JoLgN2x/T3AFac6oa2tTZcuXVoBUwzDKPL8888fUtX20eSthDCMChG5A7gDYPHixaxfv36yTDGMaYGI7Bxt3ko0JfYCi2L7C33aEFT1LlVdo6pr2ttHJWKGYUwQlRCG54AVIrJMRLLAx4AHKnAfwzAqxLg3JVS1ICK/BzwMpIB/UNVXxvs+hmFUjor0MajqQ8BDlbi2YRiVxzwfDcNIYMJgGEYCEwbDMBKYMBiGkcCEwTCMBCYMhmEkMGEwDCOBCYNhGAlMGAzDSGDCYBhGAhMGwzASmDAYhpHAhMEwjAQmDIZhJDBhMAwjgQmDYRgJTBgMw0hgwmAYRgITBsMwEpgwGIaRwITBMIwEJgyGYSQwYTAMI4EJg2EYCUwYDMNIYMJgGEYCEwbDMBKYMBiGkcCEwTCMBCYMhmEkMGEwDCOBCYNhGAlMGAzDSGDCYBhGgtMKg4j8g4h0isjLsbRWEXlERLb4vzN9uojIV0WkQ0Q2isillTTeMIzKMJoaw7eBG4el3Qk8qqorgEf9PsBNwAr/ugP4+viYaRjGRHJaYVDVJ4DDw5JvAe7223cD74+lf0cdTwMzRGTeeBlrGMbEcKZ9DHNUdb/fPgDM8dsLgN2xfHt8WgIRuUNE1ovI+q6urjM0wzCMSnDWnY+qqoCewXl3qeoaVV3T3t5+tmYYhjGOnKkwHCw2EfzfTp++F1gUy7fQpxmGMYU4U2F4ALjNb98G3B9L/4QfnVgLHIs1OQzDmCKkT5dBRL4HXAO0icge4E+BvwC+LyK3AzuBj/jsDwE3Ax1AH/CpCthsGEaFOa0wqOrHT3LouhHyKvDZszXKMIzJxTwfDcNIYMJgGEYCEwbDMBKYMBiGkcCEwTCMBCYMhmEkMGEwDCOBCYNhGAlMGAzDSGDCYBhGAhMGwzASmDAYhpHAhMEwjAQmDIZhJDBhMAwjgQmDYRgJTBgMw0hgwmAYRgITBsMwEpgwGIaRwITBMIwEJgyGYSQwYTAMI4EJg2EYCUwYDMNIYMJgGEYCEwbDMBKYMBiGkcCEwTCMBCYMhmEkMGEwDCOBCYNhGAlMGAzDSHBaYRCRRSLyuIhsEpFXROTzPr1VRB4RkS3+70yfLiLyVRHpEJGNInJppd+EYRjjy2hqDAXgD1R1FbAW+KyIrALuBB5V1RXAo34f4CZghX/dAXx93K02DKOinFYYVHW/qv7ab/cCm4EFwC3A3T7b3cD7/fYtwHfU8TQwQ0TmjbvlhmFUjDH1MYjIUmA18AwwR1X3+0MHgDl+ewGwO3baHp9mGMYUYdTCICKNwA+BL6hqT/yYqiqgY7mxiNwhIutFZH1XV9dYTjUMo8KMShhEJIMThe+q6o988sFiE8H/7fTpe4FFsdMX+rQhqOpdqrpGVde0t7efqf2GYVSA0YxKCPAtYLOq/nXs0APAbX77NuD+WPon/OjEWuBYrMlhGMYUID2KPFcCvw28JCIv+LQ/Av4C+L6I3A7sBD7ijz0E3Ax0AH3Ap8bVYsMwKs5phUFVfwHISQ5fN0J+BT57lnYZhjGJmOejYRgJTBgMw0hgwmAYRoLRdD4aVUs3rn93KjIHyE62EcZJMGGYsnQB1wPbOXnf8EjoGPLH8w4/r7g/0t8iJzsX4M+AL4zWaGOCMWGYsvQCW3Gjxp1ALW6O2z6gDoiAbcA6YAvOS/064Ge4Eegs8HNgGW6+23AUeBi42l/7QcrTYfpxnu7Lga8CHwJe99d/Dcj5e5/jz+0B4pNs/xjYdDZv3qgwJgxTGsW5kPwO8F2cw+k+oBl4HvgocAJYDLTgCvJ5wCU4f7SLgceAT5DsbgqBjcANQB54BSco8Xtvwo1MtwDz/fFaYAC4HLgPaAc+CCyJnTvrLN+3UWms83FKo7gmxXm4We8tPj3AFegluFrBO3BP+9n++GZc/0Qa+ADJJkCRbtzTn1ie4msTrqbydiCFm0dXJAe8BPwJbhb+w2fzJo1JwIRhSiO4ZsOPcVX9o0AGV+BrfJ6lwHeAH+EKbwbXdAhwtYIBXNPg67iaQZEAeBNuisxduFrGLlwtYBPwbeAIrtAvwQnUj4FH/Hkbcd7xLzO0tmBMBawpMaURXHNhOU4IZuGq9Cn/EuBa4BBOEOYA78GJyaeAQaAV9zNoINlxeAtOTEJ/ruBqHzXA53y+FK7p8ilcDeOtQBvwp8Axf87ccX7fRqUxYZjyNDB0MmvDsON1Jzk+O5amwK0kfw4BTmjitPm/TcPS6/2rSLN/GVMRa0oYuKe6PSOMMiYMhmEkMGEwDCOBCYNhGAlMGAzDSGDCYBhGAhMGwzASmDAYhpHAhMEwjAQmDIZhJDBhMAwjgQmDYRgJTBgMw0hgwmAYRgITBsMwEpgwGIaRwITBMIwEJgyGYSQwYTAMI4EJw1SmGMl9CuPewhvgjbzBsEB/UwiNbYkvTsMOTAnKq1goEcWnk/pV7May3J5RKUwYpjSK8jVkioRnL69g+QpQh/JHPr0G4XdB5/oMg7gl9uomx1DDhGEqUVoiVl2dQVAOH7mHnbtaEFGiSAgEEEUjl1sCBRVUYfbsE3R1NaC+hAaiROqvqhAE7hpFgkBRFRoachTCgIH+NBIoAkPyxamrKxAEyokTGdra+jh0qBxSvigMy5YdJSwEdHbtJpMNWTj/OHW17bjl7gZxC9xchFvlypgMTisMIlILPIFbZSQN3Keqfyoiy4B7cKucPA/8tqrmRKQGt/TRZbgVSD6qqjsqZP+0Q4e1G/L5gH/63psRUdb/eh5vu3IXK8/r5tv/dBGplPK2K3ehKjz5y0X8wRee5t77VrFpczvrLt/LurW7eeqZhfT3Z6ivz7NyRTe/eGoRr29p5bLV+1l3xV62bpvJ/oON9PTU0N1dx1Vv3c2ihT3c84MLOXykNmHf+Su7yWZDNr40m0/+9kYe/OkKug/7J78oQRDxP/78cQ511/PCxtnkCsLnP/M8dbUh5YV0lzHl2kdvMEbT+TgIXKuqF+PWKbtRRNYCfwl8RVXPxa1VdrvPfztwxKd/xec
<base64-encoded PNG figure data elided>\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {
+ "tags": [],
+ "needs_background": "light"
+ }
+ }
+ ]
+ }
+ ]
+}
diff --git a/efficient_det_ros/utils/__pycache__/utils.cpython-37.pyc b/efficient_det_ros/utils/__pycache__/utils.cpython-37.pyc
new file mode 100644
index 0000000..19b088c
Binary files /dev/null and b/efficient_det_ros/utils/__pycache__/utils.cpython-37.pyc differ
diff --git a/efficient_det_ros/utils/sync_batchnorm/__init__.py b/efficient_det_ros/utils/sync_batchnorm/__init__.py
new file mode 100644
index 0000000..a10989f
--- /dev/null
+++ b/efficient_det_ros/utils/sync_batchnorm/__init__.py
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+# File : __init__.py
+# Author : Jiayuan Mao
+# Email : maojiayuan@gmail.com
+# Date : 27/01/2018
+#
+# This file is part of Synchronized-BatchNorm-PyTorch.
+# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
+# Distributed under MIT License.
+
+from .batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d
+from .batchnorm import patch_sync_batchnorm, convert_model
+from .replicate import DataParallelWithCallback, patch_replication_callback
diff --git a/efficient_det_ros/utils/sync_batchnorm/__pycache__/__init__.cpython-37.pyc b/efficient_det_ros/utils/sync_batchnorm/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 0000000..73ab357
Binary files /dev/null and b/efficient_det_ros/utils/sync_batchnorm/__pycache__/__init__.cpython-37.pyc differ
diff --git a/efficient_det_ros/utils/sync_batchnorm/__pycache__/batchnorm.cpython-37.pyc b/efficient_det_ros/utils/sync_batchnorm/__pycache__/batchnorm.cpython-37.pyc
new file mode 100644
index 0000000..67fb643
Binary files /dev/null and b/efficient_det_ros/utils/sync_batchnorm/__pycache__/batchnorm.cpython-37.pyc differ
diff --git a/efficient_det_ros/utils/sync_batchnorm/__pycache__/comm.cpython-37.pyc b/efficient_det_ros/utils/sync_batchnorm/__pycache__/comm.cpython-37.pyc
new file mode 100644
index 0000000..46bf04b
Binary files /dev/null and b/efficient_det_ros/utils/sync_batchnorm/__pycache__/comm.cpython-37.pyc differ
diff --git a/efficient_det_ros/utils/sync_batchnorm/__pycache__/replicate.cpython-37.pyc b/efficient_det_ros/utils/sync_batchnorm/__pycache__/replicate.cpython-37.pyc
new file mode 100644
index 0000000..aa0d86a
Binary files /dev/null and b/efficient_det_ros/utils/sync_batchnorm/__pycache__/replicate.cpython-37.pyc differ
diff --git a/efficient_det_ros/utils/sync_batchnorm/batchnorm.py b/efficient_det_ros/utils/sync_batchnorm/batchnorm.py
new file mode 100644
index 0000000..e1bf74f
--- /dev/null
+++ b/efficient_det_ros/utils/sync_batchnorm/batchnorm.py
@@ -0,0 +1,394 @@
+# -*- coding: utf-8 -*-
+# File : batchnorm.py
+# Author : Jiayuan Mao
+# Email : maojiayuan@gmail.com
+# Date : 27/01/2018
+#
+# This file is part of Synchronized-BatchNorm-PyTorch.
+# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
+# Distributed under MIT License.
+
+import collections
+import contextlib
+
+import torch
+import torch.nn.functional as F
+
+from torch.nn.modules.batchnorm import _BatchNorm
+
+try:
+ from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast
+except ImportError:
+ ReduceAddCoalesced = Broadcast = None
+
+try:
+ from jactorch.parallel.comm import SyncMaster
+ from jactorch.parallel.data_parallel import JacDataParallel as DataParallelWithCallback
+except ImportError:
+ from .comm import SyncMaster
+ from .replicate import DataParallelWithCallback
+
+__all__ = [
+ 'SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d',
+ 'patch_sync_batchnorm', 'convert_model'
+]
+
+
+def _sum_ft(tensor):
+ """sum over the first and last dimention"""
+ return tensor.sum(dim=0).sum(dim=-1)
+
+
+def _unsqueeze_ft(tensor):
+ """add new dimensions at the front and the tail"""
+ return tensor.unsqueeze(0).unsqueeze(-1)
+
+
+_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size'])
+_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std'])
+
+
+class _SynchronizedBatchNorm(_BatchNorm):
+ def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True):
+ assert ReduceAddCoalesced is not None, 'Can not use Synchronized Batch Normalization without CUDA support.'
+
+ super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine)
+
+ self._sync_master = SyncMaster(self._data_parallel_master)
+
+ self._is_parallel = False
+ self._parallel_id = None
+ self._slave_pipe = None
+
+ def forward(self, input):
+ # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation.
+ if not (self._is_parallel and self.training):
+ return F.batch_norm(
+ input, self.running_mean, self.running_var, self.weight, self.bias,
+ self.training, self.momentum, self.eps)
+
+ # Resize the input to (B, C, -1).
+ input_shape = input.size()
+ input = input.view(input.size(0), self.num_features, -1)
+
+ # Compute the sum and square-sum.
+ sum_size = input.size(0) * input.size(2)
+ input_sum = _sum_ft(input)
+ input_ssum = _sum_ft(input ** 2)
+
+ # Reduce-and-broadcast the statistics.
+ if self._parallel_id == 0:
+ mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size))
+ else:
+ mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size))
+
+ # Compute the output.
+ if self.affine:
+ # MJY:: Fuse the multiplication for speed.
+ output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias)
+ else:
+ output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std)
+
+ # Reshape it.
+ return output.view(input_shape)
+
+ def __data_parallel_replicate__(self, ctx, copy_id):
+ self._is_parallel = True
+ self._parallel_id = copy_id
+
+ # parallel_id == 0 means master device.
+ if self._parallel_id == 0:
+ ctx.sync_master = self._sync_master
+ else:
+ self._slave_pipe = ctx.sync_master.register_slave(copy_id)
+
+ def _data_parallel_master(self, intermediates):
+ """Reduce the sum and square-sum, compute the statistics, and broadcast it."""
+
+ # Always using same "device order" makes the ReduceAdd operation faster.
+ # Thanks to:: Tete Xiao (http://tetexiao.com/)
+ intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device())
+
+ to_reduce = [i[1][:2] for i in intermediates]
+ to_reduce = [j for i in to_reduce for j in i] # flatten
+ target_gpus = [i[1].sum.get_device() for i in intermediates]
+
+ sum_size = sum([i[1].sum_size for i in intermediates])
+ sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce)
+ mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size)
+
+ broadcasted = Broadcast.apply(target_gpus, mean, inv_std)
+
+ outputs = []
+ for i, rec in enumerate(intermediates):
+ outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2])))
+
+ return outputs
+
+ def _compute_mean_std(self, sum_, ssum, size):
+ """Compute the mean and standard-deviation with sum and square-sum. This method
+ also maintains the moving average on the master device."""
+ assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.'
+ mean = sum_ / size
+ sumvar = ssum - sum_ * mean
+ unbias_var = sumvar / (size - 1)
+ bias_var = sumvar / size
+
+ if hasattr(torch, 'no_grad'):
+ with torch.no_grad():
+ self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data
+ self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data
+ else:
+ self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data
+ self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data
+
+ return mean, bias_var.clamp(self.eps) ** -0.5
+
+
+class SynchronizedBatchNorm1d(_SynchronizedBatchNorm):
+ r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a
+ mini-batch.
+
+ .. math::
+
+ y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
+
+ This module differs from the built-in PyTorch BatchNorm1d as the mean and
+ standard-deviation are reduced across all devices during training.
+
+ For example, when one uses `nn.DataParallel` to wrap the network during
+ training, PyTorch's implementation normalizes the tensor on each device using
+ the statistics only on that device, which accelerates the computation and
+ is also easy to implement, but the statistics might be inaccurate.
+ Instead, in this synchronized version, the statistics will be computed
+ over all training samples distributed on multiple devices.
+
+ Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
+ as the built-in PyTorch implementation.
+
+ The mean and standard-deviation are calculated per-dimension over
+ the mini-batches and gamma and beta are learnable parameter vectors
+ of size C (where C is the input size).
+
+ During training, this layer keeps a running estimate of its computed mean
+ and variance. The running sum is kept with a default momentum of 0.1.
+
+ During evaluation, this running mean/variance is used for normalization.
+
+ Because the BatchNorm is done over the `C` dimension, computing statistics
+ on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm
+
+ Args:
+ num_features: num_features from an expected input of size
+ `batch_size x num_features [x width]`
+ eps: a value added to the denominator for numerical stability.
+ Default: 1e-5
+ momentum: the value used for the running_mean and running_var
+ computation. Default: 0.1
+ affine: a boolean value that when set to ``True``, gives the layer learnable
+ affine parameters. Default: ``True``
+
+ Shape::
+ - Input: :math:`(N, C)` or :math:`(N, C, L)`
+ - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input)
+
+ Examples:
+ >>> # With Learnable Parameters
+ >>> m = SynchronizedBatchNorm1d(100)
+ >>> # Without Learnable Parameters
+ >>> m = SynchronizedBatchNorm1d(100, affine=False)
+ >>> input = torch.autograd.Variable(torch.randn(20, 100))
+ >>> output = m(input)
+ """
+
+ def _check_input_dim(self, input):
+ if input.dim() != 2 and input.dim() != 3:
+ raise ValueError('expected 2D or 3D input (got {}D input)'
+ .format(input.dim()))
+ super(SynchronizedBatchNorm1d, self)._check_input_dim(input)
+
+
+class SynchronizedBatchNorm2d(_SynchronizedBatchNorm):
+ r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch
+ of 3d inputs
+
+ .. math::
+
+ y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
+
+ This module differs from the built-in PyTorch BatchNorm2d as the mean and
+ standard-deviation are reduced across all devices during training.
+
+ For example, when one uses `nn.DataParallel` to wrap the network during
+ training, PyTorch's implementation normalizes the tensor on each device using
+ the statistics only on that device, which accelerates the computation and
+ is also easy to implement, but the statistics might be inaccurate.
+ Instead, in this synchronized version, the statistics will be computed
+ over all training samples distributed on multiple devices.
+
+ Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
+ as the built-in PyTorch implementation.
+
+ The mean and standard-deviation are calculated per-dimension over
+ the mini-batches and gamma and beta are learnable parameter vectors
+ of size C (where C is the input size).
+
+ During training, this layer keeps a running estimate of its computed mean
+ and variance. The running sum is kept with a default momentum of 0.1.
+
+ During evaluation, this running mean/variance is used for normalization.
+
+ Because the BatchNorm is done over the `C` dimension, computing statistics
+ on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm
+
+ Args:
+ num_features: num_features from an expected input of
+ size batch_size x num_features x height x width
+ eps: a value added to the denominator for numerical stability.
+ Default: 1e-5
+ momentum: the value used for the running_mean and running_var
+ computation. Default: 0.1
+ affine: a boolean value that when set to ``True``, gives the layer learnable
+ affine parameters. Default: ``True``
+
+ Shape::
+ - Input: :math:`(N, C, H, W)`
+ - Output: :math:`(N, C, H, W)` (same shape as input)
+
+ Examples:
+ >>> # With Learnable Parameters
+ >>> m = SynchronizedBatchNorm2d(100)
+ >>> # Without Learnable Parameters
+ >>> m = SynchronizedBatchNorm2d(100, affine=False)
+ >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45))
+ >>> output = m(input)
+ """
+
+ def _check_input_dim(self, input):
+ if input.dim() != 4:
+ raise ValueError('expected 4D input (got {}D input)'
+ .format(input.dim()))
+ super(SynchronizedBatchNorm2d, self)._check_input_dim(input)
+
+
+class SynchronizedBatchNorm3d(_SynchronizedBatchNorm):
+ r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch
+ of 4d inputs
+
+ .. math::
+
+ y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
+
+ This module differs from the built-in PyTorch BatchNorm3d as the mean and
+ standard-deviation are reduced across all devices during training.
+
+ For example, when one uses `nn.DataParallel` to wrap the network during
+ training, PyTorch's implementation normalizes the tensor on each device using
+ the statistics only on that device, which accelerates the computation and
+ is also easy to implement, but the statistics might be inaccurate.
+ Instead, in this synchronized version, the statistics will be computed
+ over all training samples distributed on multiple devices.
+
+ Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
+ as the built-in PyTorch implementation.
+
+ The mean and standard-deviation are calculated per-dimension over
+ the mini-batches and gamma and beta are learnable parameter vectors
+ of size C (where C is the input size).
+
+ During training, this layer keeps a running estimate of its computed mean
+ and variance. The running sum is kept with a default momentum of 0.1.
+
+ During evaluation, this running mean/variance is used for normalization.
+
+ Because the BatchNorm is done over the `C` dimension, computing statistics
+ on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm
+ or Spatio-temporal BatchNorm
+
+ Args:
+ num_features: num_features from an expected input of
+ size batch_size x num_features x depth x height x width
+ eps: a value added to the denominator for numerical stability.
+ Default: 1e-5
+ momentum: the value used for the running_mean and running_var
+ computation. Default: 0.1
+ affine: a boolean value that when set to ``True``, gives the layer learnable
+ affine parameters. Default: ``True``
+
+ Shape::
+ - Input: :math:`(N, C, D, H, W)`
+ - Output: :math:`(N, C, D, H, W)` (same shape as input)
+
+ Examples:
+ >>> # With Learnable Parameters
+ >>> m = SynchronizedBatchNorm3d(100)
+ >>> # Without Learnable Parameters
+ >>> m = SynchronizedBatchNorm3d(100, affine=False)
+ >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10))
+ >>> output = m(input)
+ """
+
+ def _check_input_dim(self, input):
+ if input.dim() != 5:
+ raise ValueError('expected 5D input (got {}D input)'
+ .format(input.dim()))
+ super(SynchronizedBatchNorm3d, self)._check_input_dim(input)
+
+
+@contextlib.contextmanager
+def patch_sync_batchnorm():
+ import torch.nn as nn
+
+ backup = nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d
+
+ nn.BatchNorm1d = SynchronizedBatchNorm1d
+ nn.BatchNorm2d = SynchronizedBatchNorm2d
+ nn.BatchNorm3d = SynchronizedBatchNorm3d
+
+ yield
+
+ nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d = backup
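+
+# A usage sketch for `patch_sync_batchnorm` (the model construction is
+# illustrative; any nn.BatchNorm*d layer created inside the `with` block
+# becomes a SynchronizedBatchNorm*d):
+#
+#     import torchvision
+#     with patch_sync_batchnorm():
+#         model = torchvision.models.resnet18()
+#     model = DataParallelWithCallback(model, device_ids=[0, 1])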
+
+
+def convert_model(module):
+ """Traverse the input module and its child recursively
+ and replace all instance of torch.nn.modules.batchnorm.BatchNorm*N*d
+ to SynchronizedBatchNorm*N*d
+
+ Args:
+ module: the input module needs to be convert to SyncBN model
+
+ Examples:
+ >>> import torch.nn as nn
+ >>> import torchvision
+ >>> # m is a standard pytorch model
+ >>> m = torchvision.models.resnet18(True)
+ >>> m = nn.DataParallel(m)
+ >>> # after convert, m is using SyncBN
+ >>> m = convert_model(m)
+ """
+ if isinstance(module, torch.nn.DataParallel):
+ mod = module.module
+ mod = convert_model(mod)
+ mod = DataParallelWithCallback(mod, device_ids=module.device_ids)
+ return mod
+
+ mod = module
+ for pth_module, sync_module in zip([torch.nn.modules.batchnorm.BatchNorm1d,
+ torch.nn.modules.batchnorm.BatchNorm2d,
+ torch.nn.modules.batchnorm.BatchNorm3d],
+ [SynchronizedBatchNorm1d,
+ SynchronizedBatchNorm2d,
+ SynchronizedBatchNorm3d]):
+ if isinstance(module, pth_module):
+ mod = sync_module(module.num_features, module.eps, module.momentum, module.affine)
+ mod.running_mean = module.running_mean
+ mod.running_var = module.running_var
+ if module.affine:
+ mod.weight.data = module.weight.data.clone().detach()
+ mod.bias.data = module.bias.data.clone().detach()
+
+ for name, child in module.named_children():
+ mod.add_module(name, convert_model(child))
+
+ return mod
diff --git a/efficient_det_ros/utils/sync_batchnorm/batchnorm_reimpl.py b/efficient_det_ros/utils/sync_batchnorm/batchnorm_reimpl.py
new file mode 100644
index 0000000..18145c3
--- /dev/null
+++ b/efficient_det_ros/utils/sync_batchnorm/batchnorm_reimpl.py
@@ -0,0 +1,74 @@
+#! /usr/bin/env python3
+# -*- coding: utf-8 -*-
+# File : batchnorm_reimpl.py
+# Author : acgtyrant
+# Date : 11/01/2018
+#
+# This file is part of Synchronized-BatchNorm-PyTorch.
+# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
+# Distributed under MIT License.
+
+import torch
+import torch.nn as nn
+import torch.nn.init as init
+
+__all__ = ['BatchNorm2dReimpl']
+
+
+class BatchNorm2dReimpl(nn.Module):
+ """
+ A re-implementation of batch normalization, used for testing the numerical
+ stability.
+
+ Author: acgtyrant
+ See also:
+ https://github.com/vacancy/Synchronized-BatchNorm-PyTorch/issues/14
+ """
+ def __init__(self, num_features, eps=1e-5, momentum=0.1):
+ super().__init__()
+
+ self.num_features = num_features
+ self.eps = eps
+ self.momentum = momentum
+ self.weight = nn.Parameter(torch.empty(num_features))
+ self.bias = nn.Parameter(torch.empty(num_features))
+ self.register_buffer('running_mean', torch.zeros(num_features))
+ self.register_buffer('running_var', torch.ones(num_features))
+ self.reset_parameters()
+
+ def reset_running_stats(self):
+ self.running_mean.zero_()
+ self.running_var.fill_(1)
+
+ def reset_parameters(self):
+ self.reset_running_stats()
+ init.uniform_(self.weight)
+ init.zeros_(self.bias)
+
+ def forward(self, input_):
+ batchsize, channels, height, width = input_.size()
+ numel = batchsize * height * width
+ input_ = input_.permute(1, 0, 2, 3).contiguous().view(channels, numel)
+ sum_ = input_.sum(1)
+ sum_of_square = input_.pow(2).sum(1)
+ mean = sum_ / numel
+ sumvar = sum_of_square - sum_ * mean
+
+ self.running_mean = (
+ (1 - self.momentum) * self.running_mean
+ + self.momentum * mean.detach()
+ )
+ unbias_var = sumvar / (numel - 1)
+ self.running_var = (
+ (1 - self.momentum) * self.running_var
+ + self.momentum * unbias_var.detach()
+ )
+
+ bias_var = sumvar / numel
+ inv_std = 1 / (bias_var + self.eps).pow(0.5)
+ output = (
+ (input_ - mean.unsqueeze(1)) * inv_std.unsqueeze(1) *
+ self.weight.unsqueeze(1) + self.bias.unsqueeze(1))
+
+ return output.view(channels, batchsize, height, width).permute(1, 0, 2, 3).contiguous()
+
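+# A quick parity check against torch.nn.BatchNorm2d (illustrative; strict=False
+# skips nn.BatchNorm2d's extra num_batches_tracked buffer):
+#
+#     ref = nn.BatchNorm2d(8)
+#     re = BatchNorm2dReimpl(8)
+#     re.load_state_dict(ref.state_dict(), strict=False)
+#     x = torch.randn(4, 8, 16, 16)
+#     assert torch.allclose(ref(x), re(x), atol=1e-5)
+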
diff --git a/efficient_det_ros/utils/sync_batchnorm/comm.py b/efficient_det_ros/utils/sync_batchnorm/comm.py
new file mode 100644
index 0000000..922f8c4
--- /dev/null
+++ b/efficient_det_ros/utils/sync_batchnorm/comm.py
@@ -0,0 +1,137 @@
+# -*- coding: utf-8 -*-
+# File : comm.py
+# Author : Jiayuan Mao
+# Email : maojiayuan@gmail.com
+# Date : 27/01/2018
+#
+# This file is part of Synchronized-BatchNorm-PyTorch.
+# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
+# Distributed under MIT License.
+
+import queue
+import collections
+import threading
+
+__all__ = ['FutureResult', 'SlavePipe', 'SyncMaster']
+
+
+class FutureResult(object):
+ """A thread-safe future implementation. Used only as one-to-one pipe."""
+
+ def __init__(self):
+ self._result = None
+ self._lock = threading.Lock()
+ self._cond = threading.Condition(self._lock)
+
+ def put(self, result):
+ with self._lock:
+ assert self._result is None, 'Previous result hasn\'t been fetched.'
+ self._result = result
+ self._cond.notify()
+
+ def get(self):
+ with self._lock:
+ if self._result is None:
+ self._cond.wait()
+
+ res = self._result
+ self._result = None
+ return res
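+
+# One-to-one handoff sketch: a producer thread calls `fut.put(value)` while a
+# consumer thread blocks in `fut.get()` until the value arrives; `get` also
+# clears the slot so the pair can be reused.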
+
+
+_MasterRegistry = collections.namedtuple('MasterRegistry', ['result'])
+_SlavePipeBase = collections.namedtuple('_SlavePipeBase', ['identifier', 'queue', 'result'])
+
+
+class SlavePipe(_SlavePipeBase):
+ """Pipe for master-slave communication."""
+
+ def run_slave(self, msg):
+ self.queue.put((self.identifier, msg))
+ ret = self.result.get()
+ self.queue.put(True)
+ return ret
+
+
+class SyncMaster(object):
+ """An abstract `SyncMaster` object.
+
+ - During replication, as data parallel triggers a callback on each module, all slave devices should
+ call `register_slave(id)` and obtain a `SlavePipe` to communicate with the master.
+ - During the forward pass, the master device invokes `run_master`; all messages from the slave devices are
+ collected and passed to a registered callback.
+ - After receiving the messages, the master device gathers the information and determines the message to be
+ passed back to each slave device.
+ """
+
+ def __init__(self, master_callback):
+ """
+
+ Args:
+ master_callback: a callback to be invoked after having collected messages from slave devices.
+ """
+ self._master_callback = master_callback
+ self._queue = queue.Queue()
+ self._registry = collections.OrderedDict()
+ self._activated = False
+
+ def __getstate__(self):
+ return {'master_callback': self._master_callback}
+
+ def __setstate__(self, state):
+ self.__init__(state['master_callback'])
+
+ def register_slave(self, identifier):
+ """
+ Register a slave device.
+
+ Args:
+ identifier: an identifier, usually is the device id.
+
+ Returns: a `SlavePipe` object which can be used to communicate with the master device.
+
+ """
+ if self._activated:
+ assert self._queue.empty(), 'Queue is not clean before next initialization.'
+ self._activated = False
+ self._registry.clear()
+ future = FutureResult()
+ self._registry[identifier] = _MasterRegistry(future)
+ return SlavePipe(identifier, self._queue, future)
+
+ def run_master(self, master_msg):
+ """
+ Main entry for the master device in each forward pass.
+ The messages were first collected from each devices (including the master device), and then
+ an callback will be invoked to compute the message to be sent back to each devices
+ (including the master device).
+
+ Args:
+ master_msg: the message that the master want to send to itself. This will be placed as the first
+ message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example.
+
+ Returns: the message to be sent back to the master device.
+
+ """
+ self._activated = True
+
+ intermediates = [(0, master_msg)]
+ for i in range(self.nr_slaves):
+ intermediates.append(self._queue.get())
+
+ results = self._master_callback(intermediates)
+ assert results[0][0] == 0, 'The first result should belong to the master.'
+
+ for i, res in results:
+ if i == 0:
+ continue
+ self._registry[i].result.put(res)
+
+ for i in range(self.nr_slaves):
+ assert self._queue.get() is True
+
+ return results[0][1]
+
+ @property
+ def nr_slaves(self):
+ return len(self._registry)
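+
+# Protocol sketch (comments only; each replica runs in its own thread):
+#
+#     master = SyncMaster(callback)     # callback reduces all collected messages
+#     pipe = master.register_slave(1)   # one pipe per slave copy
+#     out0 = master.run_master(msg0)    # master thread; blocks until slaves report
+#     out1 = pipe.run_slave(msg1)       # slave thread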
diff --git a/efficient_det_ros/utils/sync_batchnorm/replicate.py b/efficient_det_ros/utils/sync_batchnorm/replicate.py
new file mode 100644
index 0000000..b71c7b8
--- /dev/null
+++ b/efficient_det_ros/utils/sync_batchnorm/replicate.py
@@ -0,0 +1,94 @@
+# -*- coding: utf-8 -*-
+# File : replicate.py
+# Author : Jiayuan Mao
+# Email : maojiayuan@gmail.com
+# Date : 27/01/2018
+#
+# This file is part of Synchronized-BatchNorm-PyTorch.
+# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
+# Distributed under MIT License.
+
+import functools
+
+from torch.nn.parallel.data_parallel import DataParallel
+
+__all__ = [
+ 'CallbackContext',
+ 'execute_replication_callbacks',
+ 'DataParallelWithCallback',
+ 'patch_replication_callback'
+]
+
+
+class CallbackContext(object):
+ pass
+
+
+def execute_replication_callbacks(modules):
+ """
+ Execute a replication callback `__data_parallel_replicate__` on each module created by the original replication.
+
+ The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`
+
+ Note that, as all modules are isomorphic, we assign each sub-module a context
+ (shared among multiple copies of this module on different devices).
+ Through this context, different copies can share some information.
+
+ We guarantee that the callback on the master copy (the first copy) will be called ahead of the callbacks
+ of any slave copies.
+ """
+ master_copy = modules[0]
+ nr_modules = len(list(master_copy.modules()))
+ ctxs = [CallbackContext() for _ in range(nr_modules)]
+
+ for i, module in enumerate(modules):
+ for j, m in enumerate(module.modules()):
+ if hasattr(m, '__data_parallel_replicate__'):
+ m.__data_parallel_replicate__(ctxs[j], i)
+
+
+class DataParallelWithCallback(DataParallel):
+ """
+ Data Parallel with a replication callback.
+
+ A replication callback `__data_parallel_replicate__` of each module will be invoked after the module is created by
+ the original `replicate` function.
+ The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`
+
+ Examples:
+ > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
+ > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
+ # sync_bn.__data_parallel_replicate__ will be invoked.
+ """
+
+ def replicate(self, module, device_ids):
+ modules = super(DataParallelWithCallback, self).replicate(module, device_ids)
+ execute_replication_callbacks(modules)
+ return modules
+
+
+def patch_replication_callback(data_parallel):
+ """
+ Monkey-patch an existing `DataParallel` object. Add the replication callback.
+ Useful when you have customized `DataParallel` implementation.
+
+ Examples:
+ > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
+ > sync_bn = DataParallel(sync_bn, device_ids=[0, 1])
+ > patch_replication_callback(sync_bn)
+ # this is equivalent to
+ > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
+ > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
+ """
+
+ assert isinstance(data_parallel, DataParallel)
+
+ old_replicate = data_parallel.replicate
+
+ @functools.wraps(old_replicate)
+ def new_replicate(module, device_ids):
+ modules = old_replicate(module, device_ids)
+ execute_replication_callbacks(modules)
+ return modules
+
+ data_parallel.replicate = new_replicate
diff --git a/efficient_det_ros/utils/sync_batchnorm/unittest.py b/efficient_det_ros/utils/sync_batchnorm/unittest.py
new file mode 100644
index 0000000..bed56f1
--- /dev/null
+++ b/efficient_det_ros/utils/sync_batchnorm/unittest.py
@@ -0,0 +1,29 @@
+# -*- coding: utf-8 -*-
+# File : unittest.py
+# Author : Jiayuan Mao
+# Email : maojiayuan@gmail.com
+# Date : 27/01/2018
+#
+# This file is part of Synchronized-BatchNorm-PyTorch.
+# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
+# Distributed under MIT License.
+
+import unittest
+import torch
+
+
+class TorchTestCase(unittest.TestCase):
+ def assertTensorClose(self, x, y):
+ adiff = float((x - y).abs().max())
+ if (y == 0).all():
+ rdiff = 'NaN'
+ else:
+ rdiff = float((adiff / y).abs().max())
+
+ message = (
+ 'Tensor close check failed\n'
+ 'adiff={}\n'
+ 'rdiff={}\n'
+ ).format(adiff, rdiff)
+ self.assertTrue(torch.allclose(x, y), message)
+
diff --git a/efficient_det_ros/utils/utils.py b/efficient_det_ros/utils/utils.py
new file mode 100644
index 0000000..0b69340
--- /dev/null
+++ b/efficient_det_ros/utils/utils.py
@@ -0,0 +1,314 @@
+# Author: Zylo117
+
+import math
+import os
+import uuid
+from glob import glob
+from typing import Union
+
+import cv2
+import numpy as np
+import torch
+import webcolors
+from torch import nn
+from torch.nn.init import _calculate_fan_in_and_fan_out, _no_grad_normal_
+from torchvision.ops.boxes import batched_nms
+
+from utils.sync_batchnorm import SynchronizedBatchNorm2d
+
+
+def invert_affine(metas: Union[float, list, tuple], preds):
+ for i in range(len(preds)):
+ if len(preds[i]['rois']) == 0:
+ continue
+ else:
+ if isinstance(metas, float):
+ preds[i]['rois'][:, [0, 2]] = preds[i]['rois'][:, [0, 2]] / metas
+ preds[i]['rois'][:, [1, 3]] = preds[i]['rois'][:, [1, 3]] / metas
+ else:
+ new_w, new_h, old_w, old_h, padding_w, padding_h = metas[i]
+ preds[i]['rois'][:, [0, 2]] = preds[i]['rois'][:, [0, 2]] / (new_w / old_w)
+ preds[i]['rois'][:, [1, 3]] = preds[i]['rois'][:, [1, 3]] / (new_h / old_h)
+ return preds
+
+
+def aspectaware_resize_padding(image, width, height, interpolation=None, means=None):
+ old_h, old_w, c = image.shape
+ if old_w > old_h:
+ new_w = width
+ new_h = int(width / old_w * old_h)
+ else:
+ new_w = int(height / old_h * old_w)
+ new_h = height
+
+ canvas = np.zeros((height, width, c), np.float32)
+ if means is not None:
+ canvas[...] = means
+
+ if new_w != old_w or new_h != old_h:
+ if interpolation is None:
+ image = cv2.resize(image, (new_w, new_h))
+ else:
+ image = cv2.resize(image, (new_w, new_h), interpolation=interpolation)
+
+ padding_h = height - new_h
+ padding_w = width - new_w
+
+ if c > 1:
+ canvas[:new_h, :new_w] = image
+ else:
+ if len(image.shape) == 2:
+ canvas[:new_h, :new_w, 0] = image
+ else:
+ canvas[:new_h, :new_w] = image
+
+ return canvas, new_w, new_h, old_w, old_h, padding_w, padding_h,
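+
+# Example: a 1280x720 frame with width=height=512 is resized to 512x288 and
+# placed on a 512x512 canvas, giving padding_w = 0 and padding_h = 224.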
+
+
+def preprocess(*image_path, max_size=512, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
+ ori_imgs = [cv2.imread(img_path)[..., ::-1] for img_path in image_path]
+ normalized_imgs = [(img / 255 - mean) / std for img in ori_imgs]
+ imgs_meta = [aspectaware_resize_padding(img, max_size, max_size,
+ means=None) for img in normalized_imgs]
+ framed_imgs = [img_meta[0] for img_meta in imgs_meta]
+ framed_metas = [img_meta[1:] for img_meta in imgs_meta]
+
+ return ori_imgs, framed_imgs, framed_metas
+
+
+def preprocess_video(*frame_from_video, max_size=512, mean=(0.406, 0.456, 0.485), std=(0.225, 0.224, 0.229)):
+ # mean/std are BGR-ordered because raw video frames are BGR; the channels
+ # are flipped to RGB inside the aspectaware_resize_padding call below.
+ ori_imgs = frame_from_video
+ normalized_imgs = [(img / 255 - mean) / std for img in ori_imgs]
+ imgs_meta = [aspectaware_resize_padding(img[..., ::-1], max_size, max_size,
+ means=None) for img in normalized_imgs]
+ framed_imgs = [img_meta[0] for img_meta in imgs_meta]
+ framed_metas = [img_meta[1:] for img_meta in imgs_meta]
+
+ return ori_imgs, framed_imgs, framed_metas
+
+
+def postprocess(x, anchors, regression, classification, regressBoxes, clipBoxes, threshold, iou_threshold):
+ transformed_anchors = regressBoxes(anchors, regression)
+ transformed_anchors = clipBoxes(transformed_anchors, x)
+ scores = torch.max(classification, dim=2, keepdim=True)[0]
+ scores_over_thresh = (scores > threshold)[:, :, 0]
+ out = []
+ for i in range(x.shape[0]):
+ if scores_over_thresh[i].sum() == 0:
+ out.append({
+ 'rois': np.array(()),
+ 'class_ids': np.array(()),
+ 'scores': np.array(()),
+ })
+ continue
+
+ classification_per = classification[i, scores_over_thresh[i, :], ...].permute(1, 0)
+ transformed_anchors_per = transformed_anchors[i, scores_over_thresh[i, :], ...]
+ scores_per = scores[i, scores_over_thresh[i, :], ...]
+ scores_, classes_ = classification_per.max(dim=0)
+ anchors_nms_idx = batched_nms(transformed_anchors_per, scores_per[:, 0], classes_, iou_threshold=iou_threshold)
+
+ if anchors_nms_idx.shape[0] != 0:
+ classes_ = classes_[anchors_nms_idx]
+ scores_ = scores_[anchors_nms_idx]
+ boxes_ = transformed_anchors_per[anchors_nms_idx, :]
+
+ out.append({
+ 'rois': boxes_.cpu().numpy(),
+ 'class_ids': classes_.cpu().numpy(),
+ 'scores': scores_.cpu().numpy(),
+ })
+ else:
+ out.append({
+ 'rois': np.array(()),
+ 'class_ids': np.array(()),
+ 'scores': np.array(()),
+ })
+
+ return out
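+
+# A minimal inference sketch tying these helpers together (`model`, `anchors`,
+# `regressBoxes`, and `clipBoxes` come from the EfficientDet code elsewhere in
+# this repo; the two 0.2 thresholds are illustrative):
+#
+#     ori_imgs, framed_imgs, framed_metas = preprocess('img.jpg', max_size=512)
+#     x = torch.from_numpy(np.stack(framed_imgs)).permute(0, 3, 1, 2).float()
+#     with torch.no_grad():
+#         _, regression, classification, anchors = model(x)
+#     preds = postprocess(x, anchors, regression, classification,
+#                         regressBoxes, clipBoxes, 0.2, 0.2)
+#     preds = invert_affine(framed_metas, preds)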
+
+
+def display(preds, imgs, obj_list, imshow=True, imwrite=False):
+ for i in range(len(imgs)):
+ if len(preds[i]['rois']) == 0:
+ continue
+
+ imgs[i] = imgs[i].copy()
+
+ for j in range(len(preds[i]['rois'])):
+ (x1, y1, x2, y2) = preds[i]['rois'][j].astype(int)
+ obj = obj_list[preds[i]['class_ids'][j]]
+ score = float(preds[i]['scores'][j])
+
+ plot_one_box(imgs[i], [x1, y1, x2, y2], label=obj, score=score,
+ color=color_list[get_index_label(obj, obj_list)])
+ if imshow:
+ cv2.imshow('img', imgs[i])
+ cv2.waitKey(0)
+
+ if imwrite:
+ os.makedirs('test/', exist_ok=True)
+ cv2.imwrite(f'test/{uuid.uuid4().hex}.jpg', imgs[i])
+
+
+def replace_w_sync_bn(m):
+ for var_name in dir(m):
+ target_attr = getattr(m, var_name)
+ if type(target_attr) == torch.nn.BatchNorm2d:
+ num_features = target_attr.num_features
+ eps = target_attr.eps
+ momentum = target_attr.momentum
+ affine = target_attr.affine
+
+ # get parameters
+ running_mean = target_attr.running_mean
+ running_var = target_attr.running_var
+ if affine:
+ weight = target_attr.weight
+ bias = target_attr.bias
+
+ setattr(m, var_name,
+ SynchronizedBatchNorm2d(num_features, eps, momentum, affine))
+
+ target_attr = getattr(m, var_name)
+ # set parameters
+ target_attr.running_mean = running_mean
+ target_attr.running_var = running_var
+ if affine:
+ target_attr.weight = weight
+ target_attr.bias = bias
+
+ for var_name, children in m.named_children():
+ replace_w_sync_bn(children)
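+
+# Usage sketch (the backbone name is hypothetical; any nn.Module works):
+#
+#     model = EfficientDetBackbone(...)
+#     replace_w_sync_bn(model)            # mutates BN layers in place
+#     model = CustomDataParallel(model, num_gpus=2)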
+
+
+class CustomDataParallel(nn.DataParallel):
+ """
+ Force splitting data across all GPUs instead of sending everything to cuda:0 first and then moving it around.
+ """
+
+ def __init__(self, module, num_gpus):
+ super().__init__(module)
+ self.num_gpus = num_gpus
+
+ def scatter(self, inputs, kwargs, device_ids):
+ # More like scatter and data prep at the same time. The point is we prep the data in such a way
+ # that no scatter is necessary, and there's no need to shuffle stuff around different GPUs.
+ devices = ['cuda:' + str(x) for x in range(self.num_gpus)]
+ splits = inputs[0].shape[0] // self.num_gpus
+
+ if splits == 0:
+ raise Exception('Batch size must be at least num_gpus.')
+
+ return [(inputs[0][splits * device_idx: splits * (device_idx + 1)].to(f'cuda:{device_idx}', non_blocking=True),
+ inputs[1][splits * device_idx: splits * (device_idx + 1)].to(f'cuda:{device_idx}', non_blocking=True))
+ for device_idx in range(len(devices))], \
+ [kwargs] * len(devices)
+
+
+def get_last_weights(weights_path):
+ weights_path = glob(weights_path + f'/*.pth')
+ weights_path = sorted(weights_path,
+ key=lambda x: int(x.rsplit('_')[-1].rsplit('.')[0]),
+ reverse=True)[0]
+ print(f'using weights {weights_path}')
+ return weights_path
+
+
+def init_weights(model):
+ for name, module in model.named_modules():
+ is_conv_layer = isinstance(module, nn.Conv2d)
+
+ if is_conv_layer:
+ if "conv_list" or "header" in name:
+ variance_scaling_(module.weight.data)
+ else:
+ nn.init.kaiming_uniform_(module.weight.data)
+
+ if module.bias is not None:
+ if "classifier.header" in name:
+ bias_value = -np.log((1 - 0.01) / 0.01)
+ torch.nn.init.constant_(module.bias, bias_value)
+ else:
+ module.bias.data.zero_()
+
+
+def variance_scaling_(tensor, gain=1.):
+ # type: (Tensor, float) -> Tensor
+ r"""
+ initializer for SeparableConv in Regressor/Classifier
+ reference: https://keras.io/zh/initializers/ VarianceScaling
+ """
+ fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
+ std = math.sqrt(gain / float(fan_in))
+
+ return _no_grad_normal_(tensor, 0., std)
+
+
+STANDARD_COLORS = [
+ 'LawnGreen', 'Chartreuse', 'Aqua', 'Beige', 'Azure', 'BlanchedAlmond', 'Bisque',
+ 'Aquamarine', 'BlueViolet', 'BurlyWood', 'CadetBlue', 'AntiqueWhite',
+ 'Chocolate', 'Coral', 'CornflowerBlue', 'Cornsilk', 'Crimson', 'Cyan',
+ 'DarkCyan', 'DarkGoldenRod', 'DarkGrey', 'DarkKhaki', 'DarkOrange',
+ 'DarkOrchid', 'DarkSalmon', 'DarkSeaGreen', 'DarkTurquoise', 'DarkViolet',
+ 'DeepPink', 'DeepSkyBlue', 'DodgerBlue', 'FireBrick', 'FloralWhite',
+ 'ForestGreen', 'Fuchsia', 'Gainsboro', 'GhostWhite', 'Gold', 'GoldenRod',
+ 'Salmon', 'Tan', 'HoneyDew', 'HotPink', 'IndianRed', 'Ivory', 'Khaki',
+ 'Lavender', 'LavenderBlush', 'AliceBlue', 'LemonChiffon', 'LightBlue',
+ 'LightCoral', 'LightCyan', 'LightGoldenRodYellow', 'LightGray', 'LightGrey',
+ 'LightGreen', 'LightPink', 'LightSalmon', 'LightSeaGreen', 'LightSkyBlue',
+ 'LightSlateGray', 'LightSlateGrey', 'LightSteelBlue', 'LightYellow', 'Lime',
+ 'LimeGreen', 'Linen', 'Magenta', 'MediumAquaMarine', 'MediumOrchid',
+ 'MediumPurple', 'MediumSeaGreen', 'MediumSlateBlue', 'MediumSpringGreen',
+ 'MediumTurquoise', 'MediumVioletRed', 'MintCream', 'MistyRose', 'Moccasin',
+ 'NavajoWhite', 'OldLace', 'Olive', 'OliveDrab', 'Orange', 'OrangeRed',
+ 'Orchid', 'PaleGoldenRod', 'PaleGreen', 'PaleTurquoise', 'PaleVioletRed',
+ 'PapayaWhip', 'PeachPuff', 'Peru', 'Pink', 'Plum', 'PowderBlue', 'Purple',
+ 'Red', 'RosyBrown', 'RoyalBlue', 'SaddleBrown', 'Green', 'SandyBrown',
+ 'SeaGreen', 'SeaShell', 'Sienna', 'Silver', 'SkyBlue', 'SlateBlue',
+ 'SlateGray', 'SlateGrey', 'Snow', 'SpringGreen', 'SteelBlue', 'GreenYellow',
+ 'Teal', 'Thistle', 'Tomato', 'Turquoise', 'Violet', 'Wheat', 'White',
+ 'WhiteSmoke', 'Yellow', 'YellowGreen'
+]
+
+
+def from_colorname_to_bgr(color):
+ rgb_color = webcolors.name_to_rgb(color)
+ result = (rgb_color.blue, rgb_color.green, rgb_color.red)
+ return result
+
+
+def standard_to_bgr(list_color_name):
+ standard = []
+ for i in range(len(list_color_name) - 36): # -36 used to match the len(obj_list)
+ standard.append(from_colorname_to_bgr(list_color_name[i]))
+ return standard
+
+
+def get_index_label(label, obj_list):
+ index = int(obj_list.index(label))
+ return index
+
+
+def plot_one_box(img, coord, label=None, score=None, color=None, line_thickness=None):
+ tl = line_thickness or int(round(0.001 * max(img.shape[0:2]))) # line thickness
+ color = color
+ c1, c2 = (int(coord[0]), int(coord[1])), (int(coord[2]), int(coord[3]))
+ cv2.rectangle(img, c1, c2, color, thickness=tl)
+ if label:
+ tf = max(tl - 2, 1) # font thickness
+ s_size = cv2.getTextSize(str('{:.0%}'.format(score)), 0, fontScale=float(tl) / 3, thickness=tf)[0]
+ t_size = cv2.getTextSize(label, 0, fontScale=float(tl) / 3, thickness=tf)[0]
+ c2 = c1[0] + t_size[0] + s_size[0] + 15, c1[1] - t_size[1] - 3
+ cv2.rectangle(img, c1, c2, color, -1) # filled
+ cv2.putText(img, '{}: {:.0%}'.format(label, score), (c1[0], c1[1] - 2), 0, float(tl) / 3, [0, 0, 0],
+ thickness=tf, lineType=cv2.LINE_AA)
+
+
+color_list = standard_to_bgr(STANDARD_COLORS)
+
+
+def boolean_string(s):
+ if s not in {'False', 'True'}:
+ raise ValueError('Not a valid boolean string')
+ return s == 'True'
diff --git a/efficient_det_ros/weights/efficientdet-d2.pth b/efficient_det_ros/weights/efficientdet-d2.pth
new file mode 100644
index 0000000..9db9775
Binary files /dev/null and b/efficient_det_ros/weights/efficientdet-d2.pth differ
diff --git a/gating_control_ros/.images/object_visualizer_layout.png b/gating_control_ros/.images/object_visualizer_layout.png
new file mode 100644
index 0000000..c992c7a
Binary files /dev/null and b/gating_control_ros/.images/object_visualizer_layout.png differ
diff --git a/gating_control_ros/.images/visualizer_layout.png b/gating_control_ros/.images/visualizer_layout.png
new file mode 100644
index 0000000..c992c7a
Binary files /dev/null and b/gating_control_ros/.images/visualizer_layout.png differ
diff --git a/gating_control_ros/CMakeLists.txt b/gating_control_ros/CMakeLists.txt
new file mode 100644
index 0000000..acc5207
--- /dev/null
+++ b/gating_control_ros/CMakeLists.txt
@@ -0,0 +1,76 @@
+cmake_minimum_required(VERSION 2.8.3)
+project(kitti_visualizer)
+
+## Compile as C++11, supported in ROS Kinetic and newer
+add_compile_options(-std=c++11)
+
+## Find catkin macros and libraries
+find_package(catkin REQUIRED COMPONENTS
+ cv_bridge
+ image_transport
+ pcl_conversions
+ pcl_ros
+ roscpp
+ rospy
+ rviz
+ sensor_msgs
+ visualization_msgs
+ autoware_tracker
+)
+
+## The catkin_package macro generates cmake config files for your package
+catkin_package(
+ CATKIN_DEPENDS cv_bridge image_transport pcl_conversions pcl_ros roscpp rospy rviz sensor_msgs visualization_msgs
+)
+
+## Specify additional locations of header files
+include_directories(
+ include
+ ${catkin_INCLUDE_DIRS}
+)
+
+## Declare a C++ executable
+add_executable(object_visualizer_node src/object_visualizer/object_visualizer.cc src/object_visualizer/object_visualizer_node.cc)
+add_executable(track_visualizer_node src/track_visualizer/track_visualizer.cc src/track_visualizer/track_visualizer_node.cc)
+
+## Specify libraries to link a library or executable target against
+target_link_libraries(object_visualizer_node ${catkin_LIBRARIES})
+target_link_libraries(object_visualizer_node ${OpenCV_LIBRARIES})
+target_link_libraries(track_visualizer_node ${catkin_LIBRARIES})
+target_link_libraries(track_visualizer_node ${OpenCV_LIBRARIES})
+
+## This setting causes Qt's "MOC" generation to happen automatically
+set(CMAKE_AUTOMOC ON)
+
+## Use the Qt version that rviz used so they are compatible
+if(rviz_QT_VERSION VERSION_LESS "5")
+ message(STATUS "Using Qt4 based on the rviz_QT_VERSION: ${rviz_QT_VERSION}")
+ find_package(Qt4 ${rviz_QT_VERSION} EXACT REQUIRED QtCore QtGui)
+ ## pull in all required include dirs, define QT_LIBRARIES, etc
+ include(${QT_USE_FILE})
+else()
+ message(STATUS "Using Qt5 based on the rviz_QT_VERSION: ${rviz_QT_VERSION}")
+ find_package(Qt5 ${rviz_QT_VERSION} EXACT REQUIRED Core Widgets)
+ ## make target_link_libraries(${QT_LIBRARIES}) pull in all required dependencies
+ set(QT_LIBRARIES Qt5::Widgets)
+endif()
+
+## Define QT_NO_KEYWORDS, avoid defining "emit", "slots", etc
+add_definitions(-DQT_NO_KEYWORDS)
+
+## Specify the list of source files
+set(SRC_FILES
+ src/common/rviz_command_button.cc
+ ${QT_MOC}
+)
+
+## Specify the list of header files
+set(HEADER_FILES
+ include/common/rviz_command_button.h
+)
+
+## Declare library
+add_library(${PROJECT_NAME}_button ${SRC_FILES} ${HEADER_FILES})
+
+## Specify libraries to link a library or executable target against
+target_link_libraries(${PROJECT_NAME}_button ${QT_LIBRARIES} ${catkin_LIBRARIES})
diff --git a/gating_control_ros/README.md b/gating_control_ros/README.md
new file mode 100644
index 0000000..45fc469
--- /dev/null
+++ b/gating_control_ros/README.md
@@ -0,0 +1,121 @@
+# Visualize KITTI Data Based on ROS and Rviz
+
+
+
+This package is used to visualize KITTI data using ROS and Rviz. So far, it has the following main features:
+- [x] [Visualize object data](#Visualize-Object-Data)
+- [x] [Visualize track data](#Visualize-Track-Data)
+- [ ] Visualize road data
+- [ ] Visualize raw data
+
+## Dependencies
+- [ROS](https://www.ros.org/)
+
+ All procedures are based on ROS, which is commonly used in robotics and self-driving cars. We tested this package on the `Kinetic` version, but we believe that it can be used on other versions as well.
+
+- [jsk_recognition_msgs](https://jsk-docs.readthedocs.io/projects/jsk_recognition/en/latest/jsk_recognition_msgs/index.html)
+
+ This is required for visualizing 3D bounding boxes of objects.
+
+## Usage
+
+### Installation
+
+#### 1. ROS
+[Ubuntu install of ROS Kinetic](http://wiki.ros.org/kinetic/Installation/Ubuntu)
+
+#### 2. jsk_recognition_msgs
+```bash
+sudo apt-get install ros-kinetic-jsk-recognition-msgs
+sudo apt-get install ros-kinetic-jsk-rviz-plugins
+```
+#### 3. kitti_visualizer package
+```bash
+cd ros_workspace/src
+git clone git@github.com:xiaoliangabc/kitti_visualizer.git
+cd ros_workspace
+catkin_make
+source devel/setup.bash
+```
+
+### Visualize Object Data
+
+#### Download Object Data
+Download object data (velodyne, image_2, calib, label_2) from the [KITTI Object Detection Dataset](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d) and arrange the folder structure as follows:
+```
+object
+ testing
+ calib
+ image_2
+ results
+ velodyne
+ training
+ calib
+ image_2
+ label_2
+ velodyne
+```
+
+#### Modify Config File
+Open [launch/object_visualizer.launch](launch/object_visualizer.launch) file, change the following configs:
+- `data_path`: folder that contains the object data
+- `dataset`: which dataset to visualize (`training` / `testing`)
+- `frame_size`: number of frames in the corresponding dataset (`training: 7481` / `testing: 7518`)
+- `current_frame`: frame index at which to start the visualization
+
+#### Launch object_visualizer
+Run
+```
+roslaunch kitti_visualizer object_visualizer.launch
+```
+Then `Rviz` will be launched; its layout looks like
+
+
+
+#### Switch Frames
+Move the mouse to the bottom left of the screen:
+- Click the **Prev** button: switch to the previous frame
+- Click the **Next** button: switch to the next frame
+- Type frame number to **Frame string** box: jump to the frame you specified
+
+
+### Visualize Track Data
+
+#### Download Track Data
+Download track data (velodyne, image_2, calib, label_2) from the [KITTI Object Tracking Dataset](http://www.cvlibs.net/datasets/kitti/eval_tracking.php) and arrange the folder structure as follows:
+```
+tracking
+ testing
+ calib
+ image_02
+ results
+ velodyne
+ oxts
+ training
+ calib
+ image_02
+ label_02
+ velodyne
+ oxts
+```
+
+#### Modify Config File
+Open [launch/track_visualizer.launch](launch/track_visualizer.launch) file, change the following configs:
+- `data_path`: folder that contains the track data
+- `dataset`: which dataset to visualize (`training` / `testing`)
+- `scene`: which scene to visualize (`00xx`)
+- `current_frame`: frame index at which to start the visualization
+
+#### Launch track_visualizer
+Run
+```
+roslaunch kitti_visualizer track_visualizer.launch
+```
+Then `Rviz` will be launched; its layout looks like
+
+
+
+## Reference
+[kitti_object_vis](https://github.com/kuixu/kitti_object_vis)
+
+[second-ros](https://github.com/lgsvl/second-ros)
diff --git a/gating_control_ros/include/common/rviz_command_button.h b/gating_control_ros/include/common/rviz_command_button.h
new file mode 100644
index 0000000..64ddee4
--- /dev/null
+++ b/gating_control_ros/include/common/rviz_command_button.h
@@ -0,0 +1,66 @@
+#ifndef KITTI_VISUALIZER_COMMON_RVIZ_COMMAND_BUTTON_H_
+#define KITTI_VISUALIZER_COMMON_RVIZ_COMMAND_BUTTON_H_
+
+#include <ros/ros.h>
+
+#include <rviz/config.h>
+#include <rviz/panel.h>
+
+#include <QLineEdit>
+#include <QString>
+
+namespace kitti_visualizer {
+// Declare Button subclass of rviz::Panel. Every panel which can be added via
+// the Panels/Add_New_Panel menu is a subclass of rviz::Panel.
+class CommandButton : public rviz::Panel {
+ // This class uses Qt slots and is a subclass of QObject, so it needs
+ // the Q_OBJECT macro.
+ Q_OBJECT
+ public:
+ // QWidget subclass constructors usually take a parent widget
+ // parameter (which usually defaults to 0). At the same time,
+ // pluginlib::ClassLoader creates instances by calling the default
+ // constructor (with no arguments). Taking the parameter and giving
+ // a default of 0 lets the default constructor work and also lets
+ // someone using the class for something else to pass in a parent
+ // widget as they normally would with Qt.
+ CommandButton(QWidget* parent = 0);
+
+ // Declare overrides of rviz::Panel functions for saving and loading data from
+ // the config file.
+ virtual void load(const rviz::Config& config);
+ virtual void save(rviz::Config config) const;
+
+ // Declare some internal slots.
+ protected Q_SLOTS:
+ // Call when button is clicked.
+ void ButtonResponse(QString command);
+
+ // updateTopic() reads the topic name from the QLineEdit
+ void UpdateFrame();
+
+ // Protected member variables.
+ protected:
+ // The ROS node handle.
+ ros::NodeHandle nh_;
+
+ // The ROS publisher for the command.
+ ros::Publisher command_publisher_;
+
+ // One-line text editor for entering the outgoing ROS topic name.
+ QLineEdit* output_frame_editor_;
+}; // class CommandButton
+
+} // namespace kitti_visualizer
+
+#endif // KITTI_VISUALIZER_COMMON_RVIZ_COMMAND_BUTTON_H_
diff --git a/gating_control_ros/include/common/transform_utils.h b/gating_control_ros/include/common/transform_utils.h
new file mode 100644
index 0000000..6dae883
--- /dev/null
+++ b/gating_control_ros/include/common/transform_utils.h
@@ -0,0 +1,200 @@
+#ifndef KITTI_VISUALIZER_COMMON_TRANSFORM_UTILS_H_
+#define KITTI_VISUALIZER_COMMON_TRANSFORM_UTILS_H_
+
+#include <Eigen/Dense>
+
+// Part 1: Rotation vector
+// Rotation vector to rotation matrix
+inline Eigen::Matrix3d RotationVectorToRotationMatrix(
+ const Eigen::Vector3d& rotation_vector) {
+ double norm = rotation_vector.norm();
+ Eigen::AngleAxisd angle_axis(norm, rotation_vector / norm);
+ return angle_axis.matrix();
+}
+
+// Rotation vector to euler angles
+inline Eigen::Vector3d RotationVectorToEulerAngles(
+ const Eigen::Vector3d& rotation_vector) {
+ double norm = rotation_vector.norm();
+ Eigen::AngleAxisd angle_axis(norm, rotation_vector / norm);
+ return angle_axis.matrix().eulerAngles(2, 1, 0);
+}
+
+// Rotation vector to quaternion
+inline Eigen::Quaterniond RotationVectorToQuaternion(
+ const Eigen::Vector3d& rotation_vector) {
+ double norm = rotation_vector.norm();
+ Eigen::AngleAxisd angle_axis(norm, rotation_vector / norm);
+ Eigen::Quaterniond quaternion;
+ quaternion = angle_axis;
+ return quaternion;
+}
+
+// Part 2: Rotation matrix
+// Rotation matrix to rotation vector
+inline Eigen::Vector3d RotationMatrixToRotationVector(
+ const Eigen::Matrix3d& rotation_matrix) {
+ Eigen::AngleAxisd angle_axis(rotation_matrix);
+ return angle_axis.angle() * angle_axis.axis();
+}
+
+// Rotation matrix to euler angles
+inline Eigen::Vector3d RotationMatrixToEulerAngles(
+ const Eigen::Matrix3d& rotation_matrix) {
+ return rotation_matrix.eulerAngles(2, 1, 0);
+}
+
+// Rotation matrix to quaternion
+inline Eigen::Quaterniond RotationMatrixToQuaternion(
+ const Eigen::Matrix3d& rotation_matrix) {
+ Eigen::Quaterniond quaternion;
+ quaternion = rotation_matrix;
+ return quaternion;
+}
+
+// Part 3: Euler angles
+// Euler angles to rotation vector
+inline Eigen::Vector3d EulerAnglesToRotationVector(
+ const Eigen::Vector3d& euler_angles) {
+ Eigen::AngleAxisd roll_angle(
+ Eigen::AngleAxisd(euler_angles(2), Eigen::Vector3d::UnitX()));
+ Eigen::AngleAxisd pitch_angle(
+ Eigen::AngleAxisd(euler_angles(1), Eigen::Vector3d::UnitY()));
+ Eigen::AngleAxisd yaw_angle(
+ Eigen::AngleAxisd(euler_angles(0), Eigen::Vector3d::UnitZ()));
+ Eigen::AngleAxisd angle_axis;
+ angle_axis = yaw_angle * pitch_angle * roll_angle;
+ return angle_axis.angle() * angle_axis.axis();
+}
+
+// Euler angles to rotation matrix
+inline Eigen::Matrix3d EulerAnglesToRotationMatrix(
+ const Eigen::Vector3d& euler_angles) {
+ Eigen::AngleAxisd roll_angle(
+ Eigen::AngleAxisd(euler_angles(2), Eigen::Vector3d::UnitX()));
+ Eigen::AngleAxisd pitch_angle(
+ Eigen::AngleAxisd(euler_angles(1), Eigen::Vector3d::UnitY()));
+ Eigen::AngleAxisd yaw_angle(
+ Eigen::AngleAxisd(euler_angles(0), Eigen::Vector3d::UnitZ()));
+ Eigen::Matrix3d rotation_matrix;
+ rotation_matrix = yaw_angle * pitch_angle * roll_angle;
+ return rotation_matrix;
+}
+
+// Euler angles to quaternion
+inline Eigen::Quaterniond EulerAnglesToQuaternion(
+ const Eigen::Vector3d& euler_angles) {
+ Eigen::AngleAxisd roll_angle(
+ Eigen::AngleAxisd(euler_angles(2), Eigen::Vector3d::UnitX()));
+ Eigen::AngleAxisd pitch_angle(
+ Eigen::AngleAxisd(euler_angles(1), Eigen::Vector3d::UnitY()));
+ Eigen::AngleAxisd yaw_angle(
+ Eigen::AngleAxisd(euler_angles(0), Eigen::Vector3d::UnitZ()));
+ Eigen::Quaterniond quaternion;
+ quaternion = yaw_angle * pitch_angle * roll_angle;
+ return quaternion;
+}
+
+// Part 4: Quaternion
+// Quaternion to rotation vector
+inline Eigen::Vector3d QuaternionToRotationVector(
+ const Eigen::Quaterniond& quaternion) {
+ Eigen::AngleAxisd angle_axis(quaternion);
+ return angle_axis.angle() * angle_axis.axis();
+}
+
+// Quaternion to rotation matrix
+inline Eigen::Matrix3d QuaternionToRotationMatrix(
+ const Eigen::Quaterniond& quaternion) {
+ return quaternion.matrix();
+}
+
+// Quaternion to euler angles
+inline Eigen::Vector3d QuaternionToEulerAngles(
+ const Eigen::Quaterniond& quaternion) {
+ return quaternion.matrix().eulerAngles(2, 1, 0);
+}
+
+// Part 5: To Transform affine
+// Rotation vector and translation vector to transform affine
+inline Eigen::Affine3d RotationVectorToTransformAffine(
+ const Eigen::Vector3d& rotation_vector,
+ const Eigen::Vector3d& translation_vector) {
+ Eigen::Affine3d transform_affine = Eigen::Affine3d::Identity();
+ transform_affine.linear() = RotationVectorToRotationMatrix(rotation_vector);
+ transform_affine.translation() = translation_vector;
+ return transform_affine;
+}
+
+// Rotation matrix and translation vector to transform affine
+inline Eigen::Affine3d RotationMatrixToTransformAffine(
+ const Eigen::Matrix3d& rotation_matrix,
+ const Eigen::Vector3d& translation_vector) {
+ Eigen::Affine3d transform_affine = Eigen::Affine3d::Identity();
+ transform_affine.linear() = rotation_matrix;
+ transform_affine.translation() = translation_vector;
+ return transform_affine;
+}
+
+// Euler angles and translation vector to transform affine
+inline Eigen::Affine3d EulerAnglesToTransformAffine(
+ const Eigen::Vector3d& euler_angles,
+ const Eigen::Vector3d& translation_vector) {
+ Eigen::Affine3d transform_affine = Eigen::Affine3d::Identity();
+ transform_affine.linear() = EulerAnglesToRotationMatrix(euler_angles);
+ transform_affine.translation() = translation_vector;
+ return transform_affine;
+}
+
+// Quaternion and translation vector to transform affine
+inline Eigen::Affine3d QuaternionToTransformAffine(
+ const Eigen::Quaterniond& quaternion,
+ const Eigen::Vector3d& translation_vector) {
+ Eigen::Affine3d transform_affine = Eigen::Affine3d::Identity();
+ transform_affine.linear() = QuaternionToRotationMatrix(quaternion);
+ transform_affine.translation() = translation_vector;
+ return transform_affine;
+}
+
+// Part 6: From Transform matrix
+// Transform matrix to rotation vector and translation vector
+inline void TransformMatrixToRotationVector(
+ const Eigen::Matrix4d& transform_matrix, Eigen::Vector3d& rotation_vector,
+ Eigen::Vector3d& translation_vector) {
+ Eigen::Affine3d transform_affine;
+ transform_affine.matrix() = transform_matrix;
+ rotation_vector = RotationMatrixToRotationVector(transform_affine.linear());
+ translation_vector = transform_affine.translation();
+}
+
+// Transform matrix to rotation matrix and translation vector
+inline void TransformMatrixToRotationMatrix(
+ const Eigen::Matrix4d& transform_matrix, Eigen::Matrix3d& rotation_matrix,
+ Eigen::Vector3d& translation_vector) {
+ Eigen::Affine3d transform_affine;
+ transform_affine.matrix() = transform_matrix;
+ rotation_matrix = transform_affine.linear();
+ translation_vector = transform_affine.translation();
+}
+
+// Transform matrix to euler angles and translation vector
+inline void TransformMatrixToEulerAngles(
+ const Eigen::Matrix4d& transform_matrix, Eigen::Vector3d& euler_angles,
+ Eigen::Vector3d& translation_vector) {
+ Eigen::Affine3d transform_affine;
+ transform_affine.matrix() = transform_matrix;
+ euler_angles = RotationMatrixToEulerAngles(transform_affine.linear());
+ translation_vector = transform_affine.translation();
+}
+
+// Transform matrix to quaternion and translation vector
+inline void TransformMatrixToQuaternion(const Eigen::Matrix4d& transform_matrix,
+ Eigen::Quaterniond& quaternion,
+ Eigen::Vector3d& translation_vector) {
+ Eigen::Affine3d transform_affine;
+ transform_affine.matrix() = transform_matrix;
+ quaternion = RotationMatrixToQuaternion(transform_affine.linear());
+ translation_vector = transform_affine.translation();
+}
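+
+// Example round trip through these helpers (a sketch with an assumed value):
+//   Eigen::Vector3d rv(0.0, 0.0, M_PI / 2.0);  // 90 deg about Z
+//   Eigen::Matrix3d R = RotationVectorToRotationMatrix(rv);
+//   Eigen::Vector3d rv_back = RotationMatrixToRotationVector(R);  // ~= rv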
+
+#endif // KITTI_VISUALIZER_COMMON_TRANSFORM_UTILS_H_
diff --git a/gating_control_ros/include/common/utils.h b/gating_control_ros/include/common/utils.h
new file mode 100644
index 0000000..674323e
--- /dev/null
+++ b/gating_control_ros/include/common/utils.h
@@ -0,0 +1,176 @@
+#ifndef KITTI_VISUALIZER_COMMON_UTILS_H_
+#define KITTI_VISUALIZER_COMMON_UTILS_H_
+
+#include <dirent.h>
+#include <string.h>
+
+#include <fstream>
+#include <sstream>
+#include <string>
+
+#include <ros/ros.h>
+
+#include <pcl/point_types.h>
+#include <pcl_ros/point_cloud.h>
+
+#include <Eigen/Dense>
+
+namespace kitti_visualizer {
+static int FolderFilesNumber(std::string path) {
+ DIR *dir;
+ struct dirent *ent;
+ int files_number = 0;
+ if ((dir = opendir(path.c_str())) != NULL) {
+ // Count the files and directories within the directory
+ while ((ent = readdir(dir)) != NULL) {
+ if (strcmp(ent->d_name, ".") == 0 || strcmp(ent->d_name, "..") == 0)
+ continue;
+ files_number++;
+ }
+ closedir(dir);
+ } else {
+ // Could not open directory
+ ROS_ERROR("Could not open directory: %s", path.c_str());
+ ros::shutdown();
+ }
+
+ return files_number;
+}
+
+static void ReadPointCloud(const std::string &in_file,
+ pcl::PointCloud<pcl::PointXYZI>::Ptr raw_cloud) {
+ // Load point cloud from .bin file
+ std::fstream input(in_file.c_str(), std::ios::in | std::ios::binary);
+ if (!input.good()) {
+ ROS_ERROR("Could not read file: %s", in_file.c_str());
+ exit(EXIT_FAILURE);
+ }
+ input.seekg(0, std::ios::beg);
+
+ // Transform .bin file to pcl cloud
+ while (input.good() && !input.eof()) {
+ pcl::PointXYZI pt;
+ // Read one point: four little-endian floats (x, y, z, intensity)
+ input.read((char *)&pt.x, 3 * sizeof(float));
+ input.read((char *)&pt.intensity, sizeof(float));
+ if (!input) break; // Drop the partial read at end of file
+ raw_cloud->points.push_back(pt);
+ }
+ raw_cloud->width = raw_cloud->points.size();
+ raw_cloud->height = 1;
+ input.close();
+}
+
+// static void ReadCalibMatrix(const std::string &file_path,
+// const std::string &matrix_name,
+// Eigen::MatrixXd &trans_matrix) {
+// // Open calib file
+// std::ifstream ifs(file_path);
+// if (!ifs) {
+// ROS_ERROR("File %s does not exist", file_path.c_str());
+// ros::shutdown();
+// }
+
+// // Read matrix
+// std::string temp_str;
+// while (std::getline(ifs, temp_str)) {
+// std::istringstream iss(temp_str);
+// std::string name;
+// iss >> name;
+// if (name == matrix_name) {
+// float temp_float;
+// for (int i = 0; i < trans_matrix.rows(); ++i) {
+// for (int j = 0; j < trans_matrix.cols(); ++j) {
+// if (iss.rdbuf()->in_avail() != 0) {
+// iss >> temp_float;
+// trans_matrix(i, j) = temp_float;
+// }
+// }
+// }
+// }
+// }
+// }
+
+static void ReadCalibMatrix(const std::string &file_path,
+ const std::string &matrix_name,
+ Eigen::MatrixXd &trans_matrix) {
+ // Open calib file
+ std::ifstream ifs(file_path);
+ if (!ifs) {
+ ROS_ERROR("File %s does not exist", file_path.c_str());
+ ros::shutdown();
+ }
+
+ // Read matrix
+ std::string temp_str;
+ while (std::getline(ifs, temp_str)) {
+ std::istringstream iss(temp_str);
+ std::string name;
+ iss >> name;
+ if (name == matrix_name) {
+ // P2 is the 3x4 camera projection matrix; the rectification matrices
+ // are 3x3 and the Tr_* extrinsics are 3x4, both padded here to a
+ // homogeneous 4x4
+ int cols = 0;
+ if (matrix_name == "P2:" || matrix_name == "Tr_velo_to_cam:" ||
+ matrix_name == "Tr_velo_cam" || matrix_name == "Tr_cam_to_road:") {
+ cols = 4;
+ } else if (matrix_name == "R0_rect:" || matrix_name == "R_rect") {
+ cols = 3;
+ } else {
+ continue; // Unknown matrix name, falls through to the error below
+ }
+ if (matrix_name == "P2:") {
+ trans_matrix = Eigen::MatrixXd::Zero(3, 4);
+ } else {
+ trans_matrix = Eigen::MatrixXd::Zero(4, 4);
+ trans_matrix(3, 3) = 1.0;
+ }
+ float temp_float;
+ for (int i = 0; i < 3; ++i) {
+ for (int j = 0; j < cols; ++j) {
+ iss >> temp_float;
+ trans_matrix(i, j) = temp_float;
+ }
+ }
+ return;
+ }
+ }
+ ROS_ERROR("Transform matrix %s does not exist", matrix_name.c_str());
+ ros::shutdown();
+}
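+
+// Example: each matrix is stored row-major on a single calib line, e.g.
+// (values elided)
+//   Tr_velo_to_cam: r11 r12 r13 t1 r21 r22 r23 t2 r31 r32 r33 t3
+// which ReadCalibMatrix pads to the homogeneous form [R t; 0 0 0 1]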
+
+} // namespace kitti_visualizer
+
+#endif // KITTI_VISUALIZER_COMMON_UTILS_H_
diff --git a/gating_control_ros/include/object_visualizer/object_visualizer.h b/gating_control_ros/include/object_visualizer/object_visualizer.h
new file mode 100644
index 0000000..013ee3c
--- /dev/null
+++ b/gating_control_ros/include/object_visualizer/object_visualizer.h
@@ -0,0 +1,122 @@
+#ifndef KITTI_VISUALIZER_OBJECT_VISUALIZER_OBJECT_VISUALIZER_H_
+#define KITTI_VISUALIZER_OBJECT_VISUALIZER_OBJECT_VISUALIZER_H_
+
+#include <ros/ros.h>
+
+#include <fstream>
+#include <iomanip>
+#include <sstream>
+#include <string>
+#include <vector>
+
+#include <cv_bridge/cv_bridge.h>
+#include <image_transport/image_transport.h>
+#include <opencv2/highgui/highgui.hpp>
+#include <opencv2/imgproc/imgproc.hpp>
+
+#include <pcl/point_types.h>
+#include <pcl_conversions/pcl_conversions.h>
+#include <pcl_ros/point_cloud.h>
+
+#include <jsk_recognition_msgs/BoundingBoxArray.h>
+#include <std_msgs/String.h>
+
+#include <tf/tf.h>
+
+#include <Eigen/Dense>
+
+#include "common/transform_utils.h"
+#include "common/utils.h"
+
+// Rui
+#include <pcl/common/common.h>
+#include <pcl/common/transforms.h>
+#include <pcl/filters/crop_box.h>
+#include <pcl/point_cloud.h>
+#include <sensor_msgs/PointCloud2.h>
+#include <tf/transform_datatypes.h>
+
+#include <boost/lexical_cast.hpp>
+#include <unistd.h>
+
+
+namespace kitti_visualizer {
+
+class ObjectVisualizer {
+ public:
+ ObjectVisualizer(ros::NodeHandle nh, ros::NodeHandle pnh);
+
+ // Visualize object data
+ void Visualizer();
+
+ private:
+ // Rui
+ void ObjectCallback(const pcl::PointCloud<pcl::PointXYZI>::Ptr input);
+
+ // Visualize point cloud
+ void PointCloudVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher);
+
+ // Visualize image
+ void ImageVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher);
+
+ // Draw 2D bounding boxes in image
+ void Draw2DBoundingBoxes(const std::string& file_prefix, cv::Mat& raw_image);
+
+ // Visualize 3D bounding boxes
+ void BoundingBoxesVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher);
+
+ // Transform 3D bounding boxes from camera to velodyne frame
+ jsk_recognition_msgs::BoundingBoxArray TransformBoundingBoxes(
+ const std::vector<std::vector<float> > detections,
+ const std::string& file_prefix);
+
+ // Parse detections from file
+ std::vector<std::vector<float> > ParseDetections(
+ const std::string& file_prefix);
+
+ // Subscribe command from Rviz
+ void CommandButtonCallback(const std_msgs::String::ConstPtr& in_command);
+
+ // Check that the file counts are valid
+ void AssertFilesNumber();
+
+ private:
+ ros::NodeHandle nh_;
+ ros::NodeHandle pnh_;
+
+ // Subscriber
+ ros::Subscriber sub_command_button_;
+
+ // Publisher
+ ros::Publisher pub_point_cloud_;
+ ros::Publisher pub_image_;
+ ros::Publisher pub_bounding_boxes_;
+
+ // Object data path
+ std::string data_path_;
+ std::string dataset_;
+
+ // Frame
+ int frame_size_;
+ int current_frame_;
+
+ // Rui
+ int count_car_;
+ int count_pedestrian_;
+ int count_cyclist_;
+
+ ros::Publisher pub_objects_;
+ ros::Publisher pub_objects_vis_;
+
+ pcl::PointCloud<pcl::PointXYZI>::Ptr cloud_final {new pcl::PointCloud<pcl::PointXYZI>};
+ sensor_msgs::PointCloud2 cloud_out;
+ pcl::CropBox<pcl::PointXYZI> crop_box;
+
+};
+
+} // namespace kitti_visualizer
+
+#endif // KITTI_VISUALIZER_OBJECT_VISUALIZER_OBJECT_VISUALIZER_H_
diff --git a/gating_control_ros/include/object_visualizer/object_visualizer.h.old b/gating_control_ros/include/object_visualizer/object_visualizer.h.old
new file mode 100644
index 0000000..8bb8a26
--- /dev/null
+++ b/gating_control_ros/include/object_visualizer/object_visualizer.h.old
@@ -0,0 +1,93 @@
+#ifndef KITTI_VISUALIZER_OBJECT_VISUALIZER_OBJECT_VISUALIZER_H_
+#define KITTI_VISUALIZER_OBJECT_VISUALIZER_OBJECT_VISUALIZER_H_
+
+#include <ros/ros.h>
+
+#include <fstream>
+#include <iomanip>
+#include <sstream>
+#include <string>
+
+#include <cv_bridge/cv_bridge.h>
+#include <image_transport/image_transport.h>
+#include <opencv2/highgui/highgui.hpp>
+#include <opencv2/imgproc/imgproc.hpp>
+
+#include <pcl/point_types.h>
+#include <pcl_conversions/pcl_conversions.h>
+#include <pcl_ros/point_cloud.h>
+
+#include <jsk_recognition_msgs/BoundingBoxArray.h>
+#include <std_msgs/String.h>
+
+#include <tf/tf.h>
+
+#include <boost/lexical_cast.hpp>
+#include <Eigen/Dense>
+
+#include "common/transform_utils.h"
+#include "common/utils.h"
+
+namespace kitti_visualizer {
+
+class ObjectVisualizer {
+ public:
+ ObjectVisualizer(ros::NodeHandle nh, ros::NodeHandle pnh);
+
+ // Visualize object data
+ void Visualizer();
+
+ private:
+ // Visualize point cloud
+ void PointCloudVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher);
+
+ // Visualize image
+ void ImageVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher);
+
+ // Draw 2D bounding boxes in image
+ void Draw2DBoundingBoxes(const std::string& file_prefix, cv::Mat& raw_image);
+
+ // Visualize 3D bounding boxes
+ void BoundingBoxesVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher);
+
+ // Transform 3D bounding boxes from camera to velodyne frame
+ jsk_recognition_msgs::BoundingBoxArray TransformBoundingBoxes(
+ const std::vector<std::vector<float> > detections,
+ const std::string& file_prefix);
+
+ // Parse detections from file
+ std::vector<std::vector<float> > ParseDetections(
+ const std::string& file_prefix);
+
+ // Subscribe command from Rviz
+ void CommandButtonCallback(const std_msgs::String::ConstPtr& in_command);
+
+ // Check that the file counts are valid
+ void AssertFilesNumber();
+
+ private:
+ ros::NodeHandle nh_;
+ ros::NodeHandle pnh_;
+
+ // Subscriber
+ ros::Subscriber sub_command_button_;
+
+ // Publisher
+ ros::Publisher pub_point_cloud_;
+ ros::Publisher pub_image_;
+ ros::Publisher pub_bounding_boxes_;
+
+ // Object data path
+ std::string data_path_;
+ std::string dataset_;
+
+ // Frame
+ int frame_size_;
+ int current_frame_;
+};
+
+} // namespace kitti_visualizer
+
+#endif // KITTI_VISUALIZER_OBJECT_VISUALIZER_OBJECT_VISUALIZER_H_
diff --git a/gating_control_ros/include/track_visualizer/track_visualizer.h b/gating_control_ros/include/track_visualizer/track_visualizer.h
new file mode 100644
index 0000000..bc8d04d
--- /dev/null
+++ b/gating_control_ros/include/track_visualizer/track_visualizer.h
@@ -0,0 +1,99 @@
+#ifndef KITTI_VISUALIZER_TRACK_VISUALIZER_TRACK_VISUALIZER_H_
+#define KITTI_VISUALIZER_TRACK_VISUALIZER_TRACK_VISUALIZER_H_
+
+#include <ros/ros.h>
+
+#include <fstream>
+#include <iomanip>
+#include <sstream>
+#include <string>
+
+#include <cv_bridge/cv_bridge.h>
+#include <image_transport/image_transport.h>
+#include <opencv2/highgui/highgui.hpp>
+#include <opencv2/imgproc/imgproc.hpp>
+
+#include <pcl/point_types.h>
+#include <pcl_conversions/pcl_conversions.h>
+#include <pcl_ros/point_cloud.h>
+
+#include <jsk_recognition_msgs/BoundingBoxArray.h>
+#include <std_msgs/String.h>
+
+#include <tf/tf.h>
+
+#include <boost/lexical_cast.hpp>
+#include <Eigen/Dense>
+
+#include "common/transform_utils.h"
+#include "common/utils.h"
+
+namespace kitti_visualizer {
+
+class TrackVisualizer {
+ public:
+ TrackVisualizer(ros::NodeHandle nh, ros::NodeHandle pnh);
+
+ // Visualize object data
+ void Visualizer(const int& frame);
+
+ // Get frame number
+ int GetFrameNumber();
+
+ private:
+ // Visualize point cloud
+ void PointCloudVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher);
+
+ // Visualize image
+ void ImageVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher);
+
+ // Draw 2D bounding boxes in image
+ void Draw2DBoundingBoxes(const std::string& file_prefix,
+ const std::string& folder, cv::Mat& raw_image);
+
+ // Visualize 3D bounding boxes
+ void BoundingBoxesVisualizer(const std::string& file_prefix,
+ const std::string& folder,
+ const ros::Publisher publisher);
+
+ // Transform 3D bounding boxes from camera to velodyne frame
+ jsk_recognition_msgs::BoundingBoxArray TransformBoundingBoxes(
+ const std::vector<std::vector<float> > detections,
+ const std::string& file_prefix);
+
+ // Parse detections from file
+ std::vector<std::vector<float> > ParseTracks(const std::string& file_prefix,
+ const std::string& folder);
+
+ // Subscribe command from Rviz
+ void CommandButtonCallback(const std_msgs::String::ConstPtr& in_command);
+
+ private:
+ ros::NodeHandle nh_;
+ ros::NodeHandle pnh_;
+
+ // Subscriber
+ ros::Subscriber sub_command_button_;
+
+ // Publisher
+ ros::Publisher pub_point_cloud_;
+ ros::Publisher pub_image_;
+ ros::Publisher pub_bounding_boxes_;
+ ros::Publisher pub_tracking_result_;
+
+ // Object data path
+ std::string data_path_;
+ std::string dataset_;
+
+ // Scene
+ std::string scene_;
+
+ // Frame
+ int frame_size_;
+ int current_frame_;
+};
+
+} // namespace kitti_visualizer
+
+#endif // KITTI_VISUALIZER_TRACK_VISUALIZER_TRACK_VISUALIZER_H_
diff --git a/gating_control_ros/launch/feature_evaluation_testing.launch b/gating_control_ros/launch/feature_evaluation_testing.launch
new file mode 100644
index 0000000..6ee67ea
--- /dev/null
+++ b/gating_control_ros/launch/feature_evaluation_testing.launch
@@ -0,0 +1,38 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/gating_control_ros/launch/feature_evaluation_training.launch b/gating_control_ros/launch/feature_evaluation_training.launch
new file mode 100644
index 0000000..4391b6a
--- /dev/null
+++ b/gating_control_ros/launch/feature_evaluation_training.launch
@@ -0,0 +1,38 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/gating_control_ros/launch/object_visualizer.launch b/gating_control_ros/launch/object_visualizer.launch
new file mode 100755
index 0000000..73bc727
--- /dev/null
+++ b/gating_control_ros/launch/object_visualizer.launch
@@ -0,0 +1,38 @@
+<launch>
+  <!-- Node and executable names below are assumed; the four params match
+       those read in object_visualizer.cc -->
+  <arg name="data_path" default="/data/kitti/object/" />
+  <arg name="dataset" default="training" />
+  <!-- 7481 frames in the KITTI object training split -->
+  <arg name="frame_size" default="7481" />
+  <arg name="current_frame" default="0" />
+
+  <node pkg="kitti_visualizer" type="object_visualizer" name="object_visualizer" output="screen">
+    <param name="data_path" value="$(arg data_path)" />
+    <param name="dataset" value="$(arg dataset)" />
+    <param name="frame_size" value="$(arg frame_size)" />
+    <param name="current_frame" value="$(arg current_frame)" />
+  </node>
+
+  <node pkg="rviz" type="rviz" name="rviz"
+        args="-d $(find kitti_visualizer)/rviz/object_visualizer.rviz" />
+</launch>
diff --git a/gating_control_ros/launch/track_visualizer.launch b/gating_control_ros/launch/track_visualizer.launch
new file mode 100755
index 0000000..e25443b
--- /dev/null
+++ b/gating_control_ros/launch/track_visualizer.launch
@@ -0,0 +1,18 @@
+<launch>
+  <!-- Node and executable names below are assumed; the params match the
+       configs documented in the README -->
+  <arg name="data_path" default="/data/kitti/tracking/" />
+  <arg name="dataset" default="training" />
+  <arg name="scene" default="0000" />
+  <arg name="current_frame" default="0" />
+
+  <node pkg="kitti_visualizer" type="track_visualizer" name="track_visualizer" output="screen">
+    <param name="data_path" value="$(arg data_path)" />
+    <param name="dataset" value="$(arg dataset)" />
+    <param name="scene" value="$(arg scene)" />
+    <param name="current_frame" value="$(arg current_frame)" />
+  </node>
+
+  <node pkg="rviz" type="rviz" name="rviz"
+        args="-d $(find kitti_visualizer)/rviz/track_visualizer.rviz" />
+</launch>
diff --git a/gating_control_ros/package.xml b/gating_control_ros/package.xml
new file mode 100644
index 0000000..d35b000
--- /dev/null
+++ b/gating_control_ros/package.xml
@@ -0,0 +1,47 @@
+<?xml version="1.0"?>
+<package format="2">
+  <name>kitti_visualizer</name>
+  <version>0.0.0</version>
+  <description>The kitti visualizer package</description>
+
+  <maintainer email="wxl@todo.todo">wxl</maintainer>
+
+  <license>BSD</license>
+
+  <author>wxl</author>
+
+  <buildtool_depend>catkin</buildtool_depend>
+  <build_depend>cv_bridge</build_depend>
+  <build_depend>image_transport</build_depend>
+  <build_depend>pcl_conversions</build_depend>
+  <build_depend>pcl_ros</build_depend>
+  <build_depend>roscpp</build_depend>
+  <build_depend>rospy</build_depend>
+  <build_depend>rviz</build_depend>
+  <build_depend>sensor_msgs</build_depend>
+  <build_depend>visualization_msgs</build_depend>
+  <build_export_depend>cv_bridge</build_export_depend>
+  <build_export_depend>image_transport</build_export_depend>
+  <build_export_depend>pcl_conversions</build_export_depend>
+  <build_export_depend>pcl_ros</build_export_depend>
+  <build_export_depend>roscpp</build_export_depend>
+  <build_export_depend>rospy</build_export_depend>
+  <build_export_depend>rviz</build_export_depend>
+  <build_export_depend>sensor_msgs</build_export_depend>
+  <build_export_depend>visualization_msgs</build_export_depend>
+  <exec_depend>cv_bridge</exec_depend>
+  <exec_depend>image_transport</exec_depend>
+  <exec_depend>pcl_conversions</exec_depend>
+  <exec_depend>pcl_ros</exec_depend>
+  <exec_depend>roscpp</exec_depend>
+  <exec_depend>rospy</exec_depend>
+  <exec_depend>rviz</exec_depend>
+  <exec_depend>sensor_msgs</exec_depend>
+  <exec_depend>visualization_msgs</exec_depend>
+
+  <export>
+    <rviz plugin="${prefix}/rviz_command_button.xml" />
+  </export>
+</package>
diff --git a/gating_control_ros/param/button_layout.yaml b/gating_control_ros/param/button_layout.yaml
new file mode 100644
index 0000000..b1c8ad7
--- /dev/null
+++ b/gating_control_ros/param/button_layout.yaml
@@ -0,0 +1,2 @@
+- name: 'Prev'
+- name: 'Next'
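+# CommandButton reads this list at startup and creates one push button per
+# entry; the button text is published verbatim on
+# /kitti_visualizer/command_button when clicked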
diff --git a/gating_control_ros/rviz/object_visualizer.rviz b/gating_control_ros/rviz/object_visualizer.rviz
new file mode 100644
index 0000000..a9e4803
--- /dev/null
+++ b/gating_control_ros/rviz/object_visualizer.rviz
@@ -0,0 +1,199 @@
+Panels:
+ - Class: rviz/Displays
+ Help Height: 0
+ Name: Displays
+ Property Tree Widget:
+ Expanded: ~
+ Splitter Ratio: 0.5
+ Tree Height: 251
+ - Class: rviz/Selection
+ Name: Selection
+ - Class: rviz/Tool Properties
+ Expanded:
+ - /2D Pose Estimate1
+ - /2D Nav Goal1
+ - /Publish Point1
+ Name: Tool Properties
+ Splitter Ratio: 0.5886790156364441
+ - Class: rviz/Views
+ Expanded:
+ - /Current View1
+ Name: Views
+ Splitter Ratio: 0.5
+ - Class: rviz/Time
+ Experimental: false
+ Name: Time
+ SyncMode: 0
+ SyncSource: PointCloud2
+ - Class: kitti_visualizer/Button
+ Name: Button
+ - Class: rviz/Selection
+ Name: Selection
+Preferences:
+ PromptSaveOnExit: true
+Toolbars:
+ toolButtonStyle: 2
+Visualization Manager:
+ Class: ""
+ Displays:
+ - Class: rviz/Axes
+ Enabled: true
+ Length: 2
+ Name: Axes
+ Radius: 0.10000000149011612
+ Reference Frame:
+ Value: true
+ - Alpha: 1
+ Autocompute Intensity Bounds: true
+ Autocompute Value Bounds:
+ Max Value: 10
+ Min Value: -10
+ Value: true
+ Axis: Z
+ Channel Name: intensity
+ Class: rviz/PointCloud2
+ Color: 255; 255; 255
+ Color Transformer: FlatColor
+ Decay Time: 0
+ Enabled: true
+ Invert Rainbow: false
+ Max Color: 255; 255; 255
+ Max Intensity: 0.9900000095367432
+ Min Color: 0; 0; 0
+ Min Intensity: 0
+ Name: PointCloud2
+ Position Transformer: XYZ
+ Queue Size: 10
+ Selectable: true
+ Size (Pixels): 3
+ Size (m): 0.029999999329447746
+ Style: Flat Squares
+ Topic: /kitti_visualizer/object/point_cloud
+ Unreliable: false
+ Use Fixed Frame: true
+ Use rainbow: true
+ Value: true
+ - Class: rviz/Image
+ Enabled: true
+ Image Topic: /kitti_visualizer/object/image
+ Max Value: 1
+ Median window: 5
+ Min Value: 0
+ Name: Image
+ Normalize Range: true
+ Queue Size: 2
+ Transport Hint: raw
+ Unreliable: false
+ Value: true
+ - Class: jsk_rviz_plugin/BoundingBoxArray
+ Enabled: true
+ Name: BoundingBoxArray
+ Topic: /kitti_visualizer/object/bounding_boxes
+ Unreliable: false
+ Value: true
+ alpha: 0.800000011920929
+ color: 25; 255; 0
+ coloring: Auto
+ line width: 0.20000000298023224
+ only edge: true
+ show coords: false
+ - Alpha: 1
+ Autocompute Intensity Bounds: true
+ Autocompute Value Bounds:
+ Max Value: 10
+ Min Value: -10
+ Value: true
+ Axis: Z
+ Channel Name: intensity
+ Class: rviz/PointCloud2
+ Color: 255; 255; 255
+ Color Transformer: Intensity
+ Decay Time: 0
+ Enabled: true
+ Invert Rainbow: false
+ Max Color: 255; 255; 255
+ Max Intensity: 0.6600000262260437
+ Min Color: 0; 0; 0
+ Min Intensity: 0
+ Name: PointCloud2
+ Position Transformer: XYZ
+ Queue Size: 10
+ Selectable: true
+ Size (Pixels): 3
+ Size (m): 0.05000000074505806
+ Style: Flat Squares
+ Topic: /kitti_visualizer/object/objects_vis
+ Unreliable: false
+ Use Fixed Frame: true
+ Use rainbow: true
+ Value: true
+ Enabled: true
+ Global Options:
+ Background Color: 48; 48; 48
+ Default Light: true
+ Fixed Frame: base_link
+ Frame Rate: 30
+ Name: root
+ Tools:
+ - Class: rviz/Interact
+ Hide Inactive Objects: true
+ - Class: rviz/MoveCamera
+ - Class: rviz/Select
+ - Class: rviz/FocusCamera
+ - Class: rviz/Measure
+ - Class: rviz/SetInitialPose
+ Theta std deviation: 0.2617993950843811
+ Topic: /initialpose
+ X std deviation: 0.5
+ Y std deviation: 0.5
+ - Class: rviz/SetGoal
+ Topic: /move_base_simple/goal
+ - Class: rviz/PublishPoint
+ Single click: true
+ Topic: /clicked_point
+ Value: true
+ Views:
+ Current:
+ Class: rviz/XYOrbit
+ Distance: 14.051715850830078
+ Enable Stereo Rendering:
+ Stereo Eye Separation: 0.05999999865889549
+ Stereo Focal Distance: 1
+ Swap Stereo Eyes: false
+ Value: false
+ Focal Point:
+ X: 0
+ Y: 0
+ Z: 0
+ Focal Shape Fixed Size: true
+ Focal Shape Size: 0.05000000074505806
+ Invert Z Axis: false
+ Name: Current View
+ Near Clip Distance: 0.009999999776482582
+ Pitch: 0.6003983020782471
+ Target Frame:
+ Value: XYOrbit (rviz)
+ Yaw: 3.055414915084839
+ Saved: ~
+Window Geometry:
+ Button:
+ collapsed: false
+ Displays:
+ collapsed: false
+ Height: 780
+ Hide Left Dock: false
+ Hide Right Dock: true
+ Image:
+ collapsed: false
+ QMainWindow State: 000000ff00000000fd0000000400000000000002980000026efc020000000bfb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d00000138000000c900fffffffb0000001200530065006c0065006300740069006f006e00000001b4000000810000005c00fffffffb0000000a0049006d006100670065010000017b000000d70000001600fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000c0042007500740074006f006e0100000258000000530000004100ffffff000000010000010f0000026efc0200000002fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073000000003d0000026e000000a400ffffff0000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000005a00000003efc0100000002fb0000000800540069006d00650100000000000005a0000002eb00fffffffb0000000800540069006d00650100000000000004500000000000000000000003020000026e00000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
+ Selection:
+ collapsed: false
+ Time:
+ collapsed: false
+ Tool Properties:
+ collapsed: false
+ Views:
+ collapsed: true
+ Width: 1440
+ X: 1440
+ Y: 27
diff --git a/gating_control_ros/rviz/track_visualizer.rviz b/gating_control_ros/rviz/track_visualizer.rviz
new file mode 100644
index 0000000..0598a9f
--- /dev/null
+++ b/gating_control_ros/rviz/track_visualizer.rviz
@@ -0,0 +1,172 @@
+Panels:
+ - Class: rviz/Displays
+ Help Height: 0
+ Name: Displays
+ Property Tree Widget:
+ Expanded:
+ - /BoundingBoxArray2
+ Splitter Ratio: 0.5
+ Tree Height: 399
+ - Class: rviz/Selection
+ Name: Selection
+ - Class: rviz/Tool Properties
+ Expanded:
+ - /2D Pose Estimate1
+ - /2D Nav Goal1
+ - /Publish Point1
+ Name: Tool Properties
+ Splitter Ratio: 0.588679016
+ - Class: rviz/Views
+ Expanded:
+ - /Current View1
+ Name: Views
+ Splitter Ratio: 0.5
+ - Class: rviz/Time
+ Experimental: false
+ Name: Time
+ SyncMode: 0
+ SyncSource: PointCloud2
+ - Class: kitti_visualizer/Button
+ Name: Button
+ - Class: rviz/Selection
+ Name: Selection
+Toolbars:
+ toolButtonStyle: 2
+Visualization Manager:
+ Class: ""
+ Displays:
+ - Class: rviz/Axes
+ Enabled: true
+ Length: 2
+ Name: Axes
+ Radius: 0.100000001
+ Reference Frame:
+ Value: true
+ - Alpha: 1
+ Autocompute Intensity Bounds: true
+ Autocompute Value Bounds:
+ Max Value: 10
+ Min Value: -10
+ Value: true
+ Axis: Z
+ Channel Name: intensity
+ Class: rviz/PointCloud2
+ Color: 255; 255; 255
+ Color Transformer: FlatColor
+ Decay Time: 0
+ Enabled: true
+ Invert Rainbow: false
+ Max Color: 255; 255; 255
+ Max Intensity: 0.99000001
+ Min Color: 0; 0; 0
+ Min Intensity: 0
+ Name: PointCloud2
+ Position Transformer: XYZ
+ Queue Size: 10
+ Selectable: true
+ Size (Pixels): 3
+ Size (m): 0.150000006
+ Style: Flat Squares
+ Topic: /kitti_visualizer/object/point_cloud
+ Unreliable: false
+ Use Fixed Frame: true
+ Use rainbow: true
+ Value: true
+ - Class: rviz/Image
+ Enabled: true
+ Image Topic: /kitti_visualizer/object/image
+ Max Value: 1
+ Median window: 5
+ Min Value: 0
+ Name: Image
+ Normalize Range: true
+ Queue Size: 2
+ Transport Hint: raw
+ Unreliable: false
+ Value: true
+ - Class: jsk_rviz_plugin/BoundingBoxArray
+ Enabled: true
+ Name: BoundingBoxArray
+ Topic: /kitti_visualizer/object/bounding_boxes
+ Unreliable: false
+ Value: true
+ alpha: 0.800000012
+ color: 25; 255; 0
+ coloring: Flat color
+ line width: 0.300000012
+ only edge: true
+ show coords: false
+ - Class: jsk_rviz_plugin/BoundingBoxArray
+ Enabled: true
+ Name: BoundingBoxArray
+ Topic: /kitti_visualizer/object/tracking_result
+ Unreliable: false
+ Value: true
+ alpha: 0.800000012
+ color: 25; 255; 0
+ coloring: Label
+ line width: 0.00499999989
+ only edge: false
+ show coords: false
+ Enabled: true
+ Global Options:
+ Background Color: 48; 48; 48
+ Default Light: true
+ Fixed Frame: base_link
+ Frame Rate: 30
+ Name: root
+ Tools:
+ - Class: rviz/Interact
+ Hide Inactive Objects: true
+ - Class: rviz/MoveCamera
+ - Class: rviz/Select
+ - Class: rviz/FocusCamera
+ - Class: rviz/Measure
+ - Class: rviz/SetInitialPose
+ Topic: /initialpose
+ - Class: rviz/SetGoal
+ Topic: /move_base_simple/goal
+ - Class: rviz/PublishPoint
+ Single click: true
+ Topic: /clicked_point
+ Value: true
+ Views:
+ Current:
+ Angle: 0.0149983382
+ Class: rviz/TopDownOrtho
+ Enable Stereo Rendering:
+ Stereo Eye Separation: 0.0599999987
+ Stereo Focal Distance: 1
+ Swap Stereo Eyes: false
+ Value: false
+ Invert Z Axis: false
+ Name: Current View
+ Near Clip Distance: 0.00999999978
+ Scale: 17.8780117
+ Target Frame:
+ Value: TopDownOrtho (rviz)
+ X: 35.8659821
+ Y: 4.4915905
+ Saved: ~
+Window Geometry:
+ Button:
+ collapsed: false
+ Displays:
+ collapsed: false
+ Height: 1000
+ Hide Left Dock: false
+ Hide Right Dock: true
+ Image:
+ collapsed: false
+ QMainWindow State: 000000ff00000000fd0000000400000000000001d30000035efc020000000bfb0000001200530065006c0065006300740069006f006e00000001e10000009b0000006100fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c0061007900730100000028000001d0000000d700fffffffb0000001200530065006c0065006300740069006f006e00000001b4000000810000006100fffffffb0000000a0049006d00610067006501000001fe0000013f0000001600fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000c0042007500740074006f006e0100000343000000430000004300ffffff000000010000010f0000035efc0200000002fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a0056006900650077007300000000280000035e000000ad00ffffff0000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000005ff0000003efc0100000002fb0000000800540069006d00650100000000000005ff0000030000fffffffb0000000800540069006d00650100000000000004500000000000000000000004260000035e00000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
+ Selection:
+ collapsed: false
+ Time:
+ collapsed: false
+ Tool Properties:
+ collapsed: false
+ Views:
+ collapsed: true
+ Width: 1535
+ X: 65
+ Y: 24
diff --git a/gating_control_ros/rviz_command_button.xml b/gating_control_ros/rviz_command_button.xml
new file mode 100644
index 0000000..dfb8d7e
--- /dev/null
+++ b/gating_control_ros/rviz_command_button.xml
@@ -0,0 +1,9 @@
+<library path="lib/libkitti_visualizer">
+  <!-- The library path above is assumed from the package name -->
+  <class name="kitti_visualizer/Button"
+         type="kitti_visualizer::CommandButton"
+         base_class_type="rviz::Panel">
+    <description>
+      A panel widget with command buttons and a frame-string field for
+      stepping through KITTI frames.
+    </description>
+  </class>
+</library>
diff --git a/gating_control_ros/src/common/rviz_command_button.cc b/gating_control_ros/src/common/rviz_command_button.cc
new file mode 100644
index 0000000..4f96ddc
--- /dev/null
+++ b/gating_control_ros/src/common/rviz_command_button.cc
@@ -0,0 +1,79 @@
+#include "common/rviz_command_button.h"
+
+namespace kitti_visualizer {
+// We start with the constructor, doing the standard Qt thing of
+// passing the optional *parent* argument on to the superclass
+// constructor.
+CommandButton::CommandButton(QWidget* parent) : rviz::Panel(parent) {
+ // Get button parameters
+ std::string current_path = ros::package::getPath("kitti_visualizer");
+ std::string button_layout_file = current_path + "/param/button_layout.yaml";
+ YAML::Node node = YAML::LoadFile(button_layout_file);
+ std::vector<std::string> button_names;
+ for (std::size_t i = 0; i < node.size(); ++i) {
+ button_names.push_back(node[i]["name"].as<std::string>());
+ }
+
+ // Lay out buttons using QPushButton in a QHBoxLayout.
+ QHBoxLayout* layout = new QHBoxLayout;
+ QSignalMapper* signal_mapper = new QSignalMapper(this);
+ for (auto button_name : button_names) {
+ QPushButton* button = new QPushButton(QString::fromStdString(button_name));
+ layout->addWidget(button);
+ button->setEnabled(true);
+ connect(button, SIGNAL(clicked()), signal_mapper, SLOT(map()));
+ signal_mapper->setMapping(button, QString::fromStdString(button_name));
+ }
+ connect(signal_mapper, SIGNAL(mapped(QString)), this,
+ SLOT(ButtonResponse(QString)));
+
+ // Lay out the "frame string" text entry field using a QLabel and a QLineEdit
+ // in a QHBoxLayout.
+ QHBoxLayout* frame_layout = new QHBoxLayout;
+ frame_layout->addWidget(new QLabel("Frame string:"));
+ output_frame_editor_ = new QLineEdit;
+ frame_layout->addWidget(output_frame_editor_);
+ output_frame_editor_->setPlaceholderText("000000");
+ output_frame_editor_->setText("000000");
+ layout->addLayout(frame_layout);
+ connect(output_frame_editor_, SIGNAL(returnPressed()), this,
+ SLOT(UpdateFrame()));
+
+ setLayout(layout);
+
+ // Publisher
+ command_publisher_ =
+ nh_.advertise<std_msgs::String>("/kitti_visualizer/command_button", 1);
+}
+
+void CommandButton::ButtonResponse(QString command) {
+ std_msgs::String command_msg;
+ command_msg.data = command.toStdString();
+ command_publisher_.publish(command_msg);
+}
+
+void CommandButton::UpdateFrame() {
+ std_msgs::String command_msg;
+ command_msg.data = output_frame_editor_->text().toStdString();
+ command_publisher_.publish(command_msg);
+ std::cout << output_frame_editor_->text().toStdString() << std::endl;
+}
+
+// Save all configuration data from this panel to the given Config object.
+void CommandButton::save(rviz::Config config) const {
+ rviz::Panel::save(config);
+}
+
+// Load all configuration data for this panel from the given Config object.
+void CommandButton::load(const rviz::Config& config) {
+ rviz::Panel::load(config);
+}
+
+} // namespace kitti_visualizer
+
+// Tell pluginlib about this class. Every class which should be
+// loadable by pluginlib::ClassLoader must have these two lines
+// compiled in its .cpp file, outside of any namespace scope.
+#include <pluginlib/class_list_macros.h>
+PLUGINLIB_EXPORT_CLASS(kitti_visualizer::CommandButton, rviz::Panel)
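+// rviz discovers the panel through rviz_command_button.xml (exported in
+// package.xml) and lists it as "kitti_visualizer/Button" in the
+// Panels / Add New Panel dialog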
diff --git a/gating_control_ros/src/object_visualizer/object_visualizer.cc b/gating_control_ros/src/object_visualizer/object_visualizer.cc
new file mode 100644
index 0000000..f2e6edc
--- /dev/null
+++ b/gating_control_ros/src/object_visualizer/object_visualizer.cc
@@ -0,0 +1,453 @@
+#include "object_visualizer/object_visualizer.h"
+
+// .msg from autoware_tracker
+#include "autoware_tracker/DetectedObjectArray.h"
+#include "autoware_tracker/DetectedObject.h"
+
+namespace kitti_visualizer {
+
+ObjectVisualizer::ObjectVisualizer(ros::NodeHandle nh, ros::NodeHandle pnh)
+ : nh_(nh), pnh_(pnh) {
+ pnh_.param<std::string>("data_path", data_path_, "");
+ pnh_.param<std::string>("dataset", dataset_, "");
+ pnh_.param("frame_size", frame_size_, 0);
+ pnh_.param("current_frame", current_frame_, 0);
+
+ // Check that the file counts are valid
+ AssertFilesNumber();
+
+ // Subscriber
+ sub_command_button_ =
+ nh_.subscribe("/kitti_visualizer/command_button", 2,
+ &ObjectVisualizer::CommandButtonCallback, this);
+
+ // Publisher
+ pub_point_cloud_ = nh_.advertise<pcl::PointCloud<pcl::PointXYZI> >(
+ "kitti_visualizer/object/point_cloud", 2);
+ pub_image_ =
+ nh_.advertise<sensor_msgs::Image>("kitti_visualizer/object/image", 2);
+ pub_bounding_boxes_ = nh_.advertise<jsk_recognition_msgs::BoundingBoxArray>(
+ "kitti_visualizer/object/bounding_boxes", 2);
+
+ // Publisher: objects & objects_vis
+ pub_objects_ = nh_.advertise<autoware_tracker::DetectedObjectArray>(
+ "kitti_visualizer/object/objects", 1000);
+ pub_objects_vis_ = nh_.advertise<pcl::PointCloud<pcl::PointXYZI> >(
+ "kitti_visualizer/object/objects_vis", 2);
+ // Statistics
+ count_car_ = 0;
+ count_pedestrian_ = 0;
+ count_cyclist_ = 0;
+}
+
+
+void ObjectVisualizer::Visualizer() {
+ if (dataset_ == "training") {
+ for (int frame = 0; frame < frame_size_; frame++) {
+ // Get current file name
+ std::ostringstream file_prefix;
+ file_prefix << std::setfill('0') << std::setw(6) << current_frame_;
+ ROS_INFO("Visualizing frame %s ...", file_prefix.str().c_str());
+
+ // Visualize point cloud
+ PointCloudVisualizer(file_prefix.str(), pub_point_cloud_);
+
+ // Visualize image
+ //ImageVisualizer(file_prefix.str(), pub_image_);
+
+ // Visualize 3D bounding boxes
+ //BoundingBoxesVisualizer(file_prefix.str(), pub_bounding_boxes_);
+
+ //current_frame_ = (frame_size_ + current_frame_ + 1) % frame_size_;
+ current_frame_++;
+ }
+ std::cerr << "The Number of Car: " << count_car_ << std::endl;
+ std::cerr << "The Number of Pedestrian: " << count_pedestrian_ << std::endl;
+ std::cerr << "The Number of Cyclist: " << count_cyclist_ << std::endl;
+ } else if (dataset_ == "testing") {
+
+ // Get current file name
+ std::ostringstream file_prefix;
+ file_prefix << std::setfill('0') << std::setw(6) << current_frame_;
+ ROS_INFO("Visualizing frame %s ...", file_prefix.str().c_str());
+
+ // Visualize image
+ ImageVisualizer(file_prefix.str(), pub_image_);
+
+ // Visualize 3D bounding boxes
+ BoundingBoxesVisualizer(file_prefix.str(), pub_bounding_boxes_);
+
+ // Visualize point cloud
+ PointCloudVisualizer(file_prefix.str(), pub_point_cloud_);
+
+ // for (int frame = 0; frame < 200; frame++) {
+ // // Get current file name
+ // std::ostringstream file_prefix;
+ // file_prefix << std::setfill('0') << std::setw(6) << current_frame_;
+ // ROS_INFO("Visualizing frame %s ...", file_prefix.str().c_str());
+ //
+ // // Visualize image
+ // ImageVisualizer(file_prefix.str(), pub_image_);
+ //
+ // // Visualize 3D bounding boxes
+ // BoundingBoxesVisualizer(file_prefix.str(), pub_bounding_boxes_);
+ //
+ // // Visualize point cloud
+ // PointCloudVisualizer(file_prefix.str(), pub_point_cloud_);
+ //
+ // current_frame_++;
+ // }
+ }
+}
+
+void ObjectVisualizer::PointCloudVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher) {
+ // Read point cloud
+ std::string cloud_file_name =
+ data_path_ + dataset_ + "/velodyne/" + file_prefix + ".bin";
+ pcl::PointCloud<pcl::PointXYZI>::Ptr raw_cloud(
+ new pcl::PointCloud<pcl::PointXYZI>);
+ ReadPointCloud(cloud_file_name, raw_cloud);
+
+ // Publish point cloud
+ //raw_cloud->header.frame_id = "base_link";
+ raw_cloud->header.frame_id = "velodyne";
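+ // pcl_ros serializes the pcl::PointCloud directly to a
+ // sensor_msgs/PointCloud2 message on publish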
+ publisher.publish(raw_cloud);
+
+ // Publish extracted objects
+ ROS_INFO("Extracting frame: %s ...", file_prefix.c_str());
+ ROS_INFO("Visualizing frame %s ... Velodyne", file_prefix.c_str());
+
+ std::vector<std::vector<float> > detections = ParseDetections(file_prefix);
+ jsk_recognition_msgs::BoundingBoxArray bounding_box_array =
+ TransformBoundingBoxes(detections, file_prefix);
+
+ autoware_tracker::DetectedObjectArray detected_objects;
+ pcl::PointCloud<pcl::PointXYZI>::Ptr cloud_count {new pcl::PointCloud<pcl::PointXYZI>};
+
+ for (const auto bounding_box : bounding_box_array.boxes) {
+ /* Rotate the points first and then build the box */
+ Eigen::Vector3f min_raw(bounding_box.pose.position.x - (bounding_box.dimensions.x / 2.0),
+ bounding_box.pose.position.y - (bounding_box.dimensions.y / 2.0),
+ bounding_box.pose.position.z - (bounding_box.dimensions.z / 2.0));
+ Eigen::Vector3f max_raw(bounding_box.pose.position.x + (bounding_box.dimensions.x / 2.0),
+ bounding_box.pose.position.y + (bounding_box.dimensions.y / 2.0),
+ bounding_box.pose.position.z + (bounding_box.dimensions.z / 2.0));
+ tf::Quaternion quat;
+ tf::quaternionMsgToTF(bounding_box.pose.orientation, quat);
+ double roll, pitch, yaw;
+ tf::Matrix3x3(quat).getRPY(roll, pitch, yaw);
+ float min_x = (min_raw(0) - bounding_box.pose.position.x) * cos(yaw) - (min_raw(1) - bounding_box.pose.position.y) * sin(yaw) + bounding_box.pose.position.x;
+ float min_y = (min_raw(1) - bounding_box.pose.position.y) * cos(yaw) + (min_raw(0) - bounding_box.pose.position.x) * sin(yaw) + bounding_box.pose.position.y;
+ float max_x = (max_raw(0) - bounding_box.pose.position.x) * cos(yaw) - (max_raw(1) - bounding_box.pose.position.y) * sin(yaw) + bounding_box.pose.position.x;
+ float max_y = (max_raw(1) - bounding_box.pose.position.y) * cos(yaw) + (max_raw(0) - bounding_box.pose.position.x) * sin(yaw) + bounding_box.pose.position.y;
+ crop_box.setMin(
+ Eigen::Vector4f((min_x<max_x ? min_x : max_x), (min_y<max_y ? min_y : max_y), min_raw(2), 1.0));
+ crop_box.setMax(
+ Eigen::Vector4f((min_x>max_x ? min_x : max_x), (min_y>max_y ? min_y : max_y), max_raw(2), 1.0));
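+ // The yaw rotation above maps the two opposite box corners into the
+ // velodyne frame; the per-axis min/max of the rotated corners then
+ // gives an axis-aligned region for the CropBox filter that
+ // approximates the rotated box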
+
+ /* Build the box first, then rotate the box
+ Eigen::Vector3f min_raw(bounding_box.pose.position.x - (bounding_box.dimensions.x / 2.0),
+ bounding_box.pose.position.y - (bounding_box.dimensions.y / 2.0),
+ bounding_box.pose.position.z - (bounding_box.dimensions.z / 2.0));
+ Eigen::Vector3f max_raw(bounding_box.pose.position.x + (bounding_box.dimensions.x / 2.0),
+ bounding_box.pose.position.y + (bounding_box.dimensions.y / 2.0),
+ bounding_box.pose.position.z + (bounding_box.dimensions.z / 2.0));
+ tf::Quaternion quat;
+ tf::quaternionMsgToTF(bounding_box.pose.orientation, quat);
+ double roll, pitch, yaw;
+ tf::Matrix3x3(quat).getRPY(roll, pitch, yaw);
+ Eigen::Vector3d eigen_euler;
+ eigen_euler << roll, pitch, yaw;
+ Eigen::Matrix3d rotation;
+ // rotation = Eigen::AngleAxisd(eigen_euler[2], Eigen::Vector3d::UnitZ()) *
+ // Eigen::AngleAxisd(eigen_euler[1], Eigen::Vector3d::UnitY()) *
+ // Eigen::AngleAxisd(eigen_euler[0], Eigen::Vector3d::UnitX());
+ // Eigen::Vector3d eigen_ret_euler = rotation.eulerAngles(0,1,2);
+ // Eigen::Vector3f eigen_ret_euler_f = eigen_ret_euler.cast<float>();
+ Eigen::Vector3f eigen_ret_euler_f = eigen_euler.cast<float>();
+
+ crop_box.setRotation(eigen_ret_euler_f);
+ crop_box.setMin(
+ Eigen::Vector4f(min_raw(0), min_raw(1), min_raw(2), 1.0));
+ crop_box.setMax(
+ Eigen::Vector4f(max_raw(0), max_raw(1), max_raw(2), 1.0));
+ */
+
+ // std::cerr << "The Min_Raw: " << min_raw(0) << " " << min_raw(1) << " " << min_raw(2) << std::endl;
+ // std::cerr << "The Max_Raw: " << max_raw(0) << " " << max_raw(1) << " " << max_raw(2) << std::endl;
+ // std::cerr << "The Min_New: " << min_x << " " << min_y << " " << min_raw(2) << std::endl;
+ // std::cerr << "The Max_New: " << max_x << " " << max_y << " " << max_raw(2) << std::endl;
+ // std::cerr << "The Min_Corr: " << (min_x_new<max_x_new ? min_x_new : max_x_new) << " " << (min_y_new<max_y_new ? min_y_new : max_y_new) << " " << min_raw(2) << std::endl;
+ // std::cerr << "The Max_Corr: " << (min_x_new>max_x_new ? min_x_new : max_x_new) << " " << (min_y_new>max_y_new ? min_y_new : max_y_new) << " " << max_raw(2) << std::endl;
+
+ crop_box.setInputCloud(raw_cloud);
+ crop_box.filter(*cloud_final);
+
+ // Publish point cloud count
+ *cloud_count = *cloud_count + *cloud_final;
+ //cloud_final->header.frame_id = "base_link";
+ //cloud_count->header.frame_id = "base_link";
+ cloud_final->header.frame_id = "velodyne";
+ cloud_count->header.frame_id = "velodyne";
+ //cloud_count->header.stamp = ros::Time::now();
+
+ pcl::toROSMsg(*cloud_final,cloud_out);
+ //cloud_out.header.frame_id = "base_link";
+ cloud_out.header.frame_id = "velodyne";
+
+ autoware_tracker::DetectedObject detected_object;
+ detected_object.label = std::to_string(bounding_box.label);
+ detected_object.score = 1;
+ detected_object.space_frame = cloud_out.header.frame_id;
+ detected_object.pose = bounding_box.pose;
+ detected_object.dimensions = bounding_box.dimensions;
+ detected_object.pointcloud = cloud_out;
+ detected_object.valid = true;
+
+ detected_objects.objects.push_back(detected_object);
+ }
+ if(cloud_count->points.size()!= 0) {
+ pub_objects_vis_.publish(cloud_count);
+ }
+ if(detected_objects.objects.size() != 0) {
+ pub_objects_.publish(detected_objects);
+ }
+ if (dataset_ == "training") {
+ usleep(80000); // Playback delay when visualizing the training set
+ } else if (dataset_ == "testing") {
+ usleep(15000000); // Playback delay when visualizing the testing set
+ }
+}
+
+void ObjectVisualizer::ImageVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher) {
+ ROS_INFO("Visualizing frame %s ... Image", file_prefix.c_str());
+ // Read image
+ std::string image_file_name =
+ data_path_ + dataset_ + "/image_2/" + file_prefix + ".png";
+ cv::Mat raw_image = cv::imread(image_file_name.c_str());
+
+ // Draw 2D bounding boxes in image
+ Draw2DBoundingBoxes(file_prefix, raw_image);
+
+ // Publish image
+ sensor_msgs::ImagePtr raw_image_msg =
+ cv_bridge::CvImage(std_msgs::Header(), "bgr8", raw_image).toImageMsg();
+ //raw_image_msg->header.frame_id = "base_link";
+ raw_image_msg->header.frame_id = "velodyne";
+ publisher.publish(raw_image_msg);
+}
+
+void ObjectVisualizer::Draw2DBoundingBoxes(const std::string& file_prefix,
+ cv::Mat& raw_image) {
+ // Read bounding boxes data
+ std::vector<std::vector<float> > detections = ParseDetections(file_prefix);
+
+ // Draw bounding boxes in image
+ for (const auto detection : detections) {
+ cv::rectangle(raw_image, cv::Point(detection[3], detection[4]),
+ cv::Point(detection[5], detection[6]), cv::Scalar(0, 255, 0),
+ 2, 8, 0);
+ }
+}
+
+void ObjectVisualizer::BoundingBoxesVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher) {
+ // Read bounding boxes data
+ std::vector<std::vector<float> > detections = ParseDetections(file_prefix);
+
+ // Transform bounding boxes to jsk_recognition_msgs
+ jsk_recognition_msgs::BoundingBoxArray bounding_box_array =
+ TransformBoundingBoxes(detections, file_prefix);
+
+ // Publish bounding boxes
+ //bounding_box_array.header.frame_id = "base_link";
+ bounding_box_array.header.frame_id = "velodyne";
+ publisher.publish(bounding_box_array);
+}
+
+jsk_recognition_msgs::BoundingBoxArray ObjectVisualizer::TransformBoundingBoxes(
+ const std::vector<std::vector<float> > detections,
+ const std::string& file_prefix) {
+ // Read transform matrices from calib file
+ std::string calib_file_name =
+ data_path_ + dataset_ + "/calib/" + file_prefix + ".txt";
+ Eigen::MatrixXd trans_velo_to_cam = Eigen::MatrixXd::Identity(4, 4);
+ ReadCalibMatrix(calib_file_name, "Tr_velo_to_cam:", trans_velo_to_cam);
+ Eigen::MatrixXd trans_cam_to_rect = Eigen::MatrixXd::Identity(4, 4);
+ ReadCalibMatrix(calib_file_name, "R0_rect:", trans_cam_to_rect);
+
+ // Set bounding boxes to jsk_recognition_msgs::BoundingBoxArray
+ jsk_recognition_msgs::BoundingBoxArray bounding_box_array;
+ srand(time(0)); // Use the time function to get a "seed" value for srand
+ for (const auto detection : detections) {
+ jsk_recognition_msgs::BoundingBox bounding_box;
+ // Bounding box position
+ Eigen::Vector4d rect_position(detection[10], detection[11], detection[12],
+ 1.0);
+ Eigen::MatrixXd velo_position = trans_velo_to_cam.inverse() *
+ trans_cam_to_rect.inverse() * rect_position;
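+ // KITTI labels give the box bottom-center in rectified camera
+ // coordinates: x_velo = Tr_velo_to_cam^-1 * R0_rect^-1 * x_rect;
+ // the z offset below shifts up by half the height to the box center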
+ bounding_box.pose.position.x = velo_position(0);
+ bounding_box.pose.position.y = velo_position(1);
+ bounding_box.pose.position.z = velo_position(2) + detection[7] / 2.0;
+
+ // Rui: only learn objects whose distance from the center is less than 50
+ //std::cerr << "Distance: " << sqrt(pow(bounding_box.pose.position.x, 2) + pow(bounding_box.pose.position.y, 2)) << std::endl;
+ //if (sqrt(pow(bounding_box.pose.position.x, 2) + pow(bounding_box.pose.position.y, 2)) > 50) continue;
+
+ // Bounding box orientation
+ tf::Quaternion bounding_box_quat =
+ tf::createQuaternionFromRPY(0.0, 0.0, 0.0 - detection[13]);
+ tf::quaternionTFToMsg(bounding_box_quat, bounding_box.pose.orientation);
+ // Bounding box dimensions
+ bounding_box.dimensions.x = detection[8];
+ bounding_box.dimensions.y = detection[9];
+ bounding_box.dimensions.z = detection[7];
+
+ // Bounding box header
+ bounding_box.header.stamp = ros::Time::now();
+ //bounding_box.header.frame_id = "base_link";
+ bounding_box.header.frame_id = "velodyne";
+
+ // Bounding box label
+ bounding_box.label = detection[14];
+ if(bounding_box.label == 0) {
+ /* normal_sampling */
+ count_car_++;
+ bounding_box_array.boxes.push_back(bounding_box);
+
+ /* down_sampling
+ int i = int(rand()%15);
+ if(i==1) {
+ count_car_++;
+ bounding_box_array.boxes.push_back(bounding_box);
+ } */
+ } else if(bounding_box.label == 1) {
+ /* normal_sampling */
+ count_pedestrian_++;
+ bounding_box_array.boxes.push_back(bounding_box);
+
+ /* over_sampling
+ count_pedestrian_ = count_pedestrian_ + 5;
+ for(int i=0; i<5; i++){
+ bounding_box_array.boxes.push_back(bounding_box);
+ } */
+ /* down_sampling
+ int i= int(rand()%3);
+ if(i==1) {
+ count_pedestrian_++;
+ bounding_box_array.boxes.push_back(bounding_box);
+ } */
+ } else if(bounding_box.label == 2) {
+ count_cyclist_++;
+ bounding_box_array.boxes.push_back(bounding_box);
+ /* over_sampling
+ count_cyclist_ = count_cyclist_ + 15;
+ for(int i=0; i<15; i++) {
+ bounding_box_array.boxes.push_back(bounding_box);
+ } */
+
+ }
+ //bounding_box_array.boxes.push_back(bounding_box);
+ }
+
+ return bounding_box_array;
+}
+
+std::vector<std::vector<float> > ObjectVisualizer::ParseDetections(
+ const std::string& file_prefix) {
+ // Open bounding boxes file
+ std::string detections_file_name;
+ if (dataset_ == "training") {
+ detections_file_name =
+ data_path_ + dataset_ + "/label_2/" + file_prefix + ".txt";
+ } else if (dataset_ == "testing") {
+ // detections_file_name =
+ // data_path_ + dataset_ + "/results/" + file_prefix + ".txt";
+ detections_file_name =
+ data_path_ + dataset_ + "/label_2/" + file_prefix + ".txt";
+ }
+ std::ifstream detections_file(detections_file_name);
+ if (!detections_file) {
+ ROS_ERROR("File %s does not exist", detections_file_name.c_str());
+ ros::shutdown();
+ }
+
+ // Parse objects data
+ std::vector<std::vector<float> > detections;
+ std::string line_str;
+ while (getline(detections_file, line_str)) {
+ // Store std::string into std::stringstream
+ std::stringstream line_ss(line_str);
+ // Parse object type
+ std::string object_type;
+ getline(line_ss, object_type, ' ');
+ if (object_type == "DontCare" || object_type == "Misc" || object_type == "Tram") continue;
+ // Parse object data
+ std::vector<float> detection;
+ std::string str;
+ while (getline(line_ss, str, ' ')) {
+ detection.push_back(boost::lexical_cast<float>(str));
+ }
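+ // Index map after the leading type string (KITTI label format):
+ // 0 truncated, 1 occluded, 2 alpha, 3-6 2D bbox (x1 y1 x2 y2),
+ // 7-9 dimensions (h w l), 10-12 location (x y z), 13 rotation_y;
+ // index 14 is the class id appended below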
+ /* ---Rui--- */
+ if(object_type == "Car"|| object_type == "Van" || object_type == "Truck") {
+ detection.push_back(0);
+ } else if(object_type == "Pedestrian" || object_type == "Person_sitting") {
+ detection.push_back(1);
+ } else if(object_type == "Cyclist") {
+ detection.push_back(2);
+ }
+
+ detections.push_back(detection);
+ }
+ return detections;
+}
+
+void ObjectVisualizer::CommandButtonCallback(
+ const std_msgs::String::ConstPtr& in_command) {
+ // Parse frame number from command
+ if (in_command->data == "Next") {
+ //current_frame_ = (frame_size_ + current_frame_ + 1) % frame_size_;
+ current_frame_++;
+ } else if (in_command->data == "Prev") {
+ //current_frame_ = (frame_size_ + current_frame_ - 1) % frame_size_;
+ current_frame_--;
+ } else {
+ int frame = std::stoi(in_command->data);
+ if (frame >= 0 && frame < frame_size_)
+ current_frame_ = frame;
+ else
+ ROS_ERROR("No frame %s", in_command->data.c_str());
+ }
+
+ // Visualize object data
+ Visualizer();
+}
+
+void ObjectVisualizer::AssertFilesNumber() {
+ // Assert velodyne files numbers
+ ROS_ASSERT(FolderFilesNumber(data_path_ + dataset_ + "/velodyne") ==
+ frame_size_);
+ // Assert image_2 files numbers
+ ROS_ASSERT(FolderFilesNumber(data_path_ + dataset_ + "/image_2") ==
+ frame_size_);
+ // Assert calib files numbers
+ ROS_ASSERT(FolderFilesNumber(data_path_ + dataset_ + "/calib") ==
+ frame_size_);
+ if (dataset_ == "training") {
+ // Assert label_2 files numbers
+ ROS_ASSERT(FolderFilesNumber(data_path_ + dataset_ + "/label_2") ==
+ frame_size_);
+ } else if (dataset_ == "testing") {
+ // Assert results files numbers
+ // ROS_ASSERT(FolderFilesNumber(data_path_ + dataset_ + "/results") ==
+ // frame_size_);
+ ROS_ASSERT(FolderFilesNumber(data_path_ + dataset_ + "/label_2") ==
+ frame_size_);
+ } else {
+ ROS_ERROR("Dataset input error: %s", dataset_.c_str());
+ ros::shutdown();
+ }
+}
+} // namespace kitti_visualizer
diff --git a/gating_control_ros/src/object_visualizer/object_visualizer.cc.old b/gating_control_ros/src/object_visualizer/object_visualizer.cc.old
new file mode 100644
index 0000000..c0ee2ce
--- /dev/null
+++ b/gating_control_ros/src/object_visualizer/object_visualizer.cc.old
@@ -0,0 +1,224 @@
+#include "object_visualizer/object_visualizer.h"
+
+namespace kitti_visualizer {
+
+ObjectVisualizer::ObjectVisualizer(ros::NodeHandle nh, ros::NodeHandle pnh)
+ : nh_(nh), pnh_(pnh) {
+ pnh_.param<std::string>("data_path", data_path_, "");
+ pnh_.param<std::string>("dataset", dataset_, "");
+ pnh_.param("frame_size", frame_size_, 0);
+ pnh_.param("current_frame", current_frame_, 0);
+
+ // Check that the file counts are valid
+ AssertFilesNumber();
+
+ // Subscriber
+ sub_command_button_ =
+ nh_.subscribe("/kitti_visualizer/command_button", 2,
+ &ObjectVisualizer::CommandButtonCallback, this);
+
+ // Publisher
+ pub_point_cloud_ = nh_.advertise<pcl::PointCloud<pcl::PointXYZI> >(
+ "kitti_visualizer/object/point_cloud", 2);
+ pub_image_ =
+ nh_.advertise<sensor_msgs::Image>("kitti_visualizer/object/image", 2);
+ pub_bounding_boxes_ = nh_.advertise<jsk_recognition_msgs::BoundingBoxArray>(
+ "kitti_visualizer/object/bounding_boxes", 2);
+}
+
+void ObjectVisualizer::Visualizer() {
+ // Get current file name
+ std::ostringstream file_prefix;
+ file_prefix << std::setfill('0') << std::setw(6) << current_frame_;
+ ROS_INFO("Visualizing frame %s ...", file_prefix.str().c_str());
+
+ // Visualize point cloud
+ PointCloudVisualizer(file_prefix.str(), pub_point_cloud_);
+
+ // Visualize image
+ ImageVisualizer(file_prefix.str(), pub_image_);
+
+ // Visualize 3D bounding boxes
+ BoundingBoxesVisualizer(file_prefix.str(), pub_bounding_boxes_);
+}
+
+void ObjectVisualizer::PointCloudVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher) {
+ // Read point cloud
+ std::string cloud_file_name =
+ data_path_ + dataset_ + "/velodyne/" + file_prefix + ".bin";
+ pcl::PointCloud<pcl::PointXYZI>::Ptr raw_cloud(
+ new pcl::PointCloud<pcl::PointXYZI>);
+ ReadPointCloud(cloud_file_name, raw_cloud);
+
+ // Publish point cloud
+ raw_cloud->header.frame_id = "base_link";
+ publisher.publish(raw_cloud);
+}
+
+void ObjectVisualizer::ImageVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher) {
+ // Read image
+ std::string image_file_name =
+ data_path_ + dataset_ + "/image_2/" + file_prefix + ".png";
+ cv::Mat raw_image = cv::imread(image_file_name.c_str());
+
+ // Draw 2D bounding boxes in image
+ Draw2DBoundingBoxes(file_prefix, raw_image);
+
+ // Publish image
+ sensor_msgs::ImagePtr raw_image_msg =
+ cv_bridge::CvImage(std_msgs::Header(), "bgr8", raw_image).toImageMsg();
+ raw_image_msg->header.frame_id = "base_link";
+ publisher.publish(raw_image_msg);
+}
+
+void ObjectVisualizer::Draw2DBoundingBoxes(const std::string& file_prefix,
+ cv::Mat& raw_image) {
+ // Read bounding boxes data
+ std::vector<std::vector<float> > detections = ParseDetections(file_prefix);
+
+ // Draw bounding boxes in image
+ for (const auto detection : detections) {
+ cv::rectangle(raw_image, cv::Point(detection[3], detection[4]),
+ cv::Point(detection[5], detection[6]), cv::Scalar(0, 255, 0),
+ 2, 8, 0);
+ }
+}
+
+void ObjectVisualizer::BoundingBoxesVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher) {
+ // Read bounding boxes data
+ std::vector<std::vector<float> > detections = ParseDetections(file_prefix);
+
+ // Transform bounding boxes to jsk_recognition_msgs
+ jsk_recognition_msgs::BoundingBoxArray bounding_box_array =
+ TransformBoundingBoxes(detections, file_prefix);
+
+ // Publish bounding boxes
+ bounding_box_array.header.frame_id = "base_link";
+ publisher.publish(bounding_box_array);
+}
+
+jsk_recognition_msgs::BoundingBoxArray ObjectVisualizer::TransformBoundingBoxes(
+    const std::vector<std::vector<float>> detections,
+ const std::string& file_prefix) {
+  // Read transform matrices from calib file
+ std::string calib_file_name =
+ data_path_ + dataset_ + "/calib/" + file_prefix + ".txt";
+ Eigen::MatrixXd trans_velo_to_cam = Eigen::MatrixXd::Identity(4, 4);
+ ReadCalibMatrix(calib_file_name, "Tr_velo_to_cam:", trans_velo_to_cam);
+ Eigen::MatrixXd trans_cam_to_rect = Eigen::MatrixXd::Identity(4, 4);
+ ReadCalibMatrix(calib_file_name, "R0_rect:", trans_cam_to_rect);
+
+ // Set bounding boxes to jsk_recognition_msgs::BoundingBoxArray
+ jsk_recognition_msgs::BoundingBoxArray bounding_box_array;
+  for (const auto& detection : detections) {
+ jsk_recognition_msgs::BoundingBox bounding_box;
+ // Bounding box position
+ Eigen::Vector4d rect_position(detection[10], detection[11], detection[12],
+ 1.0);
+ Eigen::MatrixXd velo_position = trans_velo_to_cam.inverse() *
+ trans_cam_to_rect.inverse() * rect_position;
+ bounding_box.pose.position.x = velo_position(0);
+ bounding_box.pose.position.y = velo_position(1);
+ bounding_box.pose.position.z = velo_position(2) + detection[7] / 2.0;
+ // Bounding box orientation
+ tf::Quaternion bounding_box_quat =
+ tf::createQuaternionFromRPY(0.0, 0.0, 0.0 - detection[13]);
+ tf::quaternionTFToMsg(bounding_box_quat, bounding_box.pose.orientation);
+ // Bounding box dimensions
+ bounding_box.dimensions.x = detection[8];
+ bounding_box.dimensions.y = detection[9];
+ bounding_box.dimensions.z = detection[7];
+ // Bounding box header
+ bounding_box.header.stamp = ros::Time::now();
+ bounding_box.header.frame_id = "base_link";
+ bounding_box_array.boxes.push_back(bounding_box);
+ }
+
+ return bounding_box_array;
+}
+
+std::vector<std::vector<float>> ObjectVisualizer::ParseDetections(
+ const std::string& file_prefix) {
+ // Open bounding boxes file
+ std::string detections_file_name;
+ if (dataset_ == "training") {
+ detections_file_name =
+ data_path_ + dataset_ + "/label_2/" + file_prefix + ".txt";
+ } else if (dataset_ == "testing") {
+ detections_file_name =
+ data_path_ + dataset_ + "/results/" + file_prefix + ".txt";
+ }
+ std::ifstream detections_file(detections_file_name);
+ if (!detections_file) {
+ ROS_ERROR("File %s does not exist", detections_file_name.c_str());
+ ros::shutdown();
+ }
+
+ // Parse objects data
+  std::vector<std::vector<float>> detections;
+ std::string line_str;
+ while (getline(detections_file, line_str)) {
+ // Store std::string into std::stringstream
+ std::stringstream line_ss(line_str);
+ // Parse object type
+ std::string object_type;
+ getline(line_ss, object_type, ' ');
+ if (object_type == "DontCare") continue;
+ // Parse object data
+    std::vector<float> detection;
+ std::string str;
+ while (getline(line_ss, str, ' ')) {
+      detection.push_back(boost::lexical_cast<float>(str));
+ }
+ detections.push_back(detection);
+ }
+
+ return detections;
+}
+
+void ObjectVisualizer::CommandButtonCallback(
+ const std_msgs::String::ConstPtr& in_command) {
+  // Parse frame number from command
+ if (in_command->data == "Next") {
+ current_frame_ = (frame_size_ + current_frame_ + 1) % frame_size_;
+ } else if (in_command->data == "Prev") {
+ current_frame_ = (frame_size_ + current_frame_ - 1) % frame_size_;
+  } else {
+    try {
+      int frame = std::stoi(in_command->data);
+      if (frame >= 0 && frame < frame_size_)
+        current_frame_ = frame;
+      else
+        ROS_ERROR("No frame %s", in_command->data.c_str());
+    } catch (const std::exception&) {
+      // std::stoi throws on non-numeric input; report instead of crashing
+      ROS_ERROR("Invalid frame command %s", in_command->data.c_str());
+    }
+  }
+
+ // Visualize object data
+ Visualizer();
+}
+
+void ObjectVisualizer::AssertFilesNumber() {
+ // Assert velodyne files numbers
+ ROS_ASSERT(FolderFilesNumber(data_path_ + dataset_ + "/velodyne") ==
+ frame_size_);
+ // Assert image_2 files numbers
+ ROS_ASSERT(FolderFilesNumber(data_path_ + dataset_ + "/image_2") ==
+ frame_size_);
+ // Assert calib files numbers
+ ROS_ASSERT(FolderFilesNumber(data_path_ + dataset_ + "/calib") ==
+ frame_size_);
+ if (dataset_ == "training") {
+ // Assert label_2 files numbers
+ ROS_ASSERT(FolderFilesNumber(data_path_ + dataset_ + "/label_2") ==
+ frame_size_);
+ } else if (dataset_ == "testing") {
+ // Assert results files numbers
+ ROS_ASSERT(FolderFilesNumber(data_path_ + dataset_ + "/results") ==
+ frame_size_);
+ } else {
+ ROS_ERROR("Dataset input error: %s", dataset_.c_str());
+ ros::shutdown();
+ }
+}
+}
diff --git a/gating_control_ros/src/object_visualizer/object_visualizer_node.cc b/gating_control_ros/src/object_visualizer/object_visualizer_node.cc
new file mode 100644
index 0000000..6d6d4da
--- /dev/null
+++ b/gating_control_ros/src/object_visualizer/object_visualizer_node.cc
@@ -0,0 +1,17 @@
+#include "object_visualizer/object_visualizer.h"
+
+int main(int argc, char *argv[]) {
+ ros::init(argc, argv, "object_visualizer_node");
+
+ ros::NodeHandle nh;
+ ros::NodeHandle pnh("~");
+
+ kitti_visualizer::ObjectVisualizer object_visualizer(nh, pnh);
+
+ ros::Duration(3).sleep();
+ object_visualizer.Visualizer();
+
+ ros::spin();
+
+ return 0;
+}
diff --git a/gating_control_ros/src/track_visualizer/track_visualizer.cc b/gating_control_ros/src/track_visualizer/track_visualizer.cc
new file mode 100644
index 0000000..36b9073
--- /dev/null
+++ b/gating_control_ros/src/track_visualizer/track_visualizer.cc
@@ -0,0 +1,233 @@
+#include "track_visualizer/track_visualizer.h"
+
+namespace kitti_visualizer {
+
+TrackVisualizer::TrackVisualizer(ros::NodeHandle nh, ros::NodeHandle pnh)
+ : nh_(nh), pnh_(pnh) {
+ pnh_.param("data_path", data_path_, "");
+ pnh_.param("dataset", dataset_, "");
+ pnh_.param("scene", scene_, "");
+ pnh_.param("current_frame", current_frame_, 0);
+
+ // Subscriber
+ sub_command_button_ =
+ nh_.subscribe("/kitti_visualizer/command_button", 2,
+ &TrackVisualizer::CommandButtonCallback, this);
+
+ // Publisher
+  pub_point_cloud_ = nh_.advertise<pcl::PointCloud<pcl::PointXYZI>>(
+      "kitti_visualizer/object/point_cloud", 2);
+  pub_image_ =
+      nh_.advertise<sensor_msgs::Image>("kitti_visualizer/object/image", 2);
+  pub_bounding_boxes_ = nh_.advertise<jsk_recognition_msgs::BoundingBoxArray>(
+      "kitti_visualizer/object/bounding_boxes", 2);
+  pub_tracking_result_ = nh_.advertise<jsk_recognition_msgs::BoundingBoxArray>(
+      "kitti_visualizer/object/tracking_result", 2);
+}
+
+void TrackVisualizer::Visualizer(const int& frame) {
+ // Get current file name
+ std::ostringstream file_prefix;
+ file_prefix << std::setfill('0') << std::setw(6) << frame;
+ ROS_INFO("Visualizing frame %s ...", file_prefix.str().c_str());
+
+ // Visualize point cloud
+ PointCloudVisualizer(file_prefix.str(), pub_point_cloud_);
+
+ // Visualize image
+ ImageVisualizer(file_prefix.str(), pub_image_);
+
+ // Visualize 3D bounding boxes
+ BoundingBoxesVisualizer(file_prefix.str(), "/label_02/", pub_bounding_boxes_);
+
+ // Visualize tracking result
+ BoundingBoxesVisualizer(file_prefix.str(), "/AB3DMOT/", pub_tracking_result_);
+}
+
+void TrackVisualizer::PointCloudVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher) {
+ // Read point cloud
+ std::string cloud_file_name = data_path_ + dataset_ + "/velodyne/" + scene_ +
+ "/" + file_prefix + ".bin";
+  pcl::PointCloud<pcl::PointXYZI>::Ptr raw_cloud(
+      new pcl::PointCloud<pcl::PointXYZI>);
+ ReadPointCloud(cloud_file_name, raw_cloud);
+
+ // Publish point cloud
+ raw_cloud->header.frame_id = "base_link";
+ publisher.publish(raw_cloud);
+}
+
+void TrackVisualizer::ImageVisualizer(const std::string& file_prefix,
+ const ros::Publisher publisher) {
+ // Read image
+ std::string image_file_name = data_path_ + dataset_ + "/image_02/" + scene_ +
+ "/" + file_prefix + ".png";
+ cv::Mat raw_image = cv::imread(image_file_name.c_str());
+
+ // Draw 2D bounding boxes in image
+ Draw2DBoundingBoxes(file_prefix, "/label_02/", raw_image);
+
+ // Publish image
+ sensor_msgs::ImagePtr raw_image_msg =
+ cv_bridge::CvImage(std_msgs::Header(), "bgr8", raw_image).toImageMsg();
+ raw_image_msg->header.frame_id = "base_link";
+ publisher.publish(raw_image_msg);
+}
+
+void TrackVisualizer::Draw2DBoundingBoxes(const std::string& file_prefix,
+ const std::string& folder,
+ cv::Mat& raw_image) {
+ // Read bounding boxes data
+  std::vector<std::vector<float>> tracks = ParseTracks(file_prefix, folder);
+
+ // Draw bounding boxes in image
+  for (const auto& track : tracks) {
+ cv::rectangle(raw_image, cv::Point(track[4], track[5]),
+ cv::Point(track[6], track[7]), cv::Scalar(0, 255, 0), 2, 8,
+ 0);
+ }
+}
+
+void TrackVisualizer::BoundingBoxesVisualizer(const std::string& file_prefix,
+ const std::string& folder,
+ const ros::Publisher publisher) {
+ // Read bounding boxes data
+  std::vector<std::vector<float>> tracks = ParseTracks(file_prefix, folder);
+
+ // Transform bounding boxes to jsk_recognition_msgs
+ jsk_recognition_msgs::BoundingBoxArray bounding_box_array =
+ TransformBoundingBoxes(tracks, file_prefix);
+
+ // Publish bounding boxes
+ bounding_box_array.header.frame_id = "base_link";
+ publisher.publish(bounding_box_array);
+}
+
+jsk_recognition_msgs::BoundingBoxArray TrackVisualizer::TransformBoundingBoxes(
+    const std::vector<std::vector<float>> tracks,
+ const std::string& file_prefix) {
+  // Read transform matrices from calib file
+ std::string calib_file_name =
+ data_path_ + dataset_ + "/calib/" + scene_ + ".txt";
+ Eigen::MatrixXd trans_velo_to_cam = Eigen::MatrixXd::Identity(4, 4);
+ ReadCalibMatrix(calib_file_name, "Tr_velo_cam", trans_velo_to_cam);
+ Eigen::MatrixXd trans_cam_to_rect = Eigen::MatrixXd::Identity(4, 4);
+ ReadCalibMatrix(calib_file_name, "R_rect", trans_cam_to_rect);
+
+ // Set bounding boxes to jsk_recognition_msgs::BoundingBoxArray
+ jsk_recognition_msgs::BoundingBoxArray bounding_box_array;
+  for (const auto& track : tracks) {
+ jsk_recognition_msgs::BoundingBox bounding_box;
+ // Bounding box id
+    bounding_box.label = static_cast<int>(track[0]);
+ // Bounding box position
+ Eigen::Vector4d rect_position(track[11], track[12], track[13], 1.0);
+ Eigen::MatrixXd velo_position = trans_velo_to_cam.inverse() *
+ trans_cam_to_rect.inverse() * rect_position;
+ bounding_box.pose.position.x = velo_position(0);
+ bounding_box.pose.position.y = velo_position(1);
+ bounding_box.pose.position.z = velo_position(2) + track[8] / 2.0;
+ // Bounding box orientation
+ tf::Quaternion bounding_box_quat =
+ tf::createQuaternionFromRPY(0.0, 0.0, 0.0 - track[14]);
+ tf::quaternionTFToMsg(bounding_box_quat, bounding_box.pose.orientation);
+ // Bounding box dimensions
+ bounding_box.dimensions.x = track[9];
+ bounding_box.dimensions.y = track[10];
+ bounding_box.dimensions.z = track[8];
+ // Bounding box header
+ bounding_box.header.stamp = ros::Time::now();
+ bounding_box.header.frame_id = "base_link";
+ bounding_box_array.boxes.push_back(bounding_box);
+ }
+
+ return bounding_box_array;
+}
+
+std::vector<std::vector<float>> TrackVisualizer::ParseTracks(
+ const std::string& file_prefix, const std::string& folder) {
+ // Open bounding boxes file
+ std::string tracks_file_name;
+ if (dataset_ == "training") {
+ tracks_file_name = data_path_ + dataset_ + folder + scene_ + ".txt";
+ } else if (dataset_ == "testing") {
+ tracks_file_name =
+ data_path_ + dataset_ + "/results/" + file_prefix + ".txt";
+ }
+ std::ifstream tracks_file(tracks_file_name);
+ if (!tracks_file) {
+ ROS_ERROR("File %s does not exist", tracks_file_name.c_str());
+ ros::shutdown();
+ }
+
+ // Parse tracks data
+  std::vector<std::vector<float>> tracks;
+ std::string line_str;
+ while (getline(tracks_file, line_str)) {
+ // Store std::string into std::stringstream
+ std::stringstream line_ss(line_str);
+ // Parse frame number
+ std::string frame;
+ getline(line_ss, frame, ' ');
+    if (boost::lexical_cast<int>(frame) !=
+        boost::lexical_cast<int>(file_prefix))
+ continue;
+ // Parse object data
+    std::vector<float> track;
+ // Parse object id
+ std::string object_id;
+ getline(line_ss, object_id, ' ');
+    if (boost::lexical_cast<int>(object_id) < 0) continue;
+    track.push_back(boost::lexical_cast<float>(object_id));
+ // Parse object type
+ std::string object_type;
+ getline(line_ss, object_type, ' ');
+ if (object_type == "DontCare") continue;
+ std::string str;
+ while (getline(line_ss, str, ' ')) {
+      track.push_back(boost::lexical_cast<float>(str));
+ }
+ tracks.push_back(track);
+ }
+
+ return tracks;
+}
+
+void TrackVisualizer::CommandButtonCallback(
+ const std_msgs::String::ConstPtr& in_command) {
+  // Parse frame number from command
+ if (in_command->data == "Next") {
+ current_frame_ = (frame_size_ + current_frame_ + 1) % frame_size_;
+ } else if (in_command->data == "Prev") {
+ current_frame_ = (frame_size_ + current_frame_ - 1) % frame_size_;
+  } else {
+    try {
+      int frame = std::stoi(in_command->data);
+      if (frame >= 0 && frame < frame_size_)
+        current_frame_ = frame;
+      else
+        ROS_ERROR("No frame %s", in_command->data.c_str());
+    } catch (const std::exception&) {
+      // std::stoi throws on non-numeric input; report instead of crashing
+      ROS_ERROR("Invalid frame command %s", in_command->data.c_str());
+    }
+  }
+
+ // Visualize object data
+ Visualizer(current_frame_);
+}
+
+int TrackVisualizer::GetFrameNumber() {
+ // Get velodyne frame number
+ int velo_frame_num =
+ FolderFilesNumber(data_path_ + dataset_ + "/velodyne/" + scene_ + "/");
+
+ // Get image_2 frame number
+ int image_frame_num =
+ FolderFilesNumber(data_path_ + dataset_ + "/image_02/" + scene_ + "/");
+
+ // Assert velodyne and image
+ ROS_ASSERT(velo_frame_num == image_frame_num);
+
+ // Assign
+ frame_size_ = velo_frame_num;
+
+ return velo_frame_num;
+}
+}
diff --git a/gating_control_ros/src/track_visualizer/track_visualizer_node.cc b/gating_control_ros/src/track_visualizer/track_visualizer_node.cc
new file mode 100644
index 0000000..3c62973
--- /dev/null
+++ b/gating_control_ros/src/track_visualizer/track_visualizer_node.cc
@@ -0,0 +1,27 @@
+#include "track_visualizer/track_visualizer.h"
+
+int main(int argc, char *argv[]) {
+ ros::init(argc, argv, "track_visualizer_node");
+
+ ros::NodeHandle nh;
+ ros::NodeHandle pnh("~");
+
+ kitti_visualizer::TrackVisualizer track_visualizer(nh, pnh);
+
+ // get frame number
+ int frame_size = track_visualizer.GetFrameNumber();
+
+ ros::Rate loop_rate(20);
+ int frame = 0;
+ // while (ros::ok()) {
+ // track_visualizer.Visualizer(frame);
+ // frame = (++frame) % frame_size;
+
+ // loop_rate.sleep();
+ // }
+
+ track_visualizer.Visualizer(frame);
+ ros::spin();
+
+ return 0;
+}
diff --git a/kitti_camera_ros/.gitignore b/kitti_camera_ros/.gitignore
new file mode 100644
index 0000000..259148f
--- /dev/null
+++ b/kitti_camera_ros/.gitignore
@@ -0,0 +1,32 @@
+# Prerequisites
+*.d
+
+# Compiled Object files
+*.slo
+*.lo
+*.o
+*.obj
+
+# Precompiled Headers
+*.gch
+*.pch
+
+# Compiled Dynamic libraries
+*.so
+*.dylib
+*.dll
+
+# Fortran module files
+*.mod
+*.smod
+
+# Compiled Static libraries
+*.lai
+*.la
+*.a
+*.lib
+
+# Executables
+*.exe
+*.out
+*.app
diff --git a/kitti_camera_ros/.travis.yml b/kitti_camera_ros/.travis.yml
new file mode 100644
index 0000000..b89aff2
--- /dev/null
+++ b/kitti_camera_ros/.travis.yml
@@ -0,0 +1,21 @@
+dist: bionic
+sudo: required
+language: generic
+cache: apt
+
+install:
+ - sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
+ - sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
+ - sudo apt update
+ - sudo apt install ros-melodic-desktop-full
+ - source /opt/ros/melodic/setup.bash
+ - mkdir -p ~/catkin_ws/src
+ - cd ~/catkin_ws/
+ - catkin_make
+ - source devel/setup.bash
+
+script:
+ - cd ~/catkin_ws/src
+ - git clone -b melodic https://github.com/epan-utbm/kitti_velodyne_ros.git
+ - cd ~/catkin_ws
+ - catkin_make
diff --git a/kitti_camera_ros/CMakeLists.txt b/kitti_camera_ros/CMakeLists.txt
new file mode 100644
index 0000000..20b9f1f
--- /dev/null
+++ b/kitti_camera_ros/CMakeLists.txt
@@ -0,0 +1,23 @@
+cmake_minimum_required(VERSION 2.8.3)
+project(kitti_camera_ros)
+
+find_package(catkin REQUIRED COMPONENTS
+ roscpp
+ sensor_msgs
+ geometry_msgs
+ visualization_msgs
+ pcl_conversions
+ pcl_ros
+ )
+
+find_package(PCL REQUIRED)
+
+include_directories(include ${catkin_INCLUDE_DIRS} ${PCL_INCLUDE_DIRS})
+
+catkin_package()
+
+add_executable(kitti_camera_ros src/kitti_camera_ros.cpp)
+target_link_libraries(kitti_camera_ros ${catkin_LIBRARIES} ${PCL_LIBRARIES})
+if(catkin_EXPORTED_TARGETS)
+ add_dependencies(kitti_camera_ros ${catkin_EXPORTED_TARGETS})
+endif()
diff --git a/kitti_camera_ros/LICENSE b/kitti_camera_ros/LICENSE
new file mode 100644
index 0000000..50e06a6
--- /dev/null
+++ b/kitti_camera_ros/LICENSE
@@ -0,0 +1,29 @@
+BSD 3-Clause License
+
+Copyright (c) 2021, Rui Yang
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/kitti_camera_ros/README.md b/kitti_camera_ros/README.md
new file mode 100644
index 0000000..03c2c96
--- /dev/null
+++ b/kitti_camera_ros/README.md
@@ -0,0 +1,9 @@
+# kitti_camera_ros
+
+Load KITTI camera data, play in ROS.
+
+## Usage
+
+```console
+$ roslaunch kitti_camera_ros kitti_camera_ros.launch
+```
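+
+The node reads plain-text detection files from `camera_dir` (one file per frame) and publishes them on `/image_detections` as `vision_msgs/Detection2DArray`. Judging from the read order in [kitti_camera_ros.cpp](src/kitti_camera_ros.cpp), each file begins with a timestamp and then holds one detection per record: bbox center x/y, bbox size x/y, class id, score. The values below are made-up placeholders that only illustrate the layout:
+
+```text
+0.104
+612.5 180.0 95.0 60.0 2 0.87
+330.2 176.4 40.8 52.6 1 0.64
+```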
diff --git a/kitti_camera_ros/launch/kitti_camera_ros.launch b/kitti_camera_ros/launch/kitti_camera_ros.launch
new file mode 100644
index 0000000..682a8b3
--- /dev/null
+++ b/kitti_camera_ros/launch/kitti_camera_ros.launch
@@ -0,0 +1,5 @@
+
+
+
+
+
diff --git a/kitti_camera_ros/package.xml b/kitti_camera_ros/package.xml
new file mode 100644
index 0000000..b2c576d
--- /dev/null
+++ b/kitti_camera_ros/package.xml
@@ -0,0 +1,27 @@
+<?xml version="1.0"?>
+<package>
+  <name>kitti_camera_ros</name>
+  <version>0.0.1</version>
+  <description>Load KITTI camera data, play in ROS</description>
+  <maintainer>Rui Yang</maintainer>
+  <license>BSD</license>
+
+  <url>https://github.com/epan-utbm/efficient_online_learning</url>
+  <author>Rui Yang</author>
+
+  <buildtool_depend>catkin</buildtool_depend>
+
+  <build_depend>roscpp</build_depend>
+  <build_depend>sensor_msgs</build_depend>
+  <build_depend>geometry_msgs</build_depend>
+  <build_depend>visualization_msgs</build_depend>
+  <build_depend>pcl_conversions</build_depend>
+  <build_depend>pcl_ros</build_depend>
+
+  <run_depend>roscpp</run_depend>
+  <run_depend>sensor_msgs</run_depend>
+  <run_depend>geometry_msgs</run_depend>
+  <run_depend>visualization_msgs</run_depend>
+  <run_depend>pcl_conversions</run_depend>
+  <run_depend>pcl_ros</run_depend>
+</package>
diff --git a/kitti_camera_ros/src/kitti_camera_ros.cpp b/kitti_camera_ros/src/kitti_camera_ros.cpp
new file mode 100644
index 0000000..27153f7
--- /dev/null
+++ b/kitti_camera_ros/src/kitti_camera_ros.cpp
@@ -0,0 +1,94 @@
+// ROS
+#include <ros/ros.h>
+#include <vision_msgs/Detection2D.h>
+#include <vision_msgs/Detection2DArray.h>
+#include <vision_msgs/ObjectHypothesisWithPose.h>
+// C++
+#include <dirent.h>
+#include <fstream>
+#include <string>
+
+int main(int argc, char **argv) {
+ double frequency;
+ std::string camera_dir;
+ double timestamp;
+
+ ros::init(argc, argv, "kitti_camera_ros");
+ ros::NodeHandle private_nh("~");
+
+  ros::Publisher camera_pub = private_nh.advertise<vision_msgs::Detection2DArray>("/image_detections", 100, true);
+
+ private_nh.param("frequency", frequency, 10);
+ private_nh.param("camera_dir", camera_dir, "camera_dir_path");
+
+ ros::Rate loop_rate(frequency);
+
+ //vision_msgs::Detection2DArray detection_results;
+
+ struct dirent **filelist;
+ int n_file = scandir(camera_dir.c_str(), &filelist, NULL, alphasort);
+ if(n_file == -1) {
+ ROS_ERROR_STREAM("[kitti_camera_ros] Could not open directory: " << camera_dir);
+ return EXIT_FAILURE;
+ } else {
+ ROS_INFO_STREAM("[kitti_camera_ros] Load camera files in " << camera_dir);
+ ROS_INFO_STREAM("[kitti_camera_ros] frequency (loop rate): " << frequency);
+ }
+
+ int i_file = 2; // 0 = . 1 = ..
+ while(ros::ok() && i_file < n_file) {
+ vision_msgs::Detection2DArray detection_results;
+
+ /*** Camera ***/
+ std::string s = camera_dir + filelist[i_file]->d_name;
+ std::fstream camera_txt(s.c_str(), std::ios::in | std::ios::binary);
+ //std::cerr << "s: " << s.c_str() << std::endl;
+ if(!camera_txt.good()) {
+ ROS_ERROR_STREAM("[kitti_camera_ros] Could not read file: " << s);
+ return EXIT_FAILURE;
+ } else {
+ camera_txt >> timestamp;
+ ros::Time timestamp_ros(timestamp == 0 ? ros::TIME_MIN.toSec() : timestamp);
+ detection_results.header.stamp = timestamp_ros;
+
+ //camera_txt.seekg(0, std::ios::beg);
+
+      vision_msgs::Detection2D detection;
+      vision_msgs::ObjectHypothesisWithPose result;
+      // Stop as soon as an extraction fails, so a stale record is not pushed at EOF
+      while(camera_txt >> detection.bbox.center.x >> detection.bbox.center.y >>
+            detection.bbox.size_x >> detection.bbox.size_y >> result.id >>
+            result.score) {
+        detection.results.clear();
+        detection.results.push_back(result);
+        detection_results.detections.push_back(detection);
+      }
+ camera_txt.close();
+
+ camera_pub.publish(detection_results);
+ // ROS_INFO_STREAM("[kitti_camera_ros] detection_results.size " << detection_results.detections.size());
+ // ROS_INFO_STREAM("--------------------------------------------");
+ // for(int n = 0; n < detection_results.detections.size(); n++) {
+ // ROS_INFO_STREAM("[kitti_camera_ros] detections.label " << detection_results.detections[n].results[0].id);
+ // ROS_INFO_STREAM("[kitti_camera_ros] detections.score " << detection_results.detections[n].results[0].score);
+ // }
+ }
+
+ ros::spinOnce();
+ loop_rate.sleep();
+ i_file++;
+ }
+
+  // scandir allocates every entry, including "." and ".."; free them all
+  for(int i = 0; i < n_file; i++) {
+    free(filelist[i]);
+  }
+  free(filelist);
+
+ return EXIT_SUCCESS;
+}
diff --git a/kitti_velodyne_ros/.gitignore b/kitti_velodyne_ros/.gitignore
new file mode 100644
index 0000000..259148f
--- /dev/null
+++ b/kitti_velodyne_ros/.gitignore
@@ -0,0 +1,32 @@
+# Prerequisites
+*.d
+
+# Compiled Object files
+*.slo
+*.lo
+*.o
+*.obj
+
+# Precompiled Headers
+*.gch
+*.pch
+
+# Compiled Dynamic libraries
+*.so
+*.dylib
+*.dll
+
+# Fortran module files
+*.mod
+*.smod
+
+# Compiled Static libraries
+*.lai
+*.la
+*.a
+*.lib
+
+# Executables
+*.exe
+*.out
+*.app
diff --git a/kitti_velodyne_ros/.travis.yml b/kitti_velodyne_ros/.travis.yml
new file mode 100644
index 0000000..b89aff2
--- /dev/null
+++ b/kitti_velodyne_ros/.travis.yml
@@ -0,0 +1,21 @@
+dist: bionic
+sudo: required
+language: generic
+cache: apt
+
+install:
+ - sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
+ - sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
+ - sudo apt update
+ - sudo apt install ros-melodic-desktop-full
+ - source /opt/ros/melodic/setup.bash
+ - mkdir -p ~/catkin_ws/src
+ - cd ~/catkin_ws/
+ - catkin_make
+ - source devel/setup.bash
+
+script:
+ - cd ~/catkin_ws/src
+ - git clone -b melodic https://github.com/epan-utbm/kitti_velodyne_ros.git
+ - cd ~/catkin_ws
+ - catkin_make
diff --git a/kitti_velodyne_ros/CMakeLists.txt b/kitti_velodyne_ros/CMakeLists.txt
new file mode 100644
index 0000000..5140750
--- /dev/null
+++ b/kitti_velodyne_ros/CMakeLists.txt
@@ -0,0 +1,23 @@
+cmake_minimum_required(VERSION 2.8.3)
+project(kitti_velodyne_ros)
+
+find_package(catkin REQUIRED COMPONENTS
+ roscpp
+ sensor_msgs
+ geometry_msgs
+ visualization_msgs
+ pcl_conversions
+ pcl_ros
+ )
+
+find_package(PCL REQUIRED)
+
+include_directories(include ${catkin_INCLUDE_DIRS} ${PCL_INCLUDE_DIRS})
+
+catkin_package()
+
+add_executable(kitti_velodyne_ros src/kitti_velodyne_ros.cpp)
+target_link_libraries(kitti_velodyne_ros ${catkin_LIBRARIES} ${PCL_LIBRARIES})
+if(catkin_EXPORTED_TARGETS)
+ add_dependencies(kitti_velodyne_ros ${catkin_EXPORTED_TARGETS})
+endif()
diff --git a/kitti_velodyne_ros/LICENSE b/kitti_velodyne_ros/LICENSE
new file mode 100644
index 0000000..b036349
--- /dev/null
+++ b/kitti_velodyne_ros/LICENSE
@@ -0,0 +1,29 @@
+BSD 3-Clause License
+
+Copyright (c) 2020, EPAN Research Group @ UTBM
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/kitti_velodyne_ros/README.md b/kitti_velodyne_ros/README.md
new file mode 100644
index 0000000..346f001
--- /dev/null
+++ b/kitti_velodyne_ros/README.md
@@ -0,0 +1,23 @@
+# kitti_velodyne_ros
+
+[![Build Status](https://travis-ci.org/epan-utbm/kitti_velodyne_ros.svg?branch=melodic)](https://travis-ci.org/epan-utbm/kitti_velodyne_ros) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/24e89caa1d40456f966e039145f64edf)](https://app.codacy.com/gh/epan-utbm/kitti_velodyne_ros?utm_source=github.com&utm_medium=referral&utm_content=epan-utbm/kitti_velodyne_ros&utm_campaign=Badge_Grade_Dashboard) [![License](https://img.shields.io/badge/License-BSD%203--Clause-gree.svg)](https://opensource.org/licenses/BSD-3-Clause)
+
+Load KITTI velodyne data, play in ROS.
+
+## Usage
+
+```console
+$ roslaunch kitti_velodyne_ros kitti_velodyne_ros.launch
+```
+
+If you want to save the point cloud as a csv file, simply activate it in [kitti_velodyne_ros.launch](launch/kitti_velodyne_ros.launch):
+
+```xml
+<param name="save_to_csv" type="bool" value="true"/>
+```
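+
+With this enabled, each `.bin` cloud is written next to the source file with a `.csv` extension, one point per row, using the `seq, x, y, z, i` header emitted in [kitti_velodyne_ros.cpp](src/kitti_velodyne_ros.cpp). A made-up sample of the output:
+
+```text
+seq, x, y, z, i
+0, 12.41, -3.87, -1.62, 0.29
+1, 12.44, -3.85, -1.61, 0.31
+```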
+
+In case you want to play with [LOAM](https://github.com/laboshinl/loam_velodyne):
+
+```console
+$ roslaunch kitti_velodyne_ros kitti_velodyne_ros_loam.launch
+```
diff --git a/kitti_velodyne_ros/launch/kitti_velodyne_ros.launch b/kitti_velodyne_ros/launch/kitti_velodyne_ros.launch
new file mode 100644
index 0000000..7401531
--- /dev/null
+++ b/kitti_velodyne_ros/launch/kitti_velodyne_ros.launch
@@ -0,0 +1,8 @@
+
+
+
+
+
+
+
+
diff --git a/kitti_velodyne_ros/launch/kitti_velodyne_ros.rviz b/kitti_velodyne_ros/launch/kitti_velodyne_ros.rviz
new file mode 100644
index 0000000..d92fa3d
--- /dev/null
+++ b/kitti_velodyne_ros/launch/kitti_velodyne_ros.rviz
@@ -0,0 +1,188 @@
+Panels:
+ - Class: rviz/Displays
+ Help Height: 78
+ Name: Displays
+ Property Tree Widget:
+ Expanded:
+ - /Global Options1
+ - /Status1
+ Splitter Ratio: 0.5
+ Tree Height: 731
+ - Class: rviz/Selection
+ Name: Selection
+ - Class: rviz/Tool Properties
+ Expanded:
+ - /2D Pose Estimate1
+ - /2D Nav Goal1
+ - /Publish Point1
+ Name: Tool Properties
+ Splitter Ratio: 0.5886790156364441
+ - Class: rviz/Views
+ Expanded:
+ - /Current View1
+ Name: Views
+ Splitter Ratio: 0.5
+ - Class: rviz/Time
+ Experimental: false
+ Name: Time
+ SyncMode: 0
+ SyncSource: PointCloud2
+Preferences:
+ PromptSaveOnExit: true
+Toolbars:
+ toolButtonStyle: 2
+Visualization Manager:
+ Class: ""
+ Displays:
+ - Alpha: 1
+ Autocompute Intensity Bounds: true
+ Autocompute Value Bounds:
+ Max Value: 10
+ Min Value: -10
+ Value: true
+ Axis: Z
+ Channel Name: intensity
+ Class: rviz/PointCloud2
+ Color: 255; 255; 255
+ Color Transformer: Intensity
+ Decay Time: 0
+ Enabled: true
+ Invert Rainbow: false
+ Max Color: 255; 255; 255
+ Max Intensity: 0.9900000095367432
+ Min Color: 0; 0; 0
+ Min Intensity: 0
+ Name: PointCloud2
+ Position Transformer: XYZ
+ Queue Size: 10
+ Selectable: true
+ Size (Pixels): 1
+ Size (m): 0.019999999552965164
+ Style: Flat Squares
+ Topic: /kitti_velodyne_ros/velodyne_points
+ Unreliable: false
+ Use Fixed Frame: true
+ Use rainbow: true
+ Value: true
+ - Alpha: 1
+ Arrow Length: 0.30000001192092896
+ Axes Length: 0.30000001192092896
+ Axes Radius: 0.009999999776482582
+ Class: rviz/PoseArray
+ Color: 255; 25; 0
+ Enabled: false
+ Head Length: 0.07000000029802322
+ Head Radius: 0.029999999329447746
+ Name: GroundTruthPose
+ Shaft Length: 0.23000000417232513
+ Shaft Radius: 0.009999999776482582
+ Shape: Arrow (Flat)
+ Topic: /kitti_velodyne_ros/poses
+ Unreliable: false
+ Value: false
+ - Alpha: 0.5
+ Cell Size: 1
+ Class: rviz/Grid
+ Color: 160; 160; 164
+ Enabled: false
+ Line Style:
+ Line Width: 0.029999999329447746
+ Value: Lines
+ Name: Grid
+ Normal Cell Count: 0
+ Offset:
+ X: 0
+ Y: 0
+ Z: 0
+ Plane: XY
+ Plane Cell Count: 10
+ Reference Frame:
+ Value: false
+ - Class: rviz/TF
+ Enabled: false
+ Frame Timeout: 15
+ Frames:
+ All Enabled: true
+ Marker Scale: 7
+ Name: TF
+ Show Arrows: true
+ Show Axes: true
+ Show Names: true
+ Tree:
+ {}
+ Update Interval: 0
+ Value: false
+ - Class: rviz/MarkerArray
+ Enabled: false
+ Marker Topic: /kitti_velodyne_ros/markers
+ Name: MarkerArray
+ Namespaces:
+ {}
+ Queue Size: 100
+ Value: false
+ Enabled: true
+ Global Options:
+ Background Color: 20; 20; 20
+ Default Light: true
+ Fixed Frame: velodyne
+ Frame Rate: 30
+ Name: root
+ Tools:
+ - Class: rviz/Interact
+ Hide Inactive Objects: true
+ - Class: rviz/MoveCamera
+ - Class: rviz/Select
+ - Class: rviz/FocusCamera
+ - Class: rviz/Measure
+ - Class: rviz/SetInitialPose
+ Theta std deviation: 0.2617993950843811
+ Topic: /initialpose
+ X std deviation: 0.5
+ Y std deviation: 0.5
+ - Class: rviz/SetGoal
+ Topic: /move_base_simple/goal
+ - Class: rviz/PublishPoint
+ Single click: true
+ Topic: /clicked_point
+ Value: true
+ Views:
+ Current:
+ Class: rviz/Orbit
+ Distance: 59.19333267211914
+ Enable Stereo Rendering:
+ Stereo Eye Separation: 0.05999999865889549
+ Stereo Focal Distance: 1
+ Swap Stereo Eyes: false
+ Value: false
+ Focal Point:
+ X: 0.8557648658752441
+ Y: -1.995847225189209
+ Z: 0.9883638620376587
+ Focal Shape Fixed Size: true
+ Focal Shape Size: 0.05000000074505806
+ Invert Z Axis: false
+ Name: Current View
+ Near Clip Distance: 0.009999999776482582
+ Pitch: 0.6902026534080505
+ Target Frame:
+ Value: Orbit (rviz)
+ Yaw: 0.9703954458236694
+ Saved: ~
+Window Geometry:
+ Displays:
+ collapsed: false
+ Height: 1025
+ Hide Left Dock: false
+ Hide Right Dock: true
+ QMainWindow State: 000000ff00000000fd00000004000000000000016a00000366fc0200000008fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d00000366000000c900fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261000000010000010f00000396fc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073000000002800000396000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e100000197000000030000073d0000003bfc0100000002fb0000000800540069006d006501000000000000073d000002eb00fffffffb0000000800540069006d00650100000000000004500000000000000000000005cd0000036600000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
+ Selection:
+ collapsed: false
+ Time:
+ collapsed: false
+ Tool Properties:
+ collapsed: false
+ Views:
+ collapsed: true
+ Width: 1853
+ X: 67
+ Y: 27
diff --git a/kitti_velodyne_ros/launch/kitti_velodyne_ros_loam.launch b/kitti_velodyne_ros/launch/kitti_velodyne_ros_loam.launch
new file mode 100644
index 0000000..6d4c37c
--- /dev/null
+++ b/kitti_velodyne_ros/launch/kitti_velodyne_ros_loam.launch
@@ -0,0 +1,21 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/kitti_velodyne_ros/launch/kitti_velodyne_ros_loam.rviz b/kitti_velodyne_ros/launch/kitti_velodyne_ros_loam.rviz
new file mode 100644
index 0000000..dbc90e1
--- /dev/null
+++ b/kitti_velodyne_ros/launch/kitti_velodyne_ros_loam.rviz
@@ -0,0 +1,193 @@
+Panels:
+ - Class: rviz/Displays
+ Help Height: 78
+ Name: Displays
+ Property Tree Widget:
+ Expanded:
+ - /Global Options1
+ - /Status1
+ Splitter Ratio: 0.5
+ Tree Height: 561
+ - Class: rviz/Selection
+ Name: Selection
+ - Class: rviz/Tool Properties
+ Expanded:
+ - /2D Pose Estimate1
+ - /2D Nav Goal1
+ - /Publish Point1
+ Name: Tool Properties
+ Splitter Ratio: 0.588679016
+ - Class: rviz/Views
+ Expanded:
+ - /Current View1
+ Name: Views
+ Splitter Ratio: 0.5
+ - Class: rviz/Time
+ Experimental: false
+ Name: Time
+ SyncMode: 0
+ SyncSource: PointCloud2
+Visualization Manager:
+ Class: ""
+ Displays:
+ - Alpha: 1
+ Autocompute Intensity Bounds: true
+ Autocompute Value Bounds:
+ Max Value: 10
+ Min Value: -10
+ Value: true
+ Axis: Z
+ Channel Name: intensity
+ Class: rviz/PointCloud2
+ Color: 255; 255; 255
+ Color Transformer: Intensity
+ Decay Time: 0
+ Enabled: true
+ Invert Rainbow: false
+ Max Color: 255; 255; 255
+ Max Intensity: 0.99000001
+ Min Color: 0; 0; 0
+ Min Intensity: 0
+ Name: PointCloud2
+ Position Transformer: XYZ
+ Queue Size: 10
+ Selectable: true
+ Size (Pixels): 1
+ Size (m): 0.0199999996
+ Style: Flat Squares
+ Topic: /kitti_velodyne_ros/velodyne_points
+ Unreliable: false
+ Use Fixed Frame: true
+ Use rainbow: true
+ Value: true
+ - Alpha: 1
+ Arrow Length: 0.300000012
+ Axes Length: 0.300000012
+ Axes Radius: 0.00999999978
+ Class: rviz/PoseArray
+ Color: 255; 25; 0
+ Enabled: false
+ Head Length: 0.0700000003
+ Head Radius: 0.0299999993
+ Name: GroundTruthPose
+ Shaft Length: 0.230000004
+ Shaft Radius: 0.00999999978
+ Shape: Arrow (Flat)
+ Topic: /kitti_velodyne_ros/poses
+ Unreliable: false
+ Value: false
+ - Alpha: 0.5
+ Cell Size: 1
+ Class: rviz/Grid
+ Color: 160; 160; 164
+ Enabled: true
+ Line Style:
+ Line Width: 0.0299999993
+ Value: Lines
+ Name: Grid
+ Normal Cell Count: 0
+ Offset:
+ X: 0
+ Y: 0
+ Z: 0
+ Plane: XY
+ Plane Cell Count: 10
+ Reference Frame:
+ Value: true
+ - Class: rviz/TF
+ Enabled: true
+ Frame Timeout: 15
+ Frames:
+ All Enabled: true
+ base_link:
+ Value: true
+ camera_init:
+ Value: true
+ map:
+ Value: true
+ velodyne:
+ Value: true
+ Marker Scale: 7
+ Name: TF
+ Show Arrows: true
+ Show Axes: true
+ Show Names: true
+ Tree:
+ map:
+ camera_init:
+ base_link:
+ velodyne:
+ {}
+ Update Interval: 0
+ Value: true
+ - Class: rviz/MarkerArray
+ Enabled: true
+ Marker Topic: /kitti_velodyne_ros/markers
+ Name: MarkerArray
+ Namespaces:
+ "": true
+ Queue Size: 100
+ Value: true
+ Enabled: true
+ Global Options:
+ Background Color: 20; 20; 20
+ Default Light: true
+ Fixed Frame: map
+ Frame Rate: 30
+ Name: root
+ Tools:
+ - Class: rviz/Interact
+ Hide Inactive Objects: true
+ - Class: rviz/MoveCamera
+ - Class: rviz/Select
+ - Class: rviz/FocusCamera
+ - Class: rviz/Measure
+ - Class: rviz/SetInitialPose
+ Topic: /initialpose
+ - Class: rviz/SetGoal
+ Topic: /move_base_simple/goal
+ - Class: rviz/PublishPoint
+ Single click: true
+ Topic: /clicked_point
+ Value: true
+ Views:
+ Current:
+ Class: rviz/Orbit
+ Distance: 81.0036621
+ Enable Stereo Rendering:
+ Stereo Eye Separation: 0.0599999987
+ Stereo Focal Distance: 1
+ Swap Stereo Eyes: false
+ Value: false
+ Focal Point:
+ X: 0.855764866
+ Y: -1.99584723
+ Z: 0.988363862
+ Focal Shape Fixed Size: true
+ Focal Shape Size: 0.0500000007
+ Invert Z Axis: false
+ Name: Current View
+ Near Clip Distance: 0.00999999978
+ Pitch: 0.690202653
+ Target Frame:
+ Value: Orbit (rviz)
+ Yaw: 0.970395446
+ Saved: ~
+Window Geometry:
+ Displays:
+ collapsed: false
+ Height: 839
+ Hide Left Dock: false
+ Hide Right Dock: true
+ QMainWindow State: 000000ff00000000fd00000004000000000000016a000002c0fc0200000008fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000006100fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c0061007900730100000028000002c0000000d700fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261000000010000010f00000396fc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073000000002800000396000000ad00fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000005490000003bfc0100000002fb0000000800540069006d00650100000000000005490000030000fffffffb0000000800540069006d00650100000000000004500000000000000000000003d9000002c000000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
+ Selection:
+ collapsed: false
+ Time:
+ collapsed: false
+ Tool Properties:
+ collapsed: false
+ Views:
+ collapsed: true
+ Width: 1353
+ X: 65
+ Y: 24
diff --git a/kitti_velodyne_ros/package.xml b/kitti_velodyne_ros/package.xml
new file mode 100644
index 0000000..19ffb33
--- /dev/null
+++ b/kitti_velodyne_ros/package.xml
@@ -0,0 +1,24 @@
+<?xml version="1.0"?>
+<package>
+  <name>kitti_velodyne_ros</name>
+  <version>0.0.1</version>
+  <description>Load KITTI velodyne data, play in ROS</description>
+  <maintainer>Zhi Yan</maintainer>
+  <license>BSD</license>
+
+  <buildtool_depend>catkin</buildtool_depend>
+
+  <build_depend>roscpp</build_depend>
+  <build_depend>sensor_msgs</build_depend>
+  <build_depend>geometry_msgs</build_depend>
+  <build_depend>visualization_msgs</build_depend>
+  <build_depend>pcl_conversions</build_depend>
+  <build_depend>pcl_ros</build_depend>
+
+  <run_depend>roscpp</run_depend>
+  <run_depend>sensor_msgs</run_depend>
+  <run_depend>geometry_msgs</run_depend>
+  <run_depend>visualization_msgs</run_depend>
+  <run_depend>pcl_conversions</run_depend>
+  <run_depend>pcl_ros</run_depend>
+</package>
diff --git a/kitti_velodyne_ros/src/kitti_velodyne_ros.cpp b/kitti_velodyne_ros/src/kitti_velodyne_ros.cpp
new file mode 100644
index 0000000..eb0b599
--- /dev/null
+++ b/kitti_velodyne_ros/src/kitti_velodyne_ros.cpp
@@ -0,0 +1,161 @@
+// ROS
+#include <ros/ros.h>
+#include <sensor_msgs/PointCloud2.h>
+#include <geometry_msgs/PoseArray.h>
+#include <visualization_msgs/Marker.h>
+#include <visualization_msgs/MarkerArray.h>
+// PCL
+#include <pcl/point_types.h>
+#include <pcl_conversions/pcl_conversions.h>
+// C++
+#include <dirent.h>
+#include <fstream>
+
+int main(int argc, char **argv) {
+ double frequency;
+ std::string velodyne_dir;
+ std::string poses_file;
+ double pose[12];
+ geometry_msgs::PoseArray poses;
+ visualization_msgs::MarkerArray markers;
+ bool save_to_csv;
+ std::string times_file;
+ double timestamp;
+ std::string line;
+
+ ros::init(argc, argv, "kitti_velodyne_ros");
+ ros::NodeHandle private_nh("~");
+
+  ros::Publisher velodyne_pub = private_nh.advertise<sensor_msgs::PointCloud2>("velodyne_points", 100, true);
+  ros::Publisher poses_pub = private_nh.advertise<geometry_msgs::PoseArray>("poses", 100, true);
+  ros::Publisher markers_pub = private_nh.advertise<visualization_msgs::MarkerArray>("markers", 100, true);
+
+ private_nh.param("frequency", frequency, 10);
+ private_nh.param("velodyne_dir", velodyne_dir, "velodyne_dir_path");
+ private_nh.param("poses_file", poses_file, "poses_file_path");
+ private_nh.param("times_file", times_file, "times_file_path");
+ private_nh.param("save_to_csv", save_to_csv, false);
+
+ ros::Rate loop_rate(frequency);
+
+ struct dirent **filelist;
+ int n_file = scandir(velodyne_dir.c_str(), &filelist, NULL, alphasort);
+ if(n_file == -1) {
+ ROS_ERROR_STREAM("[kitti_velodyne_ros] Could not open directory: " << velodyne_dir);
+ return EXIT_FAILURE;
+ } else {
+ ROS_INFO_STREAM("[kitti_velodyne_ros] Load velodyne files in " << velodyne_dir);
+ ROS_INFO_STREAM("[kitti_velodyne_ros] frequency (loop rate): " << frequency);
+ ROS_INFO_STREAM("[kitti_velodyne_ros] save_to_csv: " << std::boolalpha << save_to_csv);
+ }
+
+ // std::fstream poses_txt(poses_file.c_str(), std::ios::in);
+ // if(!poses_txt.good()) {
+ // ROS_WARN_STREAM("[kitti_velodyne_ros] Could not read poses file: " << poses_file);
+ // }
+
+ // std::fstream times_txt(times_file.c_str(), std::ios::in);
+ // if(!times_txt.good()) {
+ // ROS_WARN_STREAM("[kitti_velodyne_ros] Could not read times file: " << times_file);
+ // }
+
+ int i_file = 2; // 0 = . 1 = ..
+ while(ros::ok() && i_file < n_file) {
+ /*** Timestamp ***/
+ //times_txt >> timestamp;
+
+ //getline(times_txt, line);
+ //timestamp = atof(line.erase(0,17).c_str());
+
+ //ros::Time timestamp_ros(timestamp == 0 ? ros::TIME_MIN.toSec() : timestamp);
+
+ /*** Velodyne ***/
+ std::string s = velodyne_dir + filelist[i_file]->d_name;
+ std::fstream velodyne_bin(s.c_str(), std::ios::in | std::ios::binary);
+ //std::cerr << s.c_str() << std::endl;
+ if(!velodyne_bin.good()) {
+ ROS_ERROR_STREAM("[kitti_velodyne_ros] Could not read file: " << s);
+ return EXIT_FAILURE;
+ } else {
+      pcl::PointCloud<pcl::PointXYZI>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZI>);
+
+      velodyne_bin.seekg(0, std::ios::beg);
+      pcl::PointXYZI point;
+      // Each KITTI record is four floats (x, y, z, intensity); stop when a read
+      // fails so the last point is not pushed twice at EOF
+      while(velodyne_bin.read((char *) &point.x, 3 * sizeof(float)) &&
+            velodyne_bin.read((char *) &point.intensity, sizeof(float))) {
+        cloud->push_back(point);
+      }
+ velodyne_bin.close();
+
+ if(save_to_csv) {
+ std::ofstream csv_file(s.substr(0, s.find_last_of('.')) + ".csv");
+ if(csv_file.is_open()) {
+ csv_file << "seq, x, y, z, i\n";
+ for(size_t i = 0; i < cloud->size(); i++) {
+ csv_file << i << ", " << cloud->points[i].x << ", " << cloud->points[i].y << ", " << cloud->points[i].z << ", " << cloud->points[i].intensity << "\n";
+ }
+ csv_file.close();
+ }
+ }
+
+ sensor_msgs::PointCloud2 pc2;
+ pcl::toROSMsg(*cloud, pc2);
+ pc2.header.frame_id = "velodyne";
+ // if(!poses_txt.good()) {
+ // pc2.header.stamp = ros::Time::now();
+ // } else {
+ // pc2.header.stamp = timestamp_ros;
+ // }
+ //pc2.header.stamp = timestamp_ros;
+ pc2.header.stamp = ros::Time::now();
+ velodyne_pub.publish(pc2);
+ }
+
+ /*** Ground Truth Poses
+ poses_txt >> pose[0] >> pose[1] >> pose[2] >> pose[3] >> pose[4] >> pose[5] >> pose[6] >> pose[7] >> pose[8] >> pose[9] >> pose[10] >> pose[11];
+ geometry_msgs::Pose p;
+ p.position.x = pose[11];
+ p.position.y = pose[3];
+ p.position.z = 0;
+ p.orientation.w = 1;
+ poses.poses.push_back(p);
+ if(!poses_txt.good()) {
+ poses.header.stamp = ros::Time::now();
+ } else {
+ poses.header.stamp = timestamp_ros;
+ }
+ poses.header.frame_id = "base_link";
+ poses_pub.publish(poses);
+ ***/
+
+ /*** Rviz car's trajectory ***/
+ visualization_msgs::Marker m;
+ m.header.stamp = ros::Time::now();
+ m.header.frame_id = "odom";
+ m.type = visualization_msgs::Marker::LINE_STRIP;
+ geometry_msgs::Point pp;
+ for(int i = 0; i < poses.poses.size(); i++) {
+ pp.x = poses.poses[i].position.x - poses.poses[poses.poses.size()-1].position.x;
+ pp.y = poses.poses[i].position.y - poses.poses[poses.poses.size()-1].position.y;
+ m.points.push_back(pp);
+ }
+ m.scale.x = 0.1;
+ m.color.a = 1.0;
+ m.color.r = 1.0;
+ m.color.g = 0.5;
+ m.color.b = 0.0;
+ m.lifetime = ros::Duration(1.0);
+ markers.markers.push_back(m);
+ markers_pub.publish(markers);
+
+ ros::spinOnce();
+ loop_rate.sleep();
+ i_file++;
+ }
+
+  // scandir allocates every entry, including "." and ".."; free them all
+  for(int i = 0; i < n_file; i++) {
+    free(filelist[i]);
+  }
+  free(filelist);
+
+ return EXIT_SUCCESS;
+}
diff --git a/online_forests_ros/CMakeLists.txt b/online_forests_ros/CMakeLists.txt
new file mode 100644
index 0000000..424bad4
--- /dev/null
+++ b/online_forests_ros/CMakeLists.txt
@@ -0,0 +1,42 @@
+cmake_minimum_required(VERSION 2.8.3)
+project(online_forests_ros)
+
+find_package(catkin REQUIRED COMPONENTS
+ roscpp
+ std_msgs
+ autoware_tracker
+ )
+
+include_directories(
+ include
+ ${catkin_INCLUDE_DIRS}
+ )
+
+catkin_package(
+ INCLUDE_DIRS include
+ )
+
+add_executable(online_forests_ros
+ src/online_forests/classifier.cpp
+ src/online_forests/hyperparameters.cpp
+ src/online_forests/onlinenode.cpp
+ src/online_forests/onlinetree.cpp
+ src/online_forests/utilities.cpp
+ src/online_forests/data.cpp
+ # src/online_forests/Online-Forest.cpp
+ src/online_forests/onlinerf.cpp
+ src/online_forests/randomtest.cpp
+ src/online_forests_ros.cpp
+ )
+
+target_link_libraries(online_forests_ros
+ ${catkin_LIBRARIES}
+ config++
+ blas
+ )
+
+if(catkin_EXPORTED_TARGETS)
+ add_dependencies(online_forests_ros
+ ${catkin_EXPORTED_TARGETS}
+ )
+endif()
diff --git a/online_forests_ros/LICENSE b/online_forests_ros/LICENSE
new file mode 100644
index 0000000..5a08ff2
--- /dev/null
+++ b/online_forests_ros/LICENSE
@@ -0,0 +1,23 @@
+The MIT License (MIT)
+
+Copyright (c) 2014 Amir Saffari
+Copyright (c) 2020 Zhi Yan
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
diff --git a/online_forests_ros/README.md b/online_forests_ros/README.md
new file mode 100644
index 0000000..58c236b
--- /dev/null
+++ b/online_forests_ros/README.md
@@ -0,0 +1,100 @@
+# online_forests_ros
+
+This package is forked from [https://github.com/amirsaffari/online-random-forests](https://github.com/amirsaffari/online-random-forests); the original README is below the dividing line.
+
+[2020-09-18]: ROSified the original Online Random Forests and added support for data streams (not limited to files).
+
+# Install prerequisites
+
+```
+sudo apt install libgmm++-dev libconfig++-dev libatlas-base-dev libblas-dev liblapack-dev
+```
+
+# TODO
+
+* Replace libconfig++-dev (-lconfig++) with yaml-cpp
+* Compare with https://www.csie.ntu.edu.tw/~cjlin/liblinear/
+
+---
+
+Online Random Forests
+=====================
+
+This is the original implementation of the Online Random Forest algorithm [1]. There is a more recent implementation of this algorithm at https://github.com/amirsaffari/online-multiclass-lpboost which was used in [2].
+
+Read the INSTALL file for build instructions.
+
+Usage:
+======
+Input arguments:
+
+ -h | --help : will display this message.
+ -c : path to the config file.
+
+ --ort : use Online Random Tree (ORT) algorithm.
+ --orf : use Online Random Forest (ORF) algorithm.
+
+
+ --train : train the classifier.
+ --test : test the classifier.
+ --t2 : train and test the classifier at the same time.
+
+
+ Examples:
+ ./Online-Forest -c conf/orf.conf --orf --t2
+
+Config file:
+============
+All the settings for the classifier are passed via the config file, which you can find
+in the "conf" folder. The meaning of each setting is as follows:
+Data:
+ * trainData = path to the training file
+ * testData = path to the test file
+
+Tree:
+ * maxDepth = maximum depth for a tree
+ * numRandomTests = number of random tests for each node
+ * numProjectionFeatures = number of features for hyperplane tests
+ * counterThreshold = number of samples to be seen for an online node before splitting
+
+Forest:
+ * numTrees = number of trees in the forest
+ * numEpochs = number of online training epochs
+ * useSoftVoting = boolean flag for using hard or soft voting
+
+Output:
+ * savePath = path to save the results (not implemented yet)
+ * verbose = defines the verbosity level (0: silence)
+
+Data format:
+============
+The data format is very similar to the LIBSVM file format. It only needs one extra
+header line, which contains the following information:
+\#Samples \#Features \#Classes \#FeatureMinIndex
+
+where
+
+\#Samples: number of samples
+
+\#Features: number of features
+
+\#Classes: number of classes
+
+\#FeatureMinIndex: the index of the first feature used
+
+You can find a few datasets in the data folder; check their headers for examples.
+Currently, there is only one limitation on the data files: the class labels must be
+consecutive integers starting from 0. For example, for a 3-class problem the labels
+should be {0, 1, 2}.
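+
+As a concrete (made-up) example, a 3-class dataset with 4 samples, 5 features, and
+feature indices starting at 1 would look like:
+
+```text
+4 5 3 1
+0 1:0.2 3:1.5 5:-0.7
+1 2:0.9 4:0.1
+2 1:-1.3 5:2.4
+0 3:0.8
+```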
+
+===========
+REFERENCES:
+===========
+[1] Amir Saffari, Christian Leistner, Jakob Santner, Martin Godec, and Horst Bischof,
+"On-line Random Forests,"
+3rd IEEE ICCV Workshop on On-line Computer Vision, 2009.
+
+[2] Amir Saffari, Martin Godec, Thomas Pock, Christian Leistner, Horst Bischof,
+“Online Multi-Class LPBoost“,
+Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
diff --git a/online_forests_ros/config/orf.conf b/online_forests_ros/config/orf.conf
new file mode 100644
index 0000000..fe30d57
--- /dev/null
+++ b/online_forests_ros/config/orf.conf
@@ -0,0 +1,25 @@
+Data:
+{
+ trainData = "data/dna-train.libsvm";
+ trainLabels = "data/train.label";
+ testData = "data/dna-test.libsvm";
+ testLabels = "data/test.label";
+};
+Tree:
+{
+ maxDepth = 20;
+ numRandomTests = 10;
+ numProjectionFeatures = 2;
+ counterThreshold = 50;
+};
+Forest:
+{
+ numTrees = 100;
+ numEpochs = 10;
+ useSoftVoting = 1;
+};
+Output:
+{
+ savePath = "/tmp/online-forest-";
+ verbose = 1; // 0 = None
+};
diff --git a/online_forests_ros/doc/2009-OnlineRandomForests.pdf b/online_forests_ros/doc/2009-OnlineRandomForests.pdf
new file mode 100644
index 0000000..1e8f22c
Binary files /dev/null and b/online_forests_ros/doc/2009-OnlineRandomForests.pdf differ
diff --git a/online_forests_ros/include/online_forests/classifier.h b/online_forests_ros/include/online_forests/classifier.h
new file mode 100644
index 0000000..ef52528
--- /dev/null
+++ b/online_forests_ros/include/online_forests/classifier.h
@@ -0,0 +1,39 @@
+#ifndef CLASSIFIER_H_
+#define CLASSIFIER_H_
+
+#include <vector>
+
+#include "online_forests/data.h"
+
+using namespace std;
+
+class Classifier {
+public:
+    virtual void update(Sample &sample) = 0;
+    virtual void train(DataSet &dataset) = 0;
+    virtual Result eval(Sample &sample) = 0;
+    virtual vector<Result> test(DataSet &dataset) = 0;
+    virtual vector<Result> trainAndTest(DataSet &dataset_tr, DataSet &dataset_ts) = 0;
+
+    double compError(const vector<Result> &results, const DataSet &dataset) {
+        double error = 0.0;
+        for (int i = 0; i < dataset.m_numSamples; i++) {
+            if (results[i].prediction != dataset.m_samples[i].y) {
+                error++;
+            }
+        }
+        //std::cerr << "numError: " << error << std::endl;
+        return error / dataset.m_numSamples;
+    }
+
+    void dispErrors(const vector<double> &errors) {
+        for (int i = 0; i < (int) errors.size(); i++) {
+            cout << i + 1 << ": " << errors[i] << " --- ";
+        }
+        cout << endl;
+    }
+};
+
+#endif /* CLASSIFIER_H_ */
diff --git a/online_forests_ros/include/online_forests/data.h b/online_forests_ros/include/online_forests/data.h
new file mode 100644
index 0000000..36f541b
--- /dev/null
+++ b/online_forests_ros/include/online_forests/data.h
@@ -0,0 +1,52 @@
+#ifndef DATA_H_
+#define DATA_H_
+
+#include <iostream>
+#include <string>
+#include <vector>
+#include <gmm/gmm.h>
+
+using namespace std;
+using namespace gmm;
+
+// TYPEDEFS
+typedef int Label;
+typedef double Weight;
+typedef rsvector<double> SparseVector;
+
+// DATA CLASSES
+class Sample {
+public:
+ SparseVector x;
+ Label y;
+ Weight w;
+
+ void disp() {
+ cout << "Sample: y = " << y << ", w = " << w << ", x = ";
+ cout << x << endl;
+ }
+};
+
+class DataSet {
+public:
+    vector<Sample> m_samples;
+ int m_numSamples;
+ int m_numFeatures;
+ int m_numClasses;
+
+    vector<double> m_minFeatRange;
+    vector<double> m_maxFeatRange;
+
+ void findFeatRange();
+
+ void loadLIBSVM(string filename);
+ void loadLIBSVM2(string data);
+};
+
+class Result {
+public:
+    vector<double> confidence;
+ int prediction;
+};
+
+#endif /* DATA_H_ */
diff --git a/online_forests_ros/include/online_forests/hyperparameters.h b/online_forests_ros/include/online_forests/hyperparameters.h
new file mode 100644
index 0000000..f55c097
--- /dev/null
+++ b/online_forests_ros/include/online_forests/hyperparameters.h
@@ -0,0 +1,34 @@
+#ifndef HYPERPARAMETERS_H_
+#define HYPERPARAMETERS_H_
+
+#include <string>
+using namespace std;
+
+class Hyperparameters
+{
+ public:
+ Hyperparameters();
+ Hyperparameters(const string& confFile);
+
+ // Online node
+ int numRandomTests;
+ int numProjectionFeatures;
+ int counterThreshold;
+ int maxDepth;
+
+ // Online tree
+
+ // Online forest
+ int numTrees;
+ int useSoftVoting;
+ int numEpochs;
+
+ // Data
+ string trainData;
+ string testData;
+
+ // Output
+ int verbose;
+};
+
+#endif /* HYPERPARAMETERS_H_ */
diff --git a/online_forests_ros/include/online_forests/onlinenode.h b/online_forests_ros/include/online_forests/onlinenode.h
new file mode 100644
index 0000000..f398fa4
--- /dev/null
+++ b/online_forests_ros/include/online_forests/onlinenode.h
@@ -0,0 +1,232 @@
+#ifndef ONLINENODE_H_
+#define ONLINENODE_H_
+
+#include <cstdio>
+#include <queue>
+
+#include "online_forests/data.h"
+#include "online_forests/hyperparameters.h"
+#include "online_forests/randomtest.h"
+#include "online_forests/utilities.h"
+
+using namespace std;
+
+class OnlineNode {
+public:
+ OnlineNode() {
+ m_isLeaf = true;
+ }
+
+    OnlineNode(const Hyperparameters &hp, const int &numClasses, const int &numFeatures, const vector<double> &minFeatRange,
+            const vector<double> &maxFeatRange, const int &depth) :
+ m_numClasses(&numClasses), m_numFeatures(&numFeatures), m_depth(depth), m_isLeaf(true), m_counter(0.0), m_label(-1),
+ m_parentCounter(0.0), m_hp(&hp), m_minFeatRange(&minFeatRange), m_maxFeatRange(&maxFeatRange) {
+ for (int i = 0; i < numClasses; i++) {
+ m_labelStats.push_back(0.0);
+ }
+
+ // Creating random tests
+ for (int i = 0; i < hp.numRandomTests; i++) {
+ HyperplaneFeature test(numClasses, numFeatures, hp.numProjectionFeatures, minFeatRange, maxFeatRange);
+ m_onlineTests.push_back(test);
+ }
+ }
+
+    OnlineNode(const Hyperparameters &hp, const int &numClasses, const int &numFeatures, const vector<double> &minFeatRange,
+            const vector<double> &maxFeatRange, const int &depth, const vector<double> &parentStats) :
+ m_numClasses(&numClasses), m_numFeatures(&numFeatures), m_depth(depth), m_isLeaf(true), m_counter(0.0), m_label(-1),
+ m_parentCounter(0.0), m_hp(&hp), m_minFeatRange(&minFeatRange), m_maxFeatRange(&maxFeatRange) {
+ m_labelStats = parentStats;
+ m_label = argmax(m_labelStats);
+ m_parentCounter = sum(m_labelStats);
+
+ // Creating random tests
+ for (int i = 0; i < hp.numRandomTests; i++) {
+ HyperplaneFeature test(numClasses, numFeatures, hp.numProjectionFeatures, minFeatRange, maxFeatRange);
+ m_onlineTests.push_back(test);
+ }
+ }
+
+ ~OnlineNode() {
+ if (!m_isLeaf) {
+ delete m_leftChildNode;
+ delete m_rightChildNode;
+ }
+ }
+
+ void update(Sample &sample);
+
+ Result eval(Sample &sample) {
+ if (m_isLeaf) {
+ Result result;
+ if (m_counter + m_parentCounter) {
+ result.confidence = m_labelStats;
+ scale(result.confidence, 1.0 / (m_counter + m_parentCounter));
+ result.prediction = m_label;
+ } else {
+ for (int i = 0; i < *m_numClasses; i++) {
+ result.confidence.push_back(1.0 / *m_numClasses);
+ }
+ result.prediction = 0;
+ }
+
+ return result;
+ } else {
+ if (m_bestTest.eval(sample)) {
+ return m_rightChildNode->eval(sample);
+ } else {
+ return m_leftChildNode->eval(sample);
+ }
+ }
+ }
+
+ void writeNode(OnlineNode *node, double counter, FILE *fp)
+ {
+        queue<OnlineNode*> que;
+ que.push(node);
+ while(!que.empty()){
+ OnlineNode *cur = que.front();
+ if (cur->m_isLeaf){
+ fprintf(fp,"L");
+ fprintf(fp," %d",cur->m_depth);
+ fprintf(fp," %d",cur->m_isLeaf);
+ fprintf(fp," %lf",cur->m_counter);
+ fprintf(fp," %d",cur->m_label);
+ fprintf(fp," %lf",cur->m_parentCounter);
+ for (int i = 0; i < cur->m_labelStats.size(); i++) {
+ fprintf(fp," %lf",cur->m_labelStats[i]);
+ }
+ for (int i = 0; i < cur->m_onlineTests.size(); i++) {
+ cur->m_onlineTests[i].writeTest(fp);
+ }
+ fprintf(fp,"\n");
+ que.pop();
+ } else {
+ fprintf(fp,"S");
+ fprintf(fp," %d",cur->m_depth);
+ fprintf(fp," %d",cur->m_isLeaf);
+ fprintf(fp," %lf",cur->m_counter);
+ fprintf(fp," %d",cur->m_label);
+ fprintf(fp," %lf",cur->m_parentCounter);
+ for (int i = 0; i < cur->m_labelStats.size(); i++) {
+ fprintf(fp," %lf",cur->m_labelStats[i]);
+ }
+ cur->m_bestTest.writeTest(fp);
+ fprintf(fp,"\n");
+ que.pop();
+ que.push( cur->m_leftChildNode );
+ que.push( cur->m_rightChildNode );
+
+ /* if( cur->m_leftChildNode != nullptr){
+ que.push( cur->m_leftChildNode );
+ }
+ if( cur->m_rightChildNode != nullptr){
+ que.push( cur->m_rightChildNode );
+ } */
+ }
+ }
+ }
+
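+    // Rebuild the subtree rooted at 'node' from the breadth-first records
+    // produced by writeNode(): 'L' fills a leaf, 'S' fills a split node and
+    // allocates its two children, and 'T' terminates the previous tree.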
+ void loadNode(OnlineNode *node, FILE *fp)
+ {
+ char node_type;
+        queue<OnlineNode*> tree;
+ OnlineNode *cur;
+ tree.push(node);
+
+ while(!tree.empty()){
+ cur = tree.front();
+ tree.pop();
+
+ fscanf(fp,"%c ",&node_type);
+
+        if (node_type == 'T') {
+            // 'T' marks the end of the previous tree in the file: put the
+            // node back on the queue and read its real record next.
+            tree.push(cur);
+        } else if (node_type == 'L') {
+            int isLeaf;
+            fscanf(fp, "%d ", &cur->m_depth);
+            fscanf(fp, "%d ", &isLeaf); // m_isLeaf is a bool, read via an int
+            cur->m_isLeaf = isLeaf;
+            fscanf(fp, "%lf ", &cur->m_counter);
+            fscanf(fp, "%d ", &cur->m_label);
+            fscanf(fp, "%lf ", &cur->m_parentCounter);
+ for (int i = 0; i < cur->m_labelStats.size(); i++) {
+ fscanf(fp, "%lf ", &cur->m_labelStats[i]);
+ }
+ for (int i = 0; i < cur->m_onlineTests.size(); i++) {
+ cur->m_onlineTests[i].loadTest(fp);
+ }
+ fscanf(fp,"\n");
+
+ } else if (node_type == 'S'){
+            int isLeaf;
+            fscanf(fp, "%d ", &cur->m_depth);
+            fscanf(fp, "%d ", &isLeaf); // m_isLeaf is a bool, read via an int
+            cur->m_isLeaf = isLeaf;
+            fscanf(fp, "%lf ", &cur->m_counter);
+ fscanf(fp, "%d ", &cur->m_label);
+ fscanf(fp, "%lf ", &cur->m_parentCounter);
+ for (int i = 0; i < cur->m_labelStats.size(); i++) {
+ fscanf(fp, "%lf ", &cur->m_labelStats[i]);
+ }
+ HyperplaneFeature temp_bestTest(*m_numClasses, *m_numFeatures, m_hp->numProjectionFeatures, *m_minFeatRange, *m_maxFeatRange);
+ cur->m_bestTest = temp_bestTest;
+ cur->m_bestTest.loadTest(fp);
+ fscanf(fp,"\n");
+
+            pair<vector<double>, vector<double> > parentStats = cur->m_bestTest.getStats();
+ cur->m_leftChildNode = new OnlineNode(*cur->m_hp, *cur->m_numClasses, *cur->m_numFeatures, *cur->m_minFeatRange, *cur->m_maxFeatRange, cur->m_depth + 1,
+ parentStats.second);
+ tree.push(cur->m_leftChildNode);
+ cur->m_rightChildNode = new OnlineNode(*cur->m_hp, *cur->m_numClasses, *cur->m_numFeatures, *cur->m_minFeatRange, *cur->m_maxFeatRange, cur->m_depth + 1,
+ parentStats.first);
+ tree.push(cur->m_rightChildNode);
+ }
+ }
+ return;
+ }
+
+private:
+ const int *m_numClasses;
+ const int *m_numFeatures;
+ int m_depth;
+ bool m_isLeaf;
+ double m_counter;
+ int m_label;
+ double m_parentCounter;
+ const Hyperparameters *m_hp;
+    const vector<double> *m_minFeatRange;
+    const vector<double> *m_maxFeatRange;
+
+    vector<double> m_labelStats;
+
+ OnlineNode* m_leftChildNode;
+ OnlineNode* m_rightChildNode;
+
+    vector<HyperplaneFeature> m_onlineTests;
+ HyperplaneFeature m_bestTest;
+
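+    // A node may split only if it is impure, below the maximum depth, and
+    // has accumulated at least counterThreshold samples.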
+ bool shouldISplit() {
+ bool isPure = false;
+ for (int i = 0; i < *m_numClasses; i++) {
+ if (m_labelStats[i] == m_counter + m_parentCounter) {
+ isPure = true;
+ break;
+ }
+ }
+
+ if (isPure) {
+ return false;
+ }
+
+ if (m_depth >= m_hp->maxDepth) { // do not split if max depth is reached
+ return false;
+ }
+
+ if (m_counter < m_hp->counterThreshold) { // do not split if not enough samples seen
+ return false;
+ }
+
+ return true;
+ }
+
+};
+
+#endif /* ONLINENODE_H_ */
diff --git a/online_forests_ros/include/online_forests/onlinerf.h b/online_forests_ros/include/online_forests/onlinerf.h
new file mode 100644
index 0000000..ebf1ab6
--- /dev/null
+++ b/online_forests_ros/include/online_forests/onlinerf.h
@@ -0,0 +1,167 @@
+#ifndef ONLINERF_H_
+#define ONLINERF_H_
+
+#include "online_forests/classifier.h"
+#include "online_forests/data.h"
+#include "online_forests/hyperparameters.h"
+#include "online_forests/onlinetree.h"
+#include "online_forests/utilities.h"
+
+class OnlineRF: public Classifier {
+public:
+    OnlineRF(const Hyperparameters &hp, const int &numClasses, const int &numFeatures, const vector<double> &minFeatRange,
+            const vector<double> &maxFeatRange) :
+ m_numClasses(&numClasses), m_counter(0.0), m_oobe(0.0), m_hp(&hp) {
+ OnlineTree *tree;
+ for (int i = 0; i < hp.numTrees; i++) {
+ tree = new OnlineTree(hp, numClasses, numFeatures, minFeatRange, maxFeatRange);
+ m_trees.push_back(tree);
+ }
+ }
+
+ ~OnlineRF() {
+ for (int i = 0; i < m_hp->numTrees; i++) {
+ delete m_trees[i];
+ }
+ }
+
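+    // Online bagging: each tree receives the sample Poisson(1) times,
+    // approximating bootstrap resampling in the online setting. Trees that
+    // draw zero copies treat the sample as out-of-bag, and their combined
+    // vote drives the out-of-bag error estimate m_oobe.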
+ virtual void update(Sample &sample) {
+ m_counter += sample.w;
+
+ Result result, treeResult;
+ for (int i = 0; i < *m_numClasses; i++) {
+ result.confidence.push_back(0.0);
+ }
+
+ int numTries;
+ for (int i = 0; i < m_hp->numTrees; i++) {
+ numTries = poisson(1.0);
+ if (numTries) {
+ for (int n = 0; n < numTries; n++) {
+ m_trees[i]->update(sample);
+ }
+ } else {
+ treeResult = m_trees[i]->eval(sample);
+ if (m_hp->useSoftVoting) {
+ add(treeResult.confidence, result.confidence);
+ } else {
+ result.confidence[treeResult.prediction]++;
+ }
+ }
+ }
+
+ if (argmax(result.confidence) != sample.y) {
+ m_oobe += sample.w;
+ }
+ }
+
+ virtual void train(DataSet &dataset) {
+        vector<int> randIndex;
+ int sampRatio = dataset.m_numSamples / 10;
+ for (int n = 0; n < m_hp->numEpochs; n++) {
+ randPerm(dataset.m_numSamples, randIndex);
+ for (int i = 0; i < dataset.m_numSamples; i++) {
+ update(dataset.m_samples[randIndex[i]]);
+ if (m_hp->verbose >= 1 && (i % sampRatio) == 0) {
+ //cout << "--- Online Random Forest training --- Epoch: " << n + 1 << " --- ";
+ //cout << (10 * i) / sampRatio << "%" << endl;
+ }
+ }
+ cout << "--- Online Random Forest training --- Epoch: " << n + 1 << " --- " << endl;
+ }
+ }
+
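+    // Combine the trees: either average their confidence vectors (soft
+    // voting) or count their hard predictions, then normalize and take
+    // the argmax as the forest prediction.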
+ virtual Result eval(Sample &sample) {
+ Result result, treeResult;
+ for (int i = 0; i < *m_numClasses; i++) {
+ result.confidence.push_back(0.0);
+ }
+
+ for (int i = 0; i < m_hp->numTrees; i++) {
+ treeResult = m_trees[i]->eval(sample);
+ if (m_hp->useSoftVoting) {
+ add(treeResult.confidence, result.confidence);
+ } else {
+ result.confidence[treeResult.prediction]++;
+ }
+ }
+
+ scale(result.confidence, 1.0 / m_hp->numTrees);
+ result.prediction = argmax(result.confidence);
+ return result;
+ }
+
+    virtual vector<Result> test(DataSet &dataset) {
+        vector<Result> results;
+ for (int i = 0; i < dataset.m_numSamples; i++) {
+ results.push_back(eval(dataset.m_samples[i]));
+ }
+
+ double error = compError(results, dataset);
+ if (m_hp->verbose >= 1) {
+ cout << "--- Online Random Forest test error: " << error << endl;
+ }
+
+ return results;
+ }
+
+    virtual vector<Result> trainAndTest(DataSet &dataset_tr, DataSet &dataset_ts) {
+        vector<Result> results;
+        vector<int> randIndex;
+        int sampRatio = dataset_tr.m_numSamples / 10;
+        vector<double> testError;
+ for (int n = 0; n < m_hp->numEpochs; n++) {
+ randPerm(dataset_tr.m_numSamples, randIndex);
+ for (int i = 0; i < dataset_tr.m_numSamples; i++) {
+ update(dataset_tr.m_samples[randIndex[i]]);
+ if (m_hp->verbose >= 1 && (i % sampRatio) == 0) {
+ cout << "--- Online Random Forest training --- Epoch: " << n + 1 << " --- ";
+ cout << (10 * i) / sampRatio << "%" << endl;
+ }
+ }
+
+ results = test(dataset_ts);
+ testError.push_back(compError(results, dataset_ts));
+ }
+
+ if (m_hp->verbose >= 1) {
+ cout << endl << "--- Online Random Forest test error over epochs: ";
+ dispErrors(testError);
+ }
+
+ return results;
+ }
+
+    virtual void writeForest(string fileName) {
+        cout << "Writing forest" << endl;
+        FILE *fp = fopen(fileName.c_str(), "w");
+        for (int i = 0; i < m_hp->numTrees; i++) {
+            m_trees[i]->writeTree(fp);
+        }
+        fclose(fp);
+        cout << "Writing forest done" << endl;
+    }
+
+    virtual void loadForest(string fileName) {
+        cout << "Loading forest" << endl;
+        FILE *fp = fopen(fileName.c_str(), "r");
+        for (int i = 0; i < m_hp->numTrees; i++) {
+            m_trees[i]->loadTree(fp, i);
+        }
+        fclose(fp);
+        cout << "Loading forest done" << endl;
+    }
+
+private:
+    const int *m_numClasses;
+    double m_counter;
+    double m_oobe;
+    const Hyperparameters *m_hp;
+
+    vector<OnlineTree*> m_trees;
+};
+
+#endif /* ONLINERF_H_ */
diff --git a/online_forests_ros/include/online_forests/onlinetree.h b/online_forests_ros/include/online_forests/onlinetree.h
new file mode 100644
index 0000000..5d6ac32
--- /dev/null
+++ b/online_forests_ros/include/online_forests/onlinetree.h
@@ -0,0 +1,107 @@
+#ifndef ONLINETREE_H_
+#define ONLINETREE_H_
+
+#include <cstdio>
+#include <iostream>
+
+#include "online_forests/classifier.h"
+#include "online_forests/data.h"
+#include "online_forests/hyperparameters.h"
+#include "online_forests/onlinenode.h"
+
+using namespace std;
+
+class OnlineTree: public Classifier {
+public:
+    OnlineTree(const Hyperparameters &hp, const int &numClasses, const int &numFeatures, const vector<double> &minFeatRange,
+            const vector<double> &maxFeatRange) :
+ m_counter(0.0), m_hp(&hp) {
+ m_rootNode = new OnlineNode(hp, numClasses, numFeatures, minFeatRange, maxFeatRange, 0);
+ }
+
+ ~OnlineTree() {
+ delete m_rootNode;
+ }
+
+ virtual void update(Sample &sample) {
+ m_rootNode->update(sample);
+ }
+
+ virtual void train(DataSet &dataset) {
+        vector<int> randIndex;
+ int sampRatio = dataset.m_numSamples / 10;
+ for (int n = 0; n < m_hp->numEpochs; n++) {
+ randPerm(dataset.m_numSamples, randIndex);
+ for (int i = 0; i < dataset.m_numSamples; i++) {
+ update(dataset.m_samples[randIndex[i]]);
+ if (m_hp->verbose >= 3 && (i % sampRatio) == 0) {
+ cout << "--- Online Random Tree training --- Epoch: " << n + 1 << " --- ";
+ cout << (10 * i) / sampRatio << "%" << endl;
+ }
+ }
+ }
+ }
+
+ virtual Result eval(Sample &sample) {
+ return m_rootNode->eval(sample);
+ }
+
+    virtual vector<Result> test(DataSet &dataset) {
+        vector<Result> results;
+ for (int i = 0; i < dataset.m_numSamples; i++) {
+ results.push_back(eval(dataset.m_samples[i]));
+ }
+
+ double error = compError(results, dataset);
+ if (m_hp->verbose >= 3) {
+ cout << "--- Online Random Tree test error: " << error << endl;
+ }
+
+ return results;
+ }
+
+    virtual vector<Result> trainAndTest(DataSet &dataset_tr, DataSet &dataset_ts) {
+        vector<Result> results;
+        vector<int> randIndex;
+        int sampRatio = dataset_tr.m_numSamples / 10;
+        vector<double> testError;
+ for (int n = 0; n < m_hp->numEpochs; n++) {
+ randPerm(dataset_tr.m_numSamples, randIndex);
+ for (int i = 0; i < dataset_tr.m_numSamples; i++) {
+ update(dataset_tr.m_samples[randIndex[i]]);
+ if (m_hp->verbose >= 3 && (i % sampRatio) == 0) {
+ cout << "--- Online Random Tree training --- Epoch: " << n + 1 << " --- ";
+ cout << (10 * i) / sampRatio << "%" << endl;
+ }
+ }
+
+ results = test(dataset_ts);
+ testError.push_back(compError(results, dataset_ts));
+ }
+
+ if (m_hp->verbose >= 3) {
+ cout << endl << "--- Online Random Tree test error over epochs: ";
+ dispErrors(testError);
+ }
+
+ return results;
+ }
+
+ virtual void writeTree(FILE * fp) {
+ m_rootNode->writeNode(m_rootNode, m_counter, fp);
+ fprintf(fp,"T\n");
+ }
+
+ virtual void loadTree(FILE * fp, int tree_index) {
+ m_rootNode->loadNode(m_rootNode, fp);
+ return;
+ }
+
+private:
+ double m_counter;
+ const Hyperparameters *m_hp;
+
+ OnlineNode* m_rootNode;
+};
+
+#endif /* ONLINETREE_H_ */
diff --git a/online_forests_ros/include/online_forests/randomtest.h b/online_forests_ros/include/online_forests/randomtest.h
new file mode 100644
index 0000000..233c12a
--- /dev/null
+++ b/online_forests_ros/include/online_forests/randomtest.h
@@ -0,0 +1,174 @@
+#ifndef RANDOMTEST_H_
+#define RANDOMTEST_H_
+
+#include <cmath>
+
+#include "online_forests/data.h"
+#include "online_forests/utilities.h"
+
+class RandomTest {
+public:
+ RandomTest() {
+
+ }
+
+ RandomTest(const int &numClasses) :
+ m_numClasses(&numClasses), m_trueCount(0.0), m_falseCount(0.0) {
+ for (int i = 0; i < numClasses; i++) {
+ m_trueStats.push_back(0.0);
+ m_falseStats.push_back(0.0);
+ }
+ m_threshold = randomFromRange(-1, 1);
+ }
+
+ RandomTest(const int &numClasses, const double featMin, const double featMax) :
+ m_numClasses(&numClasses), m_trueCount(0.0), m_falseCount(0.0) {
+ for (int i = 0; i < numClasses; i++) {
+ m_trueStats.push_back(0.0);
+ m_falseStats.push_back(0.0);
+ }
+ m_threshold = randomFromRange(featMin, featMax);
+ }
+
+ void updateStats(const Sample &sample, const bool decision) {
+ if (decision) {
+ m_trueCount += sample.w;
+ m_trueStats[sample.y] += sample.w;
+ } else {
+ m_falseCount += sample.w;
+ m_falseStats[sample.y] += sample.w;
+ }
+ }
+
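+    // Quality of this candidate split: information gain (prior minus
+    // posterior entropy), normalized by the product of prior and split
+    // entropy; the small epsilon guards against division by zero.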
+ double score() {
+ double totalCount = m_trueCount + m_falseCount;
+
+ // Split Entropy
+ double p, splitEntropy = 0.0;
+ if (m_trueCount) {
+ p = m_trueCount / totalCount;
+ splitEntropy -= p * log2(p);
+ }
+        if (m_falseCount) {
+            p = m_falseCount / totalCount;
+            splitEntropy -= p * log2(p);
+        }
+
+ // Prior Entropy
+ double priorEntropy = 0.0;
+ for (int i = 0; i < *m_numClasses; i++) {
+ p = (m_trueStats[i] + m_falseStats[i]) / totalCount;
+ if (p) {
+ priorEntropy -= p * log2(p);
+ }
+ }
+
+ // Posterior Entropy
+ double trueScore = 0.0, falseScore = 0.0;
+ if (m_trueCount) {
+ for (int i = 0; i < *m_numClasses; i++) {
+ p = m_trueStats[i] / m_trueCount;
+ if (p) {
+ trueScore -= p * log2(p);
+ }
+ }
+ }
+ if (m_falseCount) {
+ for (int i = 0; i < *m_numClasses; i++) {
+ p = m_falseStats[i] / m_falseCount;
+ if (p) {
+ falseScore -= p * log2(p);
+ }
+ }
+ }
+ double posteriorEntropy = (m_trueCount * trueScore + m_falseCount * falseScore) / totalCount;
+
+ // Information Gain
+ return (2.0 * (priorEntropy - posteriorEntropy)) / (priorEntropy * splitEntropy + 1e-10);
+ }
+
+    pair<vector<double>, vector<double> > getStats() {
+        return pair<vector<double>, vector<double> >(m_trueStats, m_falseStats);
+ }
+
+protected:
+ const int *m_numClasses;
+ double m_threshold;
+ double m_trueCount;
+ double m_falseCount;
+    vector<double> m_trueStats;
+    vector<double> m_falseStats;
+};
+
+class HyperplaneFeature: public RandomTest {
+public:
+ HyperplaneFeature() {
+
+ }
+
+    HyperplaneFeature(const int &numClasses, const int &numFeatures, const int &numProjFeatures, const vector<double>