
Commit v1.0.0

FriedhelmHamann committed Feb 7, 2025
1 parent 88bf999 commit 1a4875e

Showing 141 changed files with 834 additions and 384,757 deletions.
4 changes: 4 additions & 0 deletions .gitignore
@@ -1,3 +1,7 @@
data/
models/
output/

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
284 changes: 186 additions & 98 deletions README.md

Large diffs are not rendered by default.

22 changes: 0 additions & 22 deletions configs/predict/combined.yaml

This file was deleted.

18 changes: 18 additions & 0 deletions configs/predict/combined_on_test.yaml
@@ -0,0 +1,18 @@
common:
  data_root: data/MouseSIS
  iou_threshold: 0.1
  split: test
  sequence_ids: [1, 7, 10, 16, 22, 26, 28, 32]

gray_detector:
  yolo_path: ./models/yolo_frame.pt

e2vid_detector:
  yolo_path: ./models/yolo_e2vid.pt

tracker:
  max_age: 1
  min_hits: 3
  iou_threshold: 0.3

output_dir: ./output/
18 changes: 18 additions & 0 deletions configs/predict/combined_on_validation.yaml
@@ -0,0 +1,18 @@
common:
  data_root: data/MouseSIS
  iou_threshold: 0.1
  split: val
  sequence_ids: [3, 4, 12, 25]

gray_detector:
  yolo_path: ./models/yolo_frame.pt

e2vid_detector:
  yolo_path: ./models/yolo_e2vid.pt

tracker:
  max_age: 1
  min_hits: 3
  iou_threshold: 0.3

output_dir: ./output/
22 changes: 0 additions & 22 deletions configs/predict/e2vid.yaml

This file was deleted.

22 changes: 0 additions & 22 deletions configs/predict/frame.yaml

This file was deleted.

18 changes: 18 additions & 0 deletions configs/predict/quickstart.yaml
@@ -0,0 +1,18 @@
common:
  data_root: data/MouseSIS
  iou_threshold: 0.1
  split: val
  sequence_ids: [25]

gray_detector:
  yolo_path: ./models/yolo_frame.pt

e2vid_detector:
  yolo_path: ./models/yolo_e2vid.pt

tracker:
  max_age: 1
  min_hits: 3
  iou_threshold: 0.3

output_dir: ./output/
18 changes: 18 additions & 0 deletions configs/predict/sis_challenge_baseline.yaml
@@ -0,0 +1,18 @@
common:
  data_root: data/MouseSIS
  iou_threshold: 0.1
  split: test
  sequence_ids: [10, 16, 22, 26, 28, 32]

gray_detector:
  yolo_path: ./models/yolo_frame.pt

e2vid_detector:
  yolo_path: ./models/yolo_e2vid.pt

tracker:
  max_age: 1
  min_hits: 3
  iou_threshold: 0.3

output_dir: ./output/
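
For orientation, the sketch below parses one of these predict configs with PyYAML. The prediction script's actual config-loading code is not part of this commit, so this is only an assumed reading of the keys shown above.

```python
# Minimal sketch (assumed, not the repository's loader): parse a predict
# config with PyYAML and read back the keys defined above.
import yaml

with open("configs/predict/quickstart.yaml", "r") as f:
    cfg = yaml.safe_load(f)

print(cfg["common"]["split"], cfg["common"]["sequence_ids"])
print(cfg["gray_detector"]["yolo_path"], cfg["tracker"]["iou_threshold"])
```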
7 changes: 0 additions & 7 deletions configs/train/e2vid.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions configs/train/frame.yaml

This file was deleted.

93 changes: 93 additions & 0 deletions docs/DATASET.md
@@ -0,0 +1,93 @@
# Dataset

The [MouseSIS dataset](https://arxiv.org/pdf/2409.03358) consists of 33 sequences, each roughly 20 s long. Each sequence contains pixel-aligned frames and events with accurate, hardware-triggered time alignment. Additionally, we provide mask-accurate tracking labels for all mice in the sequences (a task we term *spatio-temporal instance segmentation (SIS)*).

The data itself can be found [here](https://drive.google.com/drive/folders/1TQns9-WZw-n26FaUE3gqdAhGgrlRUzCp?usp=drive_link). Each sequence is saved as a .hdf5 file in its respective split folder. The annotations are provided as .json files (one per split) in a format similar to the YouTubeVIS format.

You can get a feel for the dataset structure using this visualization script:

```bash
python scripts/visualize_events_frames_and_masks.py --h5_path data/MouseSIS/top/val/seq25.h5 --annotation_path data/MouseSIS/val_annotations.json
```

If you download the whole dataset, its directory structure looks like this:

```txt
data/MouseSIS
├── top/
│   ├── train
│   │   ├── seq_02.hdf5
│   │   ├── seq_05.hdf5
│   │   ├── ...
│   │   └── seq_33.hdf5
│   ├── val
│   │   ├── seq_03.hdf5
│   │   ├── seq_04.hdf5
│   │   ├── ...
│   │   └── seq_25.hdf5
│   └── test
│       ├── seq_01.hdf5
│       ├── seq_07.hdf5
│       ├── ...
│       └── seq_32.hdf5
├── dataset_info.csv
├── val_annotations.json
└── train_annotations.json
```
The .hdf5 files have the following fields:

```txt
images: (num_images, height, width, 3) uint8
img2event: (num_images,) int64
img_ts: (num_images,) float64 # timestamps in microseconds
p: (num_events,) uint8
t: (num_events,) uint32 # timestamps in microseconds
x: (num_events,) float64
y: (num_events,) float64
```
For each image, the field `img2event` gives the index of the last event occurring before the start of that image's exposure.
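
As a quick orientation, here is a minimal sketch (not the official loader) of reading one sequence with `h5py` and using `img2event` to gather the events preceding a given frame. The file path and the exact slicing convention are assumptions.

```python
# Minimal sketch: read one sequence with h5py and slice the events that
# arrived before frame i's exposure started. Filename is an assumption;
# use whatever sequence file you downloaded.
import h5py
import numpy as np

with h5py.File("data/MouseSIS/top/val/seq_25.hdf5", "r") as f:
    images = f["images"][:]        # (num_images, height, width, 3) uint8
    img_ts = f["img_ts"][:]        # image timestamps in microseconds
    img2event = f["img2event"][:]  # last event index before each exposure start
    x, y = f["x"][:], f["y"][:]    # event coordinates
    t, p = f["t"][:], f["p"][:]    # event timestamps (microseconds) and polarities

i = 10
start = int(img2event[i - 1]) if i > 0 else 0   # assumed slicing convention
end = int(img2event[i])
events_before_frame_i = np.stack([x[start:end], y[start:end],
                                  t[start:end], p[start:end]], axis=-1)
print(images.shape, img_ts[i], events_before_frame_i.shape)
```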

The annotation files have this format:

```json
{
"info": {
"description": "string", // Dataset description
"version": "string", // Version identifier
"date_created": "string" // Creation timestamp
},
"videos": [
{
"id": "string", // Video identifier (range: "01" to "33")
"width": integer, // Frame width in pixels (1280)
"height": integer, // Frame height in pixels (720)
"length": integer // Total number of frames
}
],
"annotations": [
{
"id": integer, // Unique instance identifier
"video_id": "string", // Reference to parent video
"category_id": integer, // Object category (1 = mouse)
"segmentations": [
{
"size": [height: integer, width: integer], // Mask dimensions
"counts": "string" // RLE-encoded segmentation mask
}
],
"areas": [float], // Object area in pixels
"bboxes": [ // Bounding box coordinates
[x_min: float, y_min: float, width: float, height: float]
],
"iscrowd": integer // Crowd annotation flag (0 or 1)
}
],
"categories": [
{
"id": integer, // Category identifier
"name": "string", // Category name
"supercategory": "string" // Parent category
}
]
}
```
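
For illustration, here is a minimal sketch of loading an annotation file and decoding a single RLE mask with `pycocotools` (listed in `requirements.txt`). The null-check for frames where an instance is not visible follows the usual YouTubeVIS convention and is an assumption here.

```python
# Minimal sketch: load annotations and decode one RLE segmentation mask.
import json
from pycocotools import mask as mask_utils

with open("data/MouseSIS/val_annotations.json", "r") as f:
    anns = json.load(f)

instance = anns["annotations"][0]
rle = instance["segmentations"][0]
# Assumption: as in YouTubeVIS-style files, entries may be null for frames
# where this instance is not visible.
if rle is not None:
    rle = {"size": rle["size"], "counts": rle["counts"].encode("utf-8")}
    binary_mask = mask_utils.decode(rle)   # (height, width) uint8 array
    print(instance["video_id"], instance["category_id"], int(binary_mask.sum()))
```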
3 changes: 2 additions & 1 deletion requirements.txt
@@ -10,4 +10,5 @@ ultralytics
transformers
matplotlib
event-vision-library
pycocotools
pycocotools
scikit-image
17 changes: 8 additions & 9 deletions src/TrackEval/run_mouse_eval.py → scripts/eval.py
@@ -1,4 +1,4 @@
""" run_mouse_eval.py
""" eval.py
Run example:
run_mouse.py --USE_PARALLEL False --METRICS HOTA --TRACKERS_TO_EVAL STEm_Seg
Command Line Arguments: Defaults, # Comments
@@ -34,12 +34,12 @@
"""

import sys
import os
from pathlib import Path
import argparse
from multiprocessing import freeze_support
import argparse
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval # noqa: E402
sys.path.append(str(Path(__file__).parent.parent))
import src.third_party.TrackEval.trackeval as trackeval

if __name__ == '__main__':
freeze_support()
@@ -50,11 +50,10 @@
args = parser.parse_args()

default_eval_config = trackeval.Evaluator.get_default_eval_config()
# print only combined since TrackMAP is undefined for per sequence breakdowns
default_eval_config['PRINT_ONLY_COMBINED'] = False
default_dataset_config = trackeval.datasets.MouseYouTubeVIS.get_default_dataset_config(TRACKERS_TO_EVAL = args.TRACKERS_TO_EVAL, SPLIT_TO_EVAL = args.SPLIT_TO_EVAL)
default_dataset_config = trackeval.datasets.MouseSIS.get_default_dataset_config()
default_metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity'], 'THRESHOLD': 0.5}
config = {**default_eval_config, **default_dataset_config, **default_metrics_config} # Merge default configs
config = {**default_eval_config, **default_dataset_config, **default_metrics_config}
parser = argparse.ArgumentParser()
for setting in config.keys():
if type(config[setting]) == list or type(config[setting]) == type(None):
@@ -84,12 +83,12 @@

# Run code
evaluator = trackeval.Evaluator(eval_config)
dataset_list = [trackeval.datasets.MouseYouTubeVIS(dataset_config)]
dataset_list = [trackeval.datasets.MouseSIS(dataset_config)]
metrics_list = []
for metric in [trackeval.metrics.TrackMAP, trackeval.metrics.HOTA, trackeval.metrics.CLEAR,
trackeval.metrics.Identity]:
if metric.get_name() in metrics_config['METRICS']:
# specify TrackMAP config for YouTubeVIS
# specify TrackMAP config for MouseSIS
if metric == trackeval.metrics.TrackMAP:
default_track_map_config = metric.get_default_metric_config()
default_track_map_config['USE_TIME_RANGES'] = False
