ID changes in good top down model and hungarian tracks #1815
Replies: 9 comments 4 replies
-
Hi @transkriptase, Apologies for the delay! We're a bit behind on support responses at the moment. I think your issue is related to the window size: while a track window of 20 allows SLEAP to consider linking candidate poses going back up to 20 frames, the further back in time you go, the more likely it is to make false associations. I'd recommend trying out a lower track window and considering our newer track-local queues. These enable bridging over long gaps if you know the exact number of animals in your video. You can try it out with these settings:
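For reference, a command along these lines enables the track-local queues (a sketch with flags taken from later messages in this thread; the video path, model folders, output name, and the --tracking.max_tracks value are placeholders to adapt to your data):

sleap-track "your_video.mp4" \
    -m "models/baseline.centroid_2" \
    -m "models/baseline_medium_rf.topdown_3" \
    --tracking.tracker simplemaxtracks \
    --tracking.max_tracking 1 \
    --tracking.max_tracks 58 \
    --tracking.track_window 5 \
    -o "tracked/your_video_tracked.slp"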
Just change the max tracks value to the number of animals in your video. Let us know if that helps! Cheers, Talmo
-
Hello Talmo, thank you so much for your answer. I ran another round of tracking with the command you suggested.
This reduced the number of ID changes, but there are still many; ID changes happen even when the distance between the individuals is not short. I added one example, maybe it helps! For example, in photo 1 at frame 1577, track 47 (blue) and track 7 (yellow) switch identities in the next frame (1578). In frame 1578 you can clearly see that the track IDs have been swapped even though there is distance between them. I hope you have more suggestions, because the ID changes are an extremely big problem for me right now. Best,
-
Hi Talmo, I uploaded the file as a .zip to the link you sent. Please write to me if you need anything else. Best wishes,
-
Hi Divya,
Thank you so much for your answer, and I hope we find the best solution. I will be waiting for your further feedback.
Best wishes,
Özge
On Thu, 5 Sept 2024 at 21:05, DivyaSesh wrote:
… Hi @transkriptase <https://github.com/transkriptase>,
Thank you for sharing the data!
Apologies for the delay. We looked into the data and tried a few alternate methods. We found that using [object_keypoint_similarity](https://github.com/talmolab/sleap/blob/35463a1ddf7649ab813d36f46680dad5eaf3edfc/sleap/nn/tracker/components.py#L77) as the scoring function with hungarian matching could potentially solve the ID switching issue (the current instance_similarity method doesn't apply any normalization, resulting in low, near-zero similarity scores).
We're currently testing this method and working on making object_keypoint_similarity the default scoring method in PR <#1939>.
Thanks,
Divya
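As an illustrative sketch of the normalization point (the function below uses made-up names and is not SLEAP's actual implementation): an OKS-style score divides each squared keypoint distance by a scale term before mapping it into (0, 1], so the similarity stays in a usable range instead of collapsing toward zero as raw pixel distances grow:

import numpy as np

def oks_like_similarity(ref_pts, cand_pts, scale, kappa=0.1):
    # Squared distance per keypoint between the reference and candidate pose.
    d2 = np.sum((ref_pts - cand_pts) ** 2, axis=-1)
    # Normalize by the object scale and map into (0, 1].
    per_kp = np.exp(-d2 / (2.0 * (scale * kappa) ** 2))
    # Average over keypoints gives a bounded, scale-aware similarity.
    return float(np.mean(per_kp))

# Two nearly overlapping 3-keypoint poses score close to 1.
ref = np.array([[10.0, 10.0], [20.0, 12.0], [30.0, 11.0]])
cand = ref + 1.0
print(oks_like_similarity(ref, cand, scale=25.0))  # ~0.85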
-
Hi @transkriptase, The PR #1939 is all set! Using the updated scoring method from that PR should help with the ID switching. Let us know if you have any questions! Thanks! Divya
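For example, a full command might look like this (a sketch assuming the new scoring is selected through the --tracking.similarity option, as in the commands used later in this thread; paths and the max tracks value are placeholders):

sleap-track "your_video.mp4" -m "models/baseline.centroid_2" -m "models/baseline_medium_rf.topdown_3" --tracking.tracker simplemaxtracks --tracking.similarity normalized_instance --tracking.match hungarian --tracking.max_tracking 1 --tracking.max_tracks 58 --tracking.track_window 5 -o "tracked/your_video_tracked.slp"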
-
Hello Divya, sadly I am getting the same ID switches in the same frames. I am not sure if there are fewer than before, but they still exist, even though I installed and checked the new sleap_dev version of the software. WhatsApp.Video.2024-10-14.um.13.11.20_f509063a.mp4 And in this version, I also get a lot of memory errors. I have no idea why! Hopefully we can find a better solution. Best,
-
Hello Divya, thank you for the answer. I added normalized_instance but I still get a memory error, as if I had used 80 GB of memory on the GPU.
-
Hello Divya, today I tried to run tracking on a local GPU PC (not on the HPC) and got the same memory error. I will also try the same settings with the old version of SLEAP, but I guess this is a sleap_dev version error. Please let me know how I can handle this problem. This is part of the output:
(sleap_dev) C:\Users\okilic\Desktop\cuttung_legs>sleap-track "C-/3/C-_8_3_5000_21.03.240.mp4" -m "models/baseline.centroid_2" -m "models/baseline_medium_rf.topdown_3" --tracking.tracker simplemaxtracks --tracking.similarity normalized_instance --tracking.max_tracking 1 --tracking.max_tracks 58 --tracking.track_window 5 -o "tracked/8_3_simple_58_5_dev.slp"
INFO:sleap.nn.inference:Failed to query GPU memory from nvidia-smi. Defaulting to first GPU.
System:
Video: C-/3/C-_8_3_5000_21.03.240.mp4
........
Best,
-
Hello again Divya, I got the same memory error with the old version of SLEAP as well:
2024-11-06 14:38:33.842811: E tensorflow/stream_executor/cuda/cuda_driver.cc:802] failed to alloc 34359738368 bytes on host: CUDA_ERROR_OUT_OF_MEMORY: out of memory
(sleap) C:\Users\okilic\Desktop\cuttung_legs>
So, I have no idea now what to do :)
-
Hi,
I have been using SLEAP for a really long time and so far I have gotten good tracks, but then I noticed that my tracks flip in some situations, like in pic1.
My track results even look like this:
I run tracking on the HPC with this command:
(sleap) okilic@sv2213:~$ sleap-track "/home/okilic/cutting_legs/C-/2/C-_6_2_5000_21.03.240.mp4" -m "/home/okilic/models/baseline.centroid_2/" -m "/home/okilic/models/baseline_medium_rf.topdown_3/" --tracking.tracker flow --tracking.similarity centroid --tracking.match hungarian --tracking.track_window 20 -o "/home/okilic/cutting_legs/tracked/6_2_F_C_H.slp"
Started inference at: 2024-06-19 15:45:08.337293
Args:
{
│ 'data_path': '/home/okilic/cutting_legs/C-/2/C-_6_2_5000_21.03.240.mp4',
│ 'models': ['/home/okilic/models/baseline.centroid_2/', '/home/okilic/models/baseline_medium_rf.topdown_3/'],
│ 'frames': '',
│ 'only_labeled_frames': False,
│ 'only_suggested_frames': False,
│ 'output': '/home/okilic/cutting_legs/tracked/6_2_F_C_H.slp',
│ 'no_empty_frames': False,
│ 'verbosity': 'rich',
│ 'video.dataset': None,
│ 'video.input_format': 'channels_last',
│ 'video.index': '',
│ 'cpu': False,
│ 'first_gpu': False,
│ 'last_gpu': False,
│ 'gpu': 'auto',
│ 'max_edge_length_ratio': 0.25,
│ 'dist_penalty_weight': 1.0,
│ 'batch_size': 4,
│ 'open_in_gui': False,
│ 'peak_threshold': 0.2,
│ 'max_instances': None,
│ 'tracking.tracker': 'flow',
│ 'tracking.max_tracking': None,
│ 'tracking.max_tracks': None,
│ 'tracking.target_instance_count': None,
│ 'tracking.pre_cull_to_target': None,
│ 'tracking.pre_cull_iou_threshold': None,
│ 'tracking.post_connect_single_breaks': None,
│ 'tracking.clean_instance_count': None,
│ 'tracking.clean_iou_threshold': None,
│ 'tracking.similarity': 'centroid',
│ 'tracking.match': 'hungarian',
│ 'tracking.robust': None,
│ 'tracking.track_window': 20,
│ 'tracking.min_new_track_points': None,
│ 'tracking.min_match_points': None,
│ 'tracking.img_scale': None,
│ 'tracking.of_window_size': None,
│ 'tracking.of_max_levels': None,
│ 'tracking.save_shifted_instances': None,
│ 'tracking.kf_node_indices': None,
│ 'tracking.kf_init_frame_count': None
}
2024-06-19 15:45:08.405489: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
INFO:sleap.nn.inference:Auto-selected GPU 0 with 30655 MiB of free memory.
Versions:
SLEAP: 1.3.3
TensorFlow: 2.7.0
Numpy: 1.19.5
Python: 3.7.12
OS: Linux-5.10.0-30-amd64-x86_64-with-debian-11.9
System:
GPUs: 1/4 available
Device: /physical_device:GPU:0
Available: True
Initalized: False
Memory growth: True
Device: /physical_device:GPU:1
Available: False
Initalized: False
Memory growth: None
Device: /physical_device:GPU:2
Available: False
Initalized: False
Memory growth: None
Device: /physical_device:GPU:3
Available: False
Initalized: False
Memory growth: None
Video: /home/okilic/cutting_legs/C-/2/C-_6_2_5000_21.03.240.mp4
2024-06-19 15:45:08.631733: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-06-19 15:45:08.640805: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2024-06-19 15:45:10.427450: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 28541 MB memory: -> device: 0, name: NVIDIA A100 80GB PCIe, pci bus id: 0000:04:00.0, compute capability: 8.0
Predicting... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0% ETA: -:--:-- ?
The number of flips is usually around 10-15 or more in a 5-minute video. Somehow I cannot use --tracking.similarity instance instead of centroid because it takes so long and the HPC never finishes the tracking. I get this error in the terminal:
at 7fae9bee6000 of size 256 next 4589
2024-06-19 15:35:13.714450: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7fae9bee6100 of size 256 next 4601
2024-06-19 15:35:13.714456: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7fae9bee6200 of size 256 next 4592
2024-06-19 15:35:13.714461: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7fae9bee6300 of size 256 next 4591
2024-06-19 15:35:13.714466: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7fae9bee6400 of size 256 next 4594
2024-06-19 15:35:13.714479: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7fae9bee6500 of size 256 next 4579
2024-06-19 15:35:13.714483: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7fae9bee6600 of size 256 next 4617
2024-06-19 15:35:13.714492: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7fae9bee6700 of size 256 next 4616
2024-06-19 15:35:13.714499: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7fae9bee6800 of size 256 next 4618
Could you please help me to at least minimize the ID flips? I hope there is a clear solution for this.
Thank you