Official PyTorch implementation of "Cross-Modal Object Tracking via Modality-Aware Fusion Network and A Large-Scale Dataset". (TNNLS 2024)
If you use this code in a publication, please cite our paper:
@article{liu2024cross,
  title={Cross-Modal Object Tracking via Modality-Aware Fusion Network and A Large-Scale Dataset},
  author={Liu, Lei and Zhang, Mengya and Li, Cheng and Li, Chenglong and Tang, Jin},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2024}
}
System requirements are the same as for DiMP.
Pretrained model and results. If you only want to run the tracker, you can use the pretrained model from Google Drive/Baidu Yun. Results produced by the pretrained model are also provided on Baidu Yun (code: xxv0); more details can be found here.