This repository has been archived by the owner on Jun 29, 2024. It is now read-only.

Trajectory Prediction

Here are some resources about Trajectory Prediction.

Intros:

  • This directory covers the methodology of the Trajectory Prediction task in the prediction stage, which is concerned with forecasting the future movements of surrounding dynamic entities such as vehicles, pedestrians, and cyclists. This task is fundamental for safe and efficient navigation, as it helps the autonomous vehicle anticipate potential changes in the environment and make timely, proactive driving decisions.

  • In more detail, Trajectory Prediction involves generating plausible future paths for each detected object based on its current and historical states, such as its position, velocity, acceleration, and heading direction. It also considers the object's interactions with the environment and with other objects, to account for context and behavior patterns.

  • For instance, a trajectory prediction module might predict that a pedestrian standing on the sidewalk near a crosswalk will cross the road in the near future, or that a vehicle signaling a lane change will move into the adjacent lane. By making these predictions, the autonomous driving system can plan its actions accordingly to avoid potential collisions and ensure a smooth and safe ride.

  • Due to the complexity and uncertainty of real-world scenarios, trajectory prediction remains a challenging problem, requiring robust methods that can handle diverse scenarios and take into account the inherently probabilistic nature of human behavior.
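
As a concrete reference point for the ideas above, the simplest trajectory predictor extrapolates an object's recent motion forward in time. The sketch below is a constant-velocity baseline, not any particular paper's method; the function name, array shapes, and `dt` parameter are illustrative. Learned models in the papers listed here replace this with interaction-aware, multi-modal predictions, but constant-velocity extrapolation remains a common sanity-check baseline.

```python
import numpy as np

def predict_constant_velocity(history, horizon, dt=0.1):
    """Extrapolate a future 2D trajectory from observed positions.

    history: (T, 2) array-like of past (x, y) positions, oldest first, T >= 2.
    horizon: number of future time steps to predict.
    dt:      time step between observations, in seconds.
    Returns an (horizon, 2) array of predicted future positions.
    """
    history = np.asarray(history, dtype=float)
    # Estimate velocity from the two most recent observations.
    velocity = (history[-1] - history[-2]) / dt
    # Roll the last observed position forward at that constant velocity.
    steps = np.arange(1, horizon + 1)[:, None] * dt  # (horizon, 1) time offsets
    return history[-1] + steps * velocity
```

A multi-modal predictor would instead output several such trajectories with associated probabilities (e.g., one hypothesis per plausible maneuver), which is the uncertainty the last bullet refers to.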


ViP3D: End-to-end visual trajectory prediction via 3D agent queries

paper link: here

citation:

@inproceedings{gu2023vip3d,
  title={ViP3D: End-to-end visual trajectory prediction via 3D agent queries},
  author={Gu, Junru and Hu, Chenxu and Zhang, Tianyuan and Chen, Xuanyao and Wang, Yilun and Wang, Yue and Zhao, Hang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5496--5506},
  year={2023}
}

Self-Supervised Traffic Advisors: Distributed, Multi-view Traffic Prediction for Smart Cities

paper link: here

citation:

@misc{sun2022selfsupervised,
      title={Self-Supervised Traffic Advisors: Distributed, Multi-view Traffic Prediction for Smart Cities}, 
      author={Jiankai Sun and Shreyas Kousik and David Fridovich-Keil and Mac Schwager},
      year={2022},
      eprint={2204.06171},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}

SGCN: Sparse graph convolution network for pedestrian trajectory prediction

paper link: here

citation:

@inproceedings{shi2021sgcn,
  title={SGCN: Sparse graph convolution network for pedestrian trajectory prediction},
  author={Shi, Liushuai and Wang, Le and Long, Chengjiang and Zhou, Sanping and Zhou, Mo and Niu, Zhenxing and Hua, Gang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={8994--9003},
  year={2021}
}

Shared cross-modal trajectory prediction for autonomous driving

paper link: here

citation:

@inproceedings{choi2021shared,
  title={Shared cross-modal trajectory prediction for autonomous driving},
  author={Choi, Chiho and Choi, Joon Hee and Li, Jiachen and Malla, Srikanth},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={244--253},
  year={2021}
}

Divide-and-conquer for lane-aware diverse trajectory prediction

paper link: here

citation:

@inproceedings{narayanan2021divide,
  title={Divide-and-conquer for lane-aware diverse trajectory prediction},
  author={Narayanan, Sriram and Moslemi, Ramin and Pittaluga, Francesco and Liu, Buyu and Chandraker, Manmohan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={15799--15808},
  year={2021}
}

Pedestrian and ego-vehicle trajectory prediction from monocular camera

paper link: here

citation:

@inproceedings{neumann2021pedestrian,
  title={Pedestrian and ego-vehicle trajectory prediction from monocular camera},
  author={Neumann, Lukas and Vedaldi, Andrea},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={10204--10212},
  year={2021}
}

Safe local motion planning with self-supervised freespace forecasting

paper link: here

citation:

@inproceedings{hu2021safe,
  title={Safe local motion planning with self-supervised freespace forecasting},
  author={Hu, Peiyun and Huang, Aaron and Dolan, John and Held, David and Ramanan, Deva},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={12732--12741},
  year={2021}
}

LookOut: Diverse multi-future prediction and planning for self-driving

paper link: here

citation:

@inproceedings{cui2021lookout,
  title={LookOut: Diverse multi-future prediction and planning for self-driving},
  author={Cui, Alexander and Casas, Sergio and Sadat, Abbas and Liao, Renjie and Urtasun, Raquel},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={16107--16116},
  year={2021}
}

Egocentric Human Trajectory Forecasting with a Wearable Camera and Multi-Modal Fusion

paper link: here

citation:

@article{qiu2022egocentric,
  title={Egocentric Human Trajectory Forecasting With a Wearable Camera and Multi-Modal Fusion},
  author={Qiu, Jianing and Chen, Lipeng and Gu, Xiao and Lo, Frank P.-W. and Tsai, Ya-Yen and Sun, Jiankai and Liu, Jiaqi and Lo, Benny},
  journal={IEEE Robotics and Automation Letters},
  volume={7},
  number={4},
  pages={8799--8806},
  year={2022},
  doi={10.1109/LRA.2022.3188101}
}

PnPNet: End-to-end perception and prediction with tracking in the loop

paper link: here

citation:

@inproceedings{liang2020pnpnet,
  title={PnPNet: End-to-end perception and prediction with tracking in the loop},
  author={Liang, Ming and Yang, Bin and Zeng, Wenyuan and Chen, Yun and Hu, Rui and Casas, Sergio and Urtasun, Raquel},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11553--11562},
  year={2020}
}

Motion estimation for self-driving cars with a generalized camera

paper link: here

citation:

@inproceedings{hee2013motion,
  title={Motion estimation for self-driving cars with a generalized camera},
  author={Hee Lee, Gim and Fraundorfer, Friedrich and Pollefeys, Marc},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2746--2753},
  year={2013}
}