
# Text to Video and Video to Text

Here are some resources about Text-to-Video and Video-to-Text modeling, understanding, and generation in Multi-Modal LLMs.

## Method

### Movie Gen: A Cast of Media Foundation Models

tag: Movie Gen | Meta

paper link: [here](https://arxiv.org/abs/2410.13720)

blog link: here

citation:

```bibtex
@misc{polyak2024moviegencastmedia,
      title={Movie Gen: A Cast of Media Foundation Models}, 
      author={Adam Polyak and Amit Zohar and Andrew Brown and Andros Tjandra and Animesh Sinha and Ann Lee and Apoorv Vyas and Bowen Shi and Chih-Yao Ma and Ching-Yao Chuang and David Yan and Dhruv Choudhary and Dingkang Wang and Geet Sethi and Guan Pang and Haoyu Ma and Ishan Misra and Ji Hou and Jialiang Wang and Kiran Jagadeesh and Kunpeng Li and Luxin Zhang and Mannat Singh and Mary Williamson and Matt Le and Matthew Yu and Mitesh Kumar Singh and Peizhao Zhang and Peter Vajda and Quentin Duval and Rohit Girdhar and Roshan Sumbaly and Sai Saketh Rambhatla and Sam Tsai and Samaneh Azadi and Samyak Datta and Sanyuan Chen and Sean Bell and Sharadh Ramaswamy and Shelly Sheynin and Siddharth Bhattacharya and Simran Motwani and Tao Xu and Tianhe Li and Tingbo Hou and Wei-Ning Hsu and Xi Yin and Xiaoliang Dai and Yaniv Taigman and Yaqiao Luo and Yen-Cheng Liu and Yi-Chiao Wu and Yue Zhao and Yuval Kirstain and Zecheng He and Zijian He and Albert Pumarola and Ali Thabet and Artsiom Sanakoyeu and Arun Mallya and Baishan Guo and Boris Araya and Breena Kerr and Carleigh Wood and Ce Liu and Cen Peng and Dimitry Vengertsev and Edgar Schonfeld and Elliot Blanchard and Felix Juefei-Xu and Fraylie Nord and Jeff Liang and John Hoffman and Jonas Kohler and Kaolin Fire and Karthik Sivakumar and Lawrence Chen and Licheng Yu and Luya Gao and Markos Georgopoulos and Rashel Moritz and Sara K. Sampson and Shikai Li and Simone Parmeggiani and Steve Fine and Tara Fowler and Vladan Petrovic and Yuming Du},
      year={2024},
      eprint={2410.13720},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2410.13720}, 
}
```

### Hallo2: Long-Duration and High-Resolution Audio-Driven Portrait Image Animation

tag: Hallo2 | Fudan University

paper link: [here](https://arxiv.org/abs/2410.07718)

github link: here

citation:

```bibtex
@misc{cui2024hallo2longdurationhighresolutionaudiodriven,
      title={Hallo2: Long-Duration and High-Resolution Audio-Driven Portrait Image Animation}, 
      author={Jiahao Cui and Hui Li and Yao Yao and Hao Zhu and Hanlin Shang and Kaihui Cheng and Hang Zhou and Siyu Zhu and Jingdong Wang},
      year={2024},
      eprint={2410.07718},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2410.07718}, 
}
```

### mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models

tag: mPLUG-Owl3 | Alibaba Group

paper link: [here](https://arxiv.org/abs/2408.04840)

github link: here

citation:

```bibtex
@misc{ye2024mplugowl3longimagesequenceunderstanding,
      title={mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models}, 
      author={Jiabo Ye and Haiyang Xu and Haowei Liu and Anwen Hu and Ming Yan and Qi Qian and Ji Zhang and Fei Huang and Jingren Zhou},
      year={2024},
      eprint={2408.04840},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.04840}, 
}
```
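
Since mPLUG-Owl3 treats a video as a long image sequence, video understanding with such models typically begins by sampling a fixed number of frames. Below is a minimal sketch of that uniform-sampling step using OpenCV; the frame budget `n_frames` and the downstream model call are illustrative assumptions, not the paper's exact pipeline.

```python
import cv2  # pip install opencv-python

def sample_frames(video_path: str, n_frames: int = 16):
    """Uniformly sample n_frames RGB frames from a video file."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if total <= 0:
        cap.release()
        raise ValueError(f"could not read frames from {video_path}")
    # Evenly spaced frame indices spanning the whole clip.
    indices = [round(i * (total - 1) / max(n_frames - 1, 1)) for i in range(n_frames)]
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            # OpenCV decodes to BGR; most vision models expect RGB.
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

# frames = sample_frames("clip.mp4", n_frames=16)
# The sampled frames would then be fed to the multi-modal LLM as an image sequence.
```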

### Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation

tag: Hallo | Fudan University

paper link: [here](https://arxiv.org/abs/2406.08801)

github link: here

homepage link: here

follow-up work: here

citation:

```bibtex
@misc{xu2024hallohierarchicalaudiodrivenvisual,
      title={Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation}, 
      author={Mingwang Xu and Hui Li and Qingkun Su and Hanlin Shang and Liwei Zhang and Ce Liu and Jingdong Wang and Yao Yao and Siyu Zhu},
      year={2024},
      eprint={2406.08801},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.08801}, 
}
```

## Benchmark

### Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis

tag: Video-MME | USTC

paper link: [here](https://arxiv.org/abs/2405.21075)

github link: here

homepage link: here

dataset link: here

citation:

```bibtex
@misc{fu2024videommefirstevercomprehensiveevaluation,
      title={Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis}, 
      author={Chaoyou Fu and Yuhan Dai and Yongdong Luo and Lei Li and Shuhuai Ren and Renrui Zhang and Zihan Wang and Chenyu Zhou and Yunhang Shen and Mengdan Zhang and Peixian Chen and Yanwei Li and Shaohui Lin and Sirui Zhao and Ke Li and Tong Xu and Xiawu Zheng and Enhong Chen and Rongrong Ji and Xing Sun},
      year={2024},
      eprint={2405.21075},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2405.21075}, 
}
```
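
Video-MME scores models on multiple-choice questions over videos of varying lengths. The sketch below shows the shape of the evaluation loop, assuming the benchmark can be loaded through the Hugging Face `datasets` library; the dataset ID and the field names (`question`, `options`, `answer`) are assumptions about the released data, and `predict` is a placeholder for a real multi-modal LLM call.

```python
from datasets import load_dataset  # pip install datasets

# Hypothetical hosting location; see the official dataset link above.
ds = load_dataset("lmms-lab/Video-MME", split="test")

def predict(question: str, options: list[str]) -> str:
    """Placeholder for a real model call; should return a letter like 'A'."""
    return "A"  # trivial baseline that always answers 'A'

correct = 0
for row in ds:
    pred = predict(row["question"], row["options"])
    correct += int(pred == row["answer"])

print(f"accuracy: {correct / len(ds):.3f}")
```

A real evaluation would also pass the sampled video frames (and, for the with-subtitles track, the subtitle text) into `predict`.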