
VQA²: Visual Question Answering for Video Quality Assessment

[ACMMM2025] Official code and datasets for the VQA² series of models

Built upon LLaVA-Onevision

1Shanghai Jiao Tong University, 2Nanyang Technological University
*Equal contribution. #Corresponding author.

Release News

🔖 TODO:

  • 🎯[√] Release testing and training code.
  • 🎯[√] Release model weights.
  • 🎯[√] Release the stage-2 instruction dataset.
  • 🎯[√] Release the stage-3 instruction dataset.
  • 🎯[√] Release the training code based on Qwen2.5-VL.

Quick Start:

Install dependencies:

cd llava_finetune
conda create -n VQA python=3.10 -y
conda activate VQA
pip install --upgrade pip
pip install -e ".[train]"
pip install pytorchvideo
pip install transformers==4.44.0 

Fix [2024.12.20]: Please download the initialized slowfast.pth (https://huggingface.co/JZHWS/slowfast) and load the pretrained weights in "llava/model/slowfast/builder.py" (line 11). This is required because the model downloaded through pytorchvideo contains meta tensors, which would otherwise make model initialization fail.
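The fix above boils down to loading a real CPU checkpoint so that no parameter remains a meta tensor. A minimal sketch of that pattern (the `nn.Linear` here is only a placeholder for the actual SlowFast backbone, and the file path is simulated, not the real slowfast.pth):

```python
import os
import tempfile

import torch
from torch import nn

# Toy stand-in for the SlowFast backbone; the real class lives in
# llava/model/slowfast/builder.py.
model = nn.Linear(4, 2)

# Simulate the downloaded slowfast.pth checkpoint.
ckpt = os.path.join(tempfile.mkdtemp(), "slowfast.pth")
torch.save(model.state_dict(), ckpt)

# Load on CPU so every parameter is materialized as a real tensor,
# not a meta tensor.
state = torch.load(ckpt, map_location="cpu")
missing, unexpected = model.load_state_dict(state, strict=True)
assert not missing and not unexpected
```

With `strict=True`, `load_state_dict` raises if any key is absent from the checkpoint, which makes a partially initialized (meta-tensor) model fail loudly rather than silently.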

VQA² Scorers:

cd quality_scoring

python ./llava/eval/model_score_video.py (for video)

python ./llava/eval/model_score_image.py (for image)
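LMM-based scorers of this kind commonly collapse the model's predicted probabilities over rating words into one scalar via a weighted softmax (the Q-Align-style recipe). A minimal sketch of that step; the five rating levels and their weights below are illustrative assumptions, not necessarily the exact ones used by these scripts:

```python
import math

# Hypothetical rating levels and numeric weights (illustrative only).
LEVELS = ["bad", "poor", "fair", "good", "excellent"]
WEIGHTS = [1.0, 2.0, 3.0, 4.0, 5.0]

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def quality_score(level_logits):
    """Collapse per-level logits into one scalar score in [1, 5]."""
    probs = softmax(level_logits)
    return sum(p * w for p, w in zip(probs, WEIGHTS))

# Example: logits that favor the "good" level.
print(quality_score([-2.0, -1.0, 0.5, 2.0, 0.0]))
```

The score is a probability-weighted average of the level weights, so it stays within [1, 5] and varies smoothly with the model's confidence rather than snapping to the argmax level.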

VQA² Assistant:

For Q-Bench-Video Evaluation:

cd quality_interpreting
python ./llava/eval/model_vqa_q_bench_video.py

For Image Evaluation:

cd quality_interpreting
python ./llava/eval/model_vqa_image.py

Gradio demo:

python ./app.py  # Note: requires at least one 24 GB GPU (e.g., a single RTX 3090).

Training

cd llava_finetune
chmod +x ./finetune_onevision.sh

Then run ./finetune_onevision.sh directly.
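LLaVA-style finetuning scripts typically wrap a torchrun launch like the fragment below. Every script name, path, and hyperparameter here is a placeholder; consult finetune_onevision.sh for the real values.

```shell
# Hypothetical skeleton of finetune_onevision.sh (values are placeholders).
torchrun --nproc_per_node=8 llava/train/train_mem.py \
    --model_name_or_path <base-onevision-checkpoint> \
    --data_path <stage2-or-stage3-instruction-json> \
    --bf16 True \
    --output_dir ./checkpoints/vqa2 \
    --num_train_epochs 1 \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 4
```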

Training Dataset

Stage-2-streaming (2.1K): https://huggingface.co/datasets/q-future/VQA-stage2-streaming (q-future/VQA-stage2-streaming)

Stage-3 (14.3K mix/11.6K only): https://huggingface.co/datasets/q-future/VQA-stage3 (q-future/VQA-stage3)

Note: the Stage-2-UGC data is included in the Stage-3-mix split at https://huggingface.co/datasets/q-future/VQA-stage3.
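Instruction datasets built on LLaVA usually store each sample as a JSON record in the LLaVA conversation format. A sketch of reading one such record with the standard library; the field names (`video`, `conversations`, `from`, `value`) follow the common LLaVA convention and are an assumption here, so check the actual dataset schema:

```python
import json

# Hypothetical entry in the LLaVA-style instruction format; the actual
# schema of VQA-stage2/VQA-stage3 may differ.
entry = json.loads("""
{
  "id": "demo-0001",
  "video": "videos/demo.mp4",
  "conversations": [
    {"from": "human", "value": "<video>\\nHow is the overall visual quality?"},
    {"from": "gpt", "value": "The video shows mild compression artifacts..."}
  ]
}
""")

# The first turn carries the media token plus the question;
# the second turn is the target answer.
question = entry["conversations"][0]["value"]
answer = entry["conversations"][1]["value"]
print(question.splitlines()[-1])
```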

Model Zoo

We currently provide Hugging Face weights for VQA²-UGC-Scorer (7B), VQA²-Streaming-Scorer (7B), and VQA²-Assistant (7B); more versions will be released later.

HF-PATH:

VQA²-UGC-Scorer(7B): https://huggingface.co/q-future/VQA-UGC-Scorer-llava_qwen (q-future/VQA-UGC-Scorer-llava_qwen)

VQA²-Streaming-Scorer(7B): https://huggingface.co/q-future/VQA-Streaming-Scorer-llava_qwen (q-future/VQA-Streaming-Scorer-llava_qwen)

VQA²-Assistant(7B): https://huggingface.co/q-future/VQA-Assistant-llava_qwen (q-future/VQA-Assistant-llava_qwen)

VQA²-Assistant(7B)-enhanced (for video and images): https://huggingface.co/q-future/VQA-Assistant-llava-qwen-enhanced (q-future/VQA-Assistant-llava-qwen-enhanced)

Citation

If you find this work useful, please consider citing it:

@article{jia2024vqa,
  title={VQA$^2$: Visual Question Answering for Video Quality Assessment},
  author={Jia, Ziheng and Zhang, Zicheng and Qian, Jiaying and Wu, Haoning and Sun, Wei and Li, Chunyi and Liu, Xiaohong and Lin, Weisi and Zhai, Guangtao and Min, Xiongkuo},
  journal={arXiv preprint arXiv:2411.03795},
  year={2024}
}
@article{zhang2024q,
  title={Q-Bench-Video: Benchmarking the Video Quality Understanding of LMMs},
  author={Zhang, Zicheng and Jia, Ziheng and Wu, Haoning and Li, Chunyi and Chen, Zijian and Zhou, Yingjie and Sun, Wei and Liu, Xiaohong and Min, Xiongkuo and Lin, Weisi and others},
  journal={CVPR 2025},
  year={2024}
}
