
Question regarding stage 4 HD image size #244

Open
jpan72 opened this issue Oct 17, 2024 · 2 comments

jpan72 commented Oct 17, 2024

Hello,

Thank you for the great work!

For stage 4 (instruction tuning with HD data), the current code seems to resize/crop images to 224x224:
https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/scripts/videochat_mistral/config_7b_hd_stage4.py#L21
https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/dataset/__init__.py#L73

which means training actually uses 224x224 frames. Is that true? If so, what does the "HD" refer to? Or did I miss something?

Thank you!

yinanhe (Member) commented Oct 17, 2024

224 is the input resolution of our vision encoder. For HD, you can refer to the dynamic resolution setting:

dynamic_config=dict(
local_size=224,
hd_num=6,
padding=False,
add_global=True,
),
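
To make the mechanism concrete, here is a rough sketch of how such a dynamic-resolution split can be implemented (the function below is illustrative and written against PIL; it is not the repo's actual hd_utils code, and the grid-selection heuristic is an assumption):

from PIL import Image

def hd_split(img, local_size=224, hd_num=6, add_global=True):
    # Illustrative sketch: pick a (cols x rows) grid with cols*rows <= hd_num
    # whose shape best matches the frame's aspect ratio.
    w, h = img.size
    ratio = w / h
    best = (1, 1)
    for cols in range(1, hd_num + 1):
        for rows in range(1, hd_num // cols + 1):
            if abs(cols / rows - ratio) < abs(best[0] / best[1] - ratio):
                best = (cols, rows)
    cols, rows = best
    # Resize so both sides are multiples of local_size, then crop the grid.
    resized = img.resize((cols * local_size, rows * local_size))
    tiles = [
        resized.crop((c * local_size, r * local_size,
                      (c + 1) * local_size, (r + 1) * local_size))
        for r in range(rows) for c in range(cols)
    ]
    if add_global:
        # add_global=True appends a thumbnail of the whole frame.
        tiles.append(img.resize((local_size, local_size)))
    return tiles  # every tile matches the encoder's 224x224 input

Each tile (plus the optional global view) is encoded independently at 224x224, which is how a higher-resolution frame is consumed by a 224-input encoder.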

jpan72 (Author) commented Nov 7, 2024

Thank you for the swift response! I see how the dynamic resolution setting works for HD training.

One follow-up question: I saw that the blocks are not used in the VideoChat2 HD training code.
https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/dataset/hd_utils.py#L93

However, in the InternVL code that the VideoChat2 code refers to, the blocks are used to generate (local_size x local_size) sub-images:
https://github.com/OpenGVLab/InternVL/blob/2d93b099ffbbf45d1db59710914f26fce4494104/README.md?plain=1#L752-L771
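
For reference, the block loop in that InternVL snippet produces the sub-images roughly like this (a paraphrased sketch, not the verbatim InternVL code; names are illustrative):

from PIL import Image

def crop_blocks(resized_img, cols, rows, local_size=224):
    # resized_img is assumed to already be (cols*local_size) x (rows*local_size).
    sub_images = []
    for i in range(cols * rows):  # one crop box per block
        box = ((i % cols) * local_size,
               (i // cols) * local_size,
               ((i % cols) + 1) * local_size,
               ((i // cols) + 1) * local_size)
        sub_images.append(resized_img.crop(box))
    return sub_images  # each sub-image is local_size x local_size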

Does that mean VideoChat2 HD training doesn't use the sub-images and instead uses the resized images? If so, how does that work with the vision encoder's 224x224 input?

Thank you!
