
add qwen2-vl support #12405

Open · wants to merge 2 commits into base: main

Conversation

@Agoniii (Contributor) commented on Feb 27, 2025

Important

The Update branch button must only be pressed on very rare occasions.
An outdated branch is never blocking the merge of a PR.
Please reach out to the automation team before pressing that button.

What does this PR do?

Add Qwen2-VL support

Needs to be used with the mrope (multimodal rotary position embedding, M-RoPE) support in Mcore.
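For reference, here is a conceptual sketch of what M-RoPE position ids look like compared to ordinary 1-D RoPE positions. This is illustrative only and not the Mcore implementation; the function names and offset handling are assumptions.

```python
# Conceptual M-RoPE sketch (illustration only, not the Mcore implementation).
# Text tokens share one sequential index across all three axes; vision tokens
# enumerate the (temporal, height, width) patch grid of the image/video.
import torch


def mrope_text_positions(start: int, length: int) -> torch.Tensor:
    """Text tokens: the same position id on all three axes -> shape (3, length)."""
    pos = torch.arange(start, start + length)
    return pos.unsqueeze(0).expand(3, -1)


def mrope_image_positions(start: int, t: int, h: int, w: int) -> torch.Tensor:
    """Vision tokens: enumerate the patch grid -> shape (3, t * h * w)."""
    tt = torch.arange(t).view(t, 1, 1).expand(t, h, w).reshape(-1)
    hh = torch.arange(h).view(1, h, 1).expand(t, h, w).reshape(-1)
    ww = torch.arange(w).view(1, 1, w).expand(t, h, w).reshape(-1)
    return torch.stack([tt, hh, ww]) + start  # offset by the preceding text length
```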

Collection: [Note which collection this PR will affect]

Changelog

  • Add specific line by line info of high level changes in this PR.

Usage

  • Generate
python scripts/vlm/qwen2vl_generate.py --load_from_hf
  • Finetune
    mock data:
torchrun --nproc_per_node 2 scripts/vlm/qwen2vl_finetune.py --gbs=1 --mbs=1 --max_steps=10 --tp_size 2 --devices 2 --log_dir ./outputs

    real data:

torchrun --nproc_per_node=4 scripts/vlm/qwen2vl_finetune.py \
     --num_nodes=1 --devices=4  --tp_size=4  \
     --gbs=1 --mbs=1  \
     --max_steps=1000 \
     --restore_path=$RESTORE_PATH \
     --projector_type="mcore_mlp" \
     --data_type="qwen2vl" \
     --data_path=$DATA_PATH \
     --image_folder=$IMAGE_FOLDER \
     --log_dir="/workspace/logs"

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you have read and followed the Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.

Additional Information

  • Related to # (issue)

Signed-off-by: Xue Huang <[email protected]>
@@ -0,0 +1,9 @@
from pathlib import Path
@yaoyu-33 (Collaborator) commented on Feb 27, 2025

It doesn't need to be a standalone script. You can put this in the docs, and you can also put it in the comments of the finetune / generate script. Try to keep the folder clean.
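For illustration, the standalone instructions could instead live in the generate script's module docstring (a sketch, not the author's file):

```python
# Hypothetical sketch: fold the usage notes into the existing generate script
# as a module docstring instead of keeping a separate standalone file.
"""Qwen2-VL generation example.

Usage:
    python scripts/vlm/qwen2vl_generate.py --load_from_hf
"""
```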

from nemo.lightning.pytorch.optim import MegatronOptimizerModule, OptimizerModule
from nemo.utils import logging

MODEL_CONFIG_ATTR = [
Collaborator

Change this to import from neva, please.
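A minimal sketch of the suggested change; the exact import path is an assumption and should be checked against where MODEL_CONFIG_ATTR actually lives in the neva collection:

```python
# Assumed import path; verify the actual location of MODEL_CONFIG_ATTR in neva.
from nemo.collections.vlm.neva.model.base import MODEL_CONFIG_ATTR
```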



@dataclass
class Qwen2VLProjectorConfig(TransformerConfig, io.IOMixin):
Collaborator

Why can't this re-use our current vlm.MultimodalProjectorConfig?
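For comparison, a sketch of reusing the existing projector config instead of introducing Qwen2VLProjectorConfig; the field names and sizes below are illustrative assumptions, not taken from this PR:

```python
# Illustrative reuse of the existing NeMo projector config; the field names
# and example sizes are assumptions, not the PR's actual configuration.
from nemo.collections import vlm

projector_config = vlm.MultimodalProjectorConfig(
    projector_type="mcore_mlp",
    input_size=1280,    # vision hidden size (example value)
    hidden_size=3584,   # language hidden size (example value)
    ffn_hidden_size=3584,
)
```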

self._language_max_sequence_length = self.language_model.max_sequence_length
self._language_is_pipeline_parallel = language_transformer_config.pipeline_model_parallel_size > 1
if config.language_model_from_pretrained is not None:
sharded_state_dict = dict(state_dict=self.language_model.sharded_state_dict(prefix="module."))
Collaborator

Can we switch to neva/base.py's restore_model_weights func?
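A sketch of the suggested reuse; the function is the one the reviewer names, but the signature shown here is an assumption to be checked against neva/base.py:

```python
# Assumed signature; check neva/base.py for the actual restore_model_weights API.
from nemo.collections.vlm.neva.model.base import restore_model_weights

if config.language_model_from_pretrained is not None:
    restore_model_weights(self.language_model, config.language_model_from_pretrained)
```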

# This attribute is needed to check if an all-reduce is required
# on the word embeddings inside `finalize_model_grads._allreduce_word_embedding_grads`.

self.vision_model_from_hf = str(self.vision_model.__class__.__module__).startswith("transformers.")
Collaborator

Do you want to support a different HF ViT as the vision encoder?
If this is not really used, just remove it.

return output, final_loss_mask.contiguous()

# override _preprocess_data() in megatron-lm/megatron/core/models/multimodal/llava_model.py
def _preprocess_data(
Collaborator

We recently updated this func in neva a bit. Can you double-check whether anything needs to be updated?

past_seen_tokens, past_seen_tokens + combined_embeddings.shape[0], device=combined_embeddings.device
)

final_attention_mask = self._update_causal_mask(
Collaborator

Are we using a special mask here, i.e. not a triangular causal mask?
This mask won't do anything in TE right now.
Our mask type defaults to causal, so it will always be causal and ignore the mask inputted here.
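To make the point concrete: with Transformer Engine the effective mask comes from the configured mask type, not from the tensor passed at runtime. A minimal sketch (the enum is from megatron-core; the exact spec wiring is an assumption):

```python
# With TE, mask behaviour is fixed by the configured attn_mask_type; when
# self-attention is built as causal, the runtime final_attention_mask tensor
# is effectively ignored.
from megatron.core.transformer.enums import AttnMaskType

# Illustrative: the attention params that decide the mask type in the layer spec.
causal_self_attention_params = {"attn_mask_type": AttnMaskType.causal}
```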

def forward(
self, x: torch.Tensor, grid_thw: torch.Tensor, attention_mask: Optional[torch.Tensor] = None
) -> torch.Tensor:
"""Forward function of the CLIP ViT Model. This function passes the input tensors
Collaborator

Double-check the docstrings? They don't seem to match the code.
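For example, a docstring aligned with the signature might read as follows (a suggestion only, not the author's text):

```python
def forward(
    self, x: torch.Tensor, grid_thw: torch.Tensor, attention_mask: Optional[torch.Tensor] = None
) -> torch.Tensor:
    """Forward pass of the Qwen2-VL vision transformer.

    Args:
        x: Flattened image/video patch embeddings.
        grid_thw: Per-sample (temporal, height, width) patch-grid sizes.
        attention_mask: Optional mask over the patch sequence.

    Returns:
        Encoded vision features to be passed to the projector.
    """
    ...
```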
