add qwen2-vl support #12405
base: main
Conversation
Signed-off-by: Xue Huang <[email protected]>
Signed-off-by: Agoniii <[email protected]>
@@ -0,0 +1,9 @@
from pathlib import Path
It doesn't need to be a standalone script. You can put this in the docs, and you can also put it in the comments of the finetune / generate scripts. Try to keep the folder clean.
from nemo.lightning.pytorch.optim import MegatronOptimizerModule, OptimizerModule
from nemo.utils import logging

MODEL_CONFIG_ATTR = [
Please change this to import from neva.
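A minimal sketch of what the suggested change could look like, assuming MODEL_CONFIG_ATTR is already defined in the neva base module (the exact import path below is an assumption):

```python
# Hypothetical sketch: re-use the existing list from neva instead of redefining it here.
# The module path is an assumption about where neva keeps MODEL_CONFIG_ATTR.
from nemo.collections.vlm.neva.model.base import MODEL_CONFIG_ATTR
```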
@dataclass
class Qwen2VLProjectorConfig(TransformerConfig, io.IOMixin):
Why can't this re-use our current vlm.MultimodalProjectorConfig?
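For illustration, a hedged sketch of configuring the existing projector config instead of introducing a new dataclass; the field names and values below are assumptions for Qwen2-VL, not taken from this PR:

```python
# Illustrative only: if vlm.MultimodalProjectorConfig covers the Qwen2-VL projector,
# it could be configured directly. Field names/values here are assumptions.
from nemo.collections import vlm

projector_config = vlm.MultimodalProjectorConfig(
    projector_type="mlp2x_gelu",  # assumed projector type
    input_size=1280,              # assumed vision hidden size
    hidden_size=3584,             # assumed LLM hidden size
    ffn_hidden_size=3584,         # assumed projector FFN size
)
```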
self._language_max_sequence_length = self.language_model.max_sequence_length
self._language_is_pipeline_parallel = language_transformer_config.pipeline_model_parallel_size > 1
if config.language_model_from_pretrained is not None:
    sharded_state_dict = dict(state_dict=self.language_model.sharded_state_dict(prefix="module."))
Can we switch to neva/base.py's restore_model_weights func?
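A rough sketch of the suggested refactor, assuming the shared helper in neva/base.py takes the module and the pretrained path (the import path and signature are assumptions):

```python
# Hypothetical sketch: delegate weight loading to the shared neva helper instead of
# building the sharded_state_dict by hand here. Path and signature are assumptions.
from nemo.collections.vlm.neva.model.base import restore_model_weights

if config.language_model_from_pretrained is not None:
    restore_model_weights(self.language_model, config.language_model_from_pretrained)
```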
# This attribute is needed to check if an all-reduce is required
# on the word embeddings inside `finalize_model_grads._allreduce_word_embedding_grads`.

self.vision_model_from_hf = str(self.vision_model.__class__.__module__).startswith("transformers.")
Do you want to support different HF ViTs as the vision encoder? If this is not really used, just remove it.
return output, final_loss_mask.contiguous()

# override _preprocess_data() in megatron-lm/megatron/core/models/multimodal/llava_model.py
def _preprocess_data(
We recently updated this func in neva a bit. Can you double-check whether anything here needs to be updated?
    past_seen_tokens, past_seen_tokens + combined_embeddings.shape[0], device=combined_embeddings.device
)

final_attention_mask = self._update_causal_mask(
Are we using a special mask here, i.e. not a triangular causal mask? This mask won't do anything in TE right now: our mask type defaults to causal, so attention will always be causal and ignore the mask passed in here.
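For context, a small sketch of what this comment describes: with TE fused attention the behaviour is governed by the configured attn_mask_type, so a causal setting applies its own triangular mask and the mask tensor passed at forward time is ignored. The names below follow megatron-core conventions and are illustrative, not code from this PR:

```python
# Illustrative only: with attn_mask_type=causal, TE builds its own triangular mask
# and ignores the `attention_mask` tensor passed into forward(). Honouring a custom
# mask would require a non-causal mask type on the attention submodule spec.
from megatron.core.transformer.enums import AttnMaskType

causal_attn_params = {"attn_mask_type": AttnMaskType.causal}      # current default: forward mask ignored
custom_mask_params = {"attn_mask_type": AttnMaskType.arbitrary}   # needed to respect a custom mask
```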
def forward(
    self, x: torch.Tensor, grid_thw: torch.Tensor, attention_mask: Optional[torch.Tensor] = None
) -> torch.Tensor:
    """Forward function of the CLIP ViT Model. This function passes the input tensors
Double-check the docstrings? They don't seem to match the code.
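As an illustration of the fix being asked for, a hedged docstring sketch that matches the signature shown above; the wording is a suggestion, not text from the PR:

```python
def forward(
    self, x: torch.Tensor, grid_thw: torch.Tensor, attention_mask: Optional[torch.Tensor] = None
) -> torch.Tensor:
    """Forward function of the Qwen2-VL vision transformer (suggested wording).

    Args:
        x: Flattened patch embeddings of the input images / videos.
        grid_thw: Per-sample (temporal, height, width) grid sizes used to build the
            rotary position embeddings for the vision tokens.
        attention_mask: Optional attention mask (see the mask discussion above).

    Returns:
        torch.Tensor: Encoded vision features to be projected into the LLM embedding space.
    """
    ...
```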
Important
The Update branch button must only be pressed on very rare occasions. An outdated branch never blocks the merge of a PR.
Please reach out to the automation team before pressing that button.
What does this PR do?
Add Qwen2-VL support
Needs to be used with the mRoPE support in Mcore.
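For reference, the mRoPE dependency can be seen in the HF Qwen2-VL config, which declares multimodal rotary embeddings via rope_scaling; a small hedged sketch of inspecting it (the checkpoint name is just an example):

```python
# Illustrative only: Qwen2-VL checkpoints on Hugging Face declare mRoPE via rope_scaling,
# which is the rotary variant the Mcore dependency has to provide.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")  # example checkpoint
print(cfg.rope_scaling)  # expected to contain {"type": "mrope", "mrope_section": [...]}
```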
Collection: [Note which collection this PR will affect]
Changelog
Usage
mock data:
real data:
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines contain specific people who can review PRs to various areas.
Additional Information