
[Model] Ultravox: Support Llama 4 and Gemma 3 backends #17818


Open
farzadab wants to merge 2 commits into main from farzad-audiotoken-gemma3llama4

Conversation

@farzadab (Contributor) commented May 7, 2025

This is a simplified version of my older PR that was approved by @DarkLight1337 but ended up not working on some backends: https://github.com/vllm-project/vllm/pull/15728/files
This new PR allows Ultravox to support Gemma 3 and Llama 4 backends.

On the Ultravox side, I've made sure that all tokenizers have a new <|audio|> token to allow for better tracking of audio placeholder tokens. This token exists only in the tokenizer, not in the embedding layer. As such, I intercept the input_ids before the embedding lookup and apply safe_input_ids to them first.
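
For context, here is a minimal sketch of the pattern described above (not the PR's actual code: the helper name safe_input_ids comes from this description, while the masking rule, fill value, and attribute paths are assumptions):

import torch

def safe_input_ids(input_ids: torch.Tensor, vocab_size: int,
                   fill_value: int = 0) -> torch.Tensor:
    # Replace ids that the tokenizer knows about (e.g. the new <|audio|> token)
    # but the embedding table does not, so the embedding lookup stays in range.
    # The masked positions are overwritten by the audio embeddings afterwards.
    return input_ids.masked_fill(input_ids >= vocab_size, fill_value)

# Inside get_input_embeddings, roughly:
#   inputs_embeds = self.language_model.get_input_embeddings(
#       safe_input_ids(input_ids, self.language_model.config.vocab_size))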

When using V0, Ultravox has been verified (with an earlier version of this PR) to work on the following backends: Llama 3, Gemma 3, and Llama 4.

V0 seems to work, as verified by evals. I've seen issues with V1 on an earlier version of vLLM, but I'm not sure whether that was due to Ultravox or a vLLM V1 bug.

github-actions bot commented May 7, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@DarkLight1337 (Member)

What issue are you getting on V1?

@@ -558,7 +557,12 @@ def get_input_embeddings(
        input_ids: torch.Tensor,
        multimodal_embeddings: Optional[MultiModalEmbeddings] = None,
    ) -> torch.Tensor:
        inputs_embeds = self.language_model.get_input_embeddings(input_ids)
@farzadab (Contributor, Author) commented May 8, 2025

@DarkLight1337 When using V1, I noticed that the output was completely garbled.

After debugging, I noticed that when I printed the input_ids here for the same sample (conditioned on len(input_ids) > 1 to skip decode-step calls), this is what I got:

# with VLLM_USE_V1=0
>>> t.decode([200000, 200005, 15651, 200006, 368, 4662, 583, 262, 19933, 43910, 26, 200008, 200005, 1556, 200006, 368, 4984, 290, 2182, 4097, 38, 7283, 201133, 200008, 200005, 140680, 200006, 368])
'<|begin_of_text|><|header_start|>system<|header_end|>\n\nYou are a helpful assistant.<|eot|><|header_start|>user<|header_end|>\n\nAnswer the following question: \n\n<|vision_reserved_special_token_1047|><|eot|><|header_start|>assistant<|header_end|>\n\n'

# with VLLM_USE_V1=1
>>> t.decode([24, 4984, 290, 2182, 4097, 38, 7283, 201133, 200008, 200005, 140680, 200006, 368])
',Answer the following question: \n\n<|vision_reserved_special_token_1047|><|eot|><|header_start|>assistant<|header_end|>\n\n'

The input_ids in the V1 case seemed to be missing part of the beginning.
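
For the record, a quick sanity check on the two id lists above (not part of the PR, just restating the observation) shows that the V1 ids are the tail of the V0 ids, minus the stray leading token:

v0_ids = [200000, 200005, 15651, 200006, 368, 4662, 583, 262, 19933, 43910, 26,
          200008, 200005, 1556, 200006, 368, 4984, 290, 2182, 4097, 38, 7283,
          201133, 200008, 200005, 140680, 200006, 368]   # VLLM_USE_V1=0
v1_ids = [24, 4984, 290, 2182, 4097, 38, 7283, 201133, 200008, 200005, 140680,
          200006, 368]                                   # VLLM_USE_V1=1

# Everything after the leading `24` (",") in the V1 list matches the end of the
# V0 list, i.e. the BOS, system message, and user header are missing under V1.
assert v0_ids[-(len(v1_ids) - 1):] == v1_ids[1:]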

@farzadab (Contributor, Author)

I believe I hit this issue at around v0.8.4. I'll try verifying it on v0.8.5.post1.


Resolved by upgrading to v0.9.1

@liPatrick

Verified that the inference mismatch was indeed a vLLM bug. Upgrading to v0.9.1 fixed the issue, and V1 inference now matches V0.

@DarkLight1337 enabled auto-merge (squash) on June 12, 2025 02:19
@DarkLight1337 (Member)

Nice, let's merge this!

github-actions bot added the ready label (ONLY add when PR is ready to merge / full CI is needed) on Jun 12, 2025
mergify bot added the llama label (Related to Llama models) on Jun 23, 2025

mergify bot commented Jun 23, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @farzadab.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify bot added the needs-rebase label on Jun 23, 2025
auto-merge was automatically disabled June 23, 2025 20:03

Head branch was pushed to by a user without write access

@liPatrick force-pushed the farzad-audiotoken-gemma3llama4 branch from 0ffff36 to 1cb823d on June 23, 2025 20:03
mergify bot removed the needs-rebase label on Jun 23, 2025
Labels: llama (Related to Llama models), ready (ONLY add when PR is ready to merge / full CI is needed)

3 participants