[Llama4] Update attn_temperature_tuning #19997

Open · wants to merge 1 commit into main

Conversation

b8zhong (Contributor) commented Jun 23, 2025

Purpose

Since huggingface/transformers#37501 landed, the field is now documented on the HF side as:

attn_temperature_tuning (`bool`, *optional*, defaults to `True`):
            Whether to dynamically scale the attention temperature for each query token based on sequence length.
            Recommended for long sequences (e.g., >32k tokens) to maintain stable output results.

So the outdated TODO comment in vLLM, which noted that attn_temperature_tuning should become a boolean upstream, is no longer needed.
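For context, here is a rough sketch of what the tuning does at inference time, based on the public Llama 4 implementations (the floor_scale and attn_scale defaults follow the HF config; treat the exact integration point as illustrative):

    import torch

    def llama4_temperature_scale(positions: torch.Tensor,
                                 floor_scale: float = 8192.0,
                                 attn_scale: float = 0.1) -> torch.Tensor:
        # Per-token scale applied to the query states when
        # attn_temperature_tuning is enabled; it grows logarithmically
        # with position so long-context outputs stay stable.
        return (torch.log(torch.floor((positions + 1.0) / floor_scale) + 1.0)
                * attn_scale + 1.0)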

Test Plan

Started the model with TP=8 on H100; it loaded fine.

Command:

vllm serve /models/Llama4-Scout-17B --tensor-parallel-size 8
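For a quick sanity check once the server is up, an OpenAI-compatible request works (a sketch assuming the default port 8000 and that the served model name mirrors the path above):

    from openai import OpenAI

    # vLLM's OpenAI-compatible server accepts any placeholder API key.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
    out = client.completions.create(model="/models/Llama4-Scout-17B",
                                    prompt="Hello, Llama 4!",
                                    max_tokens=16)
    print(out.choices[0].text)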

Signed-off-by: Brayden Zhong <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, a small and essential subset of tests that starts immediately to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

gemini-code-assist bot left a comment

Summary of Changes

Hello @b8zhong, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the Llama4 model implementation to align with recent changes in the Hugging Face transformers library, specifically regarding the attn_temperature_tuning configuration parameter. The changes involve removing an obsolete comment and simplifying the boolean evaluation of this parameter, reflecting its updated type upstream.

Highlights

  • Code Alignment & Comment Removal: I've removed an outdated TODO comment in vllm/model_executor/models/llama4.py that stated attn_temperature_tuning should be a boolean in Hugging Face. This comment is no longer necessary as the upstream transformers library has already updated this parameter to be a boolean type.
  • Logic Simplification: The conditional logic for self.attn_temperature_tuning was simplified. Previously, it checked config.attn_temperature_tuning > 0, which implicitly treated a boolean True as 1. Now, it directly uses config.attn_temperature_tuning, leveraging its explicit boolean type for cleaner and more accurate evaluation.
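For reference, a minimal sketch of the change being described (the exact surrounding code in vllm/model_executor/models/llama4.py may differ; this only illustrates the boolean simplification):

    # Before: the config field was an int upstream, so vLLM compared
    # against zero (with a TODO to clean this up once it became a bool).
    self.attn_temperature_tuning = config.attn_temperature_tuning > 0

    # After: attn_temperature_tuning is a plain bool in current
    # transformers, so the value can be used directly.
    self.attn_temperature_tuning = config.attn_temperature_tuning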

mergify bot added the "llama (Related to Llama models)" label Jun 23, 2025
gemini-code-assist bot left a comment

Code Review

The pull request updates the attn_temperature_tuning logic to align with recent changes to the Hugging Face Llama4 config, where the attn_temperature_tuning parameter is now a boolean. Removing the outdated TODO comment and simplifying the boolean expression improve code clarity and maintainability.

b8zhong (Author) commented Jun 23, 2025

@houseroad I think you left this comment 👍

b8zhong changed the title from "[Llama4] Update comment" to "[Llama4] Update attn_temperature_tuning" Jun 23, 2025
houseroad (Collaborator) left a comment

Looks good.

houseroad added the "ready (ONLY add when PR is ready to merge/full CI is needed)" label Jun 23, 2025
houseroad (Collaborator) commented
Could you update the test plan to describe how the changes were verified?

b8zhong (Author) commented Jun 24, 2025

@houseroad Sure, I just quickly tested loading the model on H100 and it was fine. However, I'm out of credits with my compute provider, so I can't benchmark throughput 🙃

houseroad (Collaborator) commented
Thanks, that's fine. Could you paste your test command into the PR description? That would be very helpful.

b8zhong (Author) commented Jun 24, 2025

@houseroad It's there now!

yeqcharlotte (Collaborator) left a comment

Thanks for fixing this! Yeah, it should be removed now that the Hugging Face side has been upgraded.
