
[Bugfix][Rocm] Fix import error on ROCm #20140


Closed
charlifu wants to merge 3 commits

Conversation

@charlifu (Contributor) commented on Jun 26, 2025

This PR fixes the import error introduced by #19560, which tried to import functions that do not exist on ROCm:

ImportError: cannot import name 'flash_attn_varlen_func' from 'vllm.attention.utils.fa_utils'
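
The fix guards the CUDA-only imports behind a platform check. A minimal sketch of the resulting import block in vllm/v1/attention/backends/flash_attn.py, reconstructed from the diff reviewed below (illustrative, not the exact patch):

from vllm.platforms import current_platform

# Helpers that exist on every platform stay at module level.
from vllm.attention.utils.fa_utils import (flash_attn_supports_fp8,
                                           get_flash_attn_version)

# CUDA-only helpers are imported behind a platform check, so merely
# importing this backend module no longer raises ImportError on ROCm.
if current_platform.is_cuda():
    from vllm.attention.utils.fa_utils import (flash_attn_varlen_func,
                                               get_scheduler_metadata,
                                               reshape_and_cache_flash)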

Signed-off-by: charlifu <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they run only fastcheck CI, a small and essential subset of CI tests that quickly catches errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@gemini-code-assist (bot, Contributor) left a comment


Summary of Changes

Hello @charlifu, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical ImportError on ROCm systems by refining the import strategy for specific FlashAttention utility functions. The change ensures that these CUDA-specific functions are only loaded when running on a CUDA platform, enhancing cross-platform compatibility and stability for the vLLM project.

Highlights

  • Bugfix: Addresses an ImportError that occurred on ROCm platforms due to attempts to import functions (flash_attn_varlen_func, get_scheduler_metadata, reshape_and_cache_flash) that are not available on those systems.
  • Platform Compatibility: Modifies the import logic in vllm/v1/attention/backends/flash_attn.py to conditionally import CUDA-specific FlashAttention utility functions only when the detected platform is CUDA, preventing import failures on non-CUDA environments like ROCm (a generic sketch of this pattern follows the list).
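
For readers unfamiliar with the pattern, here is a minimal, self-contained sketch of conditional importing with a fallback sentinel. All module and function names below are hypothetical stand-ins, not vLLM's actual API:

# Hypothetical example of the conditional-import pattern described above.
def _platform_is_cuda() -> bool:
    # Best-effort probe: torch.version.cuda is None on ROCm builds of torch.
    try:
        import torch
        return torch.version.cuda is not None
    except ImportError:
        return False

if _platform_is_cuda():
    # Only resolved on CUDA; on other platforms this import would fail.
    from cuda_only_kernels import fused_varlen_attention  # hypothetical module
else:
    # Bind a sentinel so this module itself always imports; callers must
    # check for None before use instead of failing at import time.
    fused_varlen_attention = None

The sentinel keeps the failure lazy: the module imports everywhere, and only code paths that actually need the CUDA helper have to handle its absence.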
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.

@mergify (bot) added the rocm (Related to AMD ROCm) and v1 labels on Jun 26, 2025
@gemini-code-assist (bot, Contributor) left a comment

Code Review

The pull request fixes an import error on ROCm by conditionally importing modules that are only available on CUDA. The import of current_platform should also be made conditional to ensure that the code is completely platform-agnostic until CUDA is confirmed.

Comment on lines 16 to +20
-from vllm.attention.utils.fa_utils import (flash_attn_supports_fp8,
-                                           flash_attn_varlen_func,
-                                           get_flash_attn_version,
-                                           get_scheduler_metadata,
-                                           reshape_and_cache_flash)
+from vllm.attention.utils.fa_utils import (flash_attn_supports_fp8,
+                                           get_flash_attn_version)
 from vllm.config import VllmConfig, get_layers_from_vllm_config
 from vllm.logger import init_logger
 from vllm.platforms import current_platform
Severity: medium

Moving these imports inside the if current_platform.is_cuda(): block avoids importing unnecessary modules when not running on CUDA, potentially resolving import errors on other platforms. However, current_platform is imported before this conditional block, which might cause issues if current_platform relies on CUDA-specific libraries. Consider delaying the import of current_platform as well to be completely platform-agnostic until CUDA is confirmed.
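
One way to act on this suggestion, as a hedged sketch (the merged PR may not do this): move the platform probe into a helper so that vllm.platforms is only imported when the check actually runs.

def _load_cuda_only_helpers():
    # Importing current_platform here, not at module top level, keeps this
    # module free of platform probing until the function executes.
    from vllm.platforms import current_platform
    if not current_platform.is_cuda():
        return None, None, None
    from vllm.attention.utils.fa_utils import (flash_attn_varlen_func,
                                               get_scheduler_metadata,
                                               reshape_and_cache_flash)
    return (flash_attn_varlen_func, get_scheduler_metadata,
            reshape_and_cache_flash)

(flash_attn_varlen_func, get_scheduler_metadata,
 reshape_and_cache_flash) = _load_cuda_only_helpers()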

Signed-off-by: charlifu <[email protected]>
from vllm.utils import cdiv
from vllm.v1.attention.backends.utils import (
    AttentionMetadataBuilder, CommonAttentionMetadata, get_kv_cache_layout,
    make_local_attention_virtual_batches)
from vllm.v1.kv_cache_interface import AttentionSpec
from vllm.v1.worker.block_table import BlockTable

if current_platform.is_xpu():
A Collaborator commented:

Probably bring back the is_cuda() here as well
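
Concretely, the suggestion appears to be a layout like the following, keeping the platform-conditional imports together (sketch only; the body of the is_xpu() branch is not visible in this excerpt and is elided):

if current_platform.is_cuda():
    from vllm.attention.utils.fa_utils import (flash_attn_varlen_func,
                                               get_scheduler_metadata,
                                               reshape_and_cache_flash)

if current_platform.is_xpu():
    ...  # XPU-specific setup, elided here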

Signed-off-by: charlifu <[email protected]>
@jikunshang (Contributor) left a comment

thanks for fixing!

@charlifu closed this on Jun 27, 2025
Labels: rocm (Related to AMD ROCm), v1

5 participants