
Commit fa968f4

fix
Signed-off-by: MengqingCao <[email protected]>
1 parent ff9da97 · commit fa968f4

File tree

1 file changed: +8 -7 lines changed


_posts/2025-03-12-hardware-plugin.md (+8 -7)

@@ -5,23 +5,24 @@ author: "vLLM Ascend Team"
 image: /assets/logos/vllm-logo-only-light.png
 ---

-Since December 2024, through the joint efforts of the vLLM community and the vLLM Ascend team, we have completed the **Hardware Pluggable** RFC. This proposal allows hardware integration into vLLM in a decoupled manner, enabling rapid and modular support for different hardware platforms. The RFC has now taken initial shape. This blog post focuses on how the vLLM Hardware Plugin works and shares best practice for supporting Ascend NPU through the plugin mechanism.
+Since December 2024, through the joint efforts of the vLLM community and the vLLM Ascend team, we have completed the [Hardware Pluggable RFC](https://github.com/vllm-project/vllm/issues/11162). This proposal allows hardware integration into vLLM in a decoupled manner, enabling rapid and modular support for different hardware platforms. The RFC has now taken initial shape.
+This proposal enables hardware integration into vLLM in a decoupled way, allowing for quick and modular support of various hardware platforms.

 ---

 ## Why vLLM Hardware Plugin?

 Currently, vLLM already supports multiple backends. However, as the number of vLLM backends continues to grow, several challenges have emerged:

-- **Increased Code Complexity**: Each hardware backend has its own `Executor`, `Worker`, `Runner`, and `Attention` components. This has made the vLLM codebase more complex, with non-generic backend-specific code scattered throughout the project.
+- **Increased Code Complexity**: Each hardware backend has its own `Executor`, `Worker`, `Runner`, and `Attention` components. This has increased the complexity of the vLLM codebase, with non-generic backend-specific code scattered throughout the project.
 - **High Maintenance Costs**: The cost of maintaining backends is high, not only for the backend developers but also for the vLLM community. The scarcity of community contributor resources makes efficiently adding new features difficult when backend maintainers are not present.
 - **Lack of Extensibility**: While vLLM follows a well-structured layered design by implementing backends through `Executor`, `Worker`, `Runner`, and `Attention`, supporting new hardware often requires invasive modifications or patching rather than dynamic registration. This makes adding new backends cumbersome.

 Recognizing the need for a flexible and modular approach to integrating hardware backends, we identified hardware pluginization as a feasible solution:

-- **Decoupled Codebase**: The hardware backend plugin code remains independent, making the vLLM core code cleaner and more maintainable.
+- **Decoupled Codebase**: The hardware backend plugin code remains independent, making the vLLM core code cleaner.
 - **Reduced Maintenance Burden**: vLLM developers can focus on generic features without being overwhelmed by the differences caused by backend-specific implementations.
-- **Faster Expansion and Iteration**: Each backend can be maintained independently to ensure stability, and new backends can be integrated quickly.
+- **Faster Integration & Independent Evolution**: New backends can be integrated quickly, with less work required, and can evolve independently.

 ---

@@ -34,11 +35,11 @@ Before introducing the vLLM Hardware Plugin, let's first look at two prerequisit

 Based on these RFCs, we proposed [[RFC] Hardware Pluggable](https://github.com/vllm-project/vllm/issues/11162), which integrates the `Platform` module into vLLM as a plugin. Additionally, we refactored `Executor`, `Worker`, `ModelRunner`, `AttentionBackend`, and `Communicator` to support hardware plugins more flexibly.

-Currently, the vLLM team, in collaboration with vLLM Ascend developers, has successfully implemented the initial version of this RFC. We also validated the functionality through the [vllm-project/vllm-ascend](https://github.com/vllm-project/vllm-ascend) project. Using this plugin mechanism, we successfully integrated vLLM with the Ascend NPU backend.
+Currently, the vLLM team, in collaboration with vLLM Ascend developers, has successfully implemented the `Platform` module introduced in the RFC. We also validated the functionality through the [vllm-project/vllm-ascend](https://github.com/vllm-project/vllm-ascend) project. Using this plugin mechanism, we successfully integrated vLLM with the Ascend NPU backend.

 ---

-## How to Add Backend Support with vLLM Hardware Plugin
+## How to Integrate a New Backend via the vLLM Hardware Plugin Mechanism

 This section will dive into integrating a new backend via the Hardware Plugin from both the developer and user perspectives.
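
On the developer side, the heart of such a plugin is a `Platform` implementation that vLLM discovers through an entry point. The sketch below only illustrates that idea under the entry-point discovery assumption described in the RFC; the package, module, and class names (`vllm_my_backend`, `MyBackendPlatform`) are hypothetical placeholders, not the actual vllm-ascend code.

```python
# vllm_my_backend/__init__.py -- a minimal sketch of a platform plugin package.
# The package would advertise this function in its packaging metadata under the
# "vllm.platform_plugins" entry-point group, e.g. in pyproject.toml:
#
#   [project.entry-points."vllm.platform_plugins"]
#   my_backend = "vllm_my_backend:register"


def register() -> str:
    # When vLLM scans installed plugins, it calls this function and expects
    # the fully qualified name of the Platform subclass for this backend,
    # which it then loads lazily.
    return "vllm_my_backend.platform.MyBackendPlatform"
```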

@@ -113,7 +114,7 @@ if "MyLlava" not in ModelRegistry.get_supported_archs():

 ### User Perspective

-Taking vLLM Ascend as an example, you only need to install vllm and vllm-ascend to complete the installation:
+You only need to install vllm and your plugin before running vLLM. Taking [vllm-ascend](https://github.com/vllm-project/vllm-ascend) as an example:

 ```bash
 pip install vllm vllm-ascend
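
Once the plugin package is installed alongside vllm, user code stays backend-agnostic and the installed platform plugin is picked up automatically at startup. A minimal offline-inference sketch follows; the model name is only an illustrative example.

```python
# Minimal usage sketch: no backend-specific code is needed here; the installed
# platform plugin (e.g. vllm-ascend) is discovered by vLLM automatically.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # example model name
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```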
