[Roadmap] FlashInfer v0.2 to v0.3 #675
Comments
Initial support for Blackwell:
Looking forward to POD-Attention support!
To add more context, we have the following piece of code in the Mnemosyne codebase:
In essence, we create 4 different instances of the FlashInfer prefill attention wrapper and call the kernel 4 times 😢 cc @yzh119
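A minimal sketch of the pattern being described, assuming FlashInfer's `BatchPrefillWithPagedKVCacheWrapper` with the v0.2 `plan`/`run` API; the wrapper count mirrors the comment, but all names, shapes, and metadata below are illustrative, not the actual Mnemosyne code:

```python
# Hypothetical sketch (not the Mnemosyne code): four separate prefill
# wrapper instances, each planned and run for its own slice of the batch,
# which means four kernel launches per attention layer.
import torch
import flashinfer

# One workspace buffer and one wrapper per request group.
wrappers = []
for _ in range(4):
    workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda")
    wrappers.append(flashinfer.BatchPrefillWithPagedKVCacheWrapper(workspace, "NHD"))

def run_group(wrapper, q, kv_cache, qo_indptr, kv_indptr, kv_indices,
              kv_last_page_len, num_qo_heads=32, num_kv_heads=8,
              head_dim=128, page_size=16):
    # plan() builds the scheduling metadata for this group and run()
    # launches the kernel; doing this once per wrapper is the 4x
    # redundancy the comment above laments.
    wrapper.plan(qo_indptr, kv_indptr, kv_indices, kv_last_page_len,
                 num_qo_heads, num_kv_heads, head_dim, page_size, causal=True)
    return wrapper.run(q, kv_cache)
```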
Could POD-Attention potentially remove the need for separate prefill and decode batch scheduling logic, and instead just run all the decode and prefill requests together?
@Edenzzzz good idea, there is no reason to keep two sets of APIs. Actually the current prefill attention can be used for decoding; just set the query length per request to 1. We should use a unified API.
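A sketch of that unified approach under the same assumptions (v0.2 `plan`/`run` API; all lengths and shapes are made up for illustration): decode requests are expressed as prefill requests with query length 1, so one wrapper and one kernel launch can cover a mixed batch.

```python
# Sketch: decode requests become length-1 entries in qo_indptr, so a single
# BatchPrefillWithPagedKVCacheWrapper serves a mixed prefill+decode batch.
# All metadata values here are illustrative.
import itertools
import torch
import flashinfer

num_qo_heads, num_kv_heads, head_dim, page_size = 32, 8, 128, 16

workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda")
wrapper = flashinfer.BatchPrefillWithPagedKVCacheWrapper(workspace, "NHD")

# Two prefill requests (query lengths 7 and 5) and three decode requests
# (query length 1 each), encoded uniformly in qo_indptr.
query_lens = [7, 5, 1, 1, 1]
qo_indptr = torch.tensor([0] + list(itertools.accumulate(query_lens)),
                         dtype=torch.int32, device="cuda")

# Paged-KV metadata: pages per request, flattened page indices, and the
# number of valid entries in each request's last page.
kv_lens = [40, 24, 17, 33, 9]
pages = [(n + page_size - 1) // page_size for n in kv_lens]
kv_indptr = torch.tensor([0] + list(itertools.accumulate(pages)),
                         dtype=torch.int32, device="cuda")
kv_indices = torch.arange(sum(pages), dtype=torch.int32, device="cuda")
kv_last_page_len = torch.tensor([(n - 1) % page_size + 1 for n in kv_lens],
                                dtype=torch.int32, device="cuda")

q = torch.randn(sum(query_lens), num_qo_heads, head_dim,
                dtype=torch.float16, device="cuda")
kv_cache = torch.randn(sum(pages), 2, page_size, num_kv_heads, head_dim,
                       dtype=torch.float16, device="cuda")

# One plan() and one run() for the entire mixed batch: no separate decode
# wrapper and no second kernel launch. With a causal mask aligned to the
# end of the KV cache, a length-1 query behaves exactly like decode.
wrapper.plan(qo_indptr, kv_indptr, kv_indices, kv_last_page_len,
             num_qo_heads, num_kv_heads, head_dim, page_size, causal=True)
out = wrapper.run(q, kv_cache)
```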
@yzh119 Thanks! I plan to try employing similar logic in SGLang this week.
Milestones
Our tentative roadmap includes the following milestones:
We welcome your feedback and suggestions!
Let us know what features you'd like to see in FlashInfer.