Issues: fla-org/flash-linear-attention

Issues list

[Bug] dplr kernel causes core dumped (label: bug)
#173 opened Feb 8, 2025 by hypnopump; 2 tasks done

[RFC] Fuse elementwise operations in RWKV layers (label: enhancement)
#165 opened Feb 5, 2025 by sustcsonglin

[RFC] Support more hybrid patterns (labels: enhancement, urgent)
#153 opened Feb 1, 2025 by sustcsonglin

[RFC] Implement model-specific 4d parallelism (label: enhancement)
#148 opened Jan 28, 2025 by yzhangcs

[Bug] TypeError: 'constexpr' object is not iterable (label: bug; see the sketch after this list)
#138 opened Jan 23, 2025 by York-Cheung

[RFC] Remove head_first kernel (labels: enhancement, todo)
#114 opened Jan 10, 2025 by sustcsonglin; milestone: FLA v1.0.0 release

[RFC] Add YOCO models (labels: enhancement, stale)
#106 opened Jan 5, 2025 by yzhangcs; milestone: FLA v1.0.0 release

[Bug]: KV Cache exploded (label: bug)
#91 opened Dec 14, 2024 by rakkit
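
A note on #138, since the error message is terse: TypeError: 'constexpr' object is not iterable is the plain Python error raised when a triton.language.constexpr value is looped over directly, because that class defines no __iter__. The sketch below is illustrative only; it assumes Triton is installed and is not the actual code path from the issue.

    import triton.language as tl

    BLOCK = tl.constexpr(4)  # the wrapper Triton uses for compile-time constants

    try:
        for i in BLOCK:  # constexpr defines no __iter__, so iteration fails
            pass
    except TypeError as e:
        print(e)  # 'constexpr' object is not iterable

    # Unwrapping the constant before iterating avoids the error; inside a
    # @triton.jit kernel, range(BLOCK) or tl.static_range(BLOCK) serve the
    # same purpose.
    for i in range(BLOCK.value):  # .value holds the underlying Python int
        pass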