Swapping attention in a pretrained model for inference #4

Open
kabachuha opened this issue Mar 20, 2024 · 0 comments
Suppose we have an LLM that was pretrained with ordinary quadratic attention, and we want to extend its context size or improve inference performance. For this purpose we swap only the attention computation over q, k, v to the ReBased linear flash attention from this repo.

Similar (still quadratic) attention swaps include using FlashAttention via xFormers or scaled_dot_product_attention in PyTorch 2.
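
For reference, a minimal sketch of that kind of "still quadratic" swap: a hand-written softmax(QK^T / sqrt(d))V is replaced by PyTorch 2's fused kernel while the model weights stay untouched. The toy shapes are illustrative, not taken from any particular model.

```python
import math
import torch
import torch.nn.functional as F

def naive_attention(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    return torch.softmax(scores, dim=-1) @ v

def swapped_attention(q, k, v):
    # Same math, different kernel; the pretrained weights are untouched.
    return F.scaled_dot_product_attention(q, k, v)

q = k = v = torch.randn(1, 8, 128, 64)
with torch.no_grad():
    assert torch.allclose(naive_attention(q, k, v), swapped_attention(q, k, v), atol=1e-4)
```

Here the swap is numerically (almost) exact, so nothing can break. The question is what happens when the replacement changes the attention math itself.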

Assume we don't do a backward pass, so there are no gradients to disturb the weights. Will the LLM keep inferring more or less fine, or will it break down? (Perplexity/loss, QA accuracy, and needle-in-a-haystack results would be interesting to see.)
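
For concreteness, a rough sketch of the inference-only drop-in I mean. The elu + 1 feature map below is only a generic linear-attention placeholder, not the actual ReBased kernel; the real experiment would patch each pretrained attention module's forward (e.g. every self-attention in a Hugging Face checkpoint) to call the ReBased implementation instead, run under torch.no_grad(), and then evaluate.

```python
import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, heads, seq_len, head_dim), taken from the pretrained projections
    q = F.elu(q) + 1          # placeholder feature map phi(.), NOT the ReBased kernel
    k = F.elu(k) + 1
    # Running sums over the sequence dimension give causal masking "for free".
    # Note: this materializes a (b, h, s, d, d) intermediate; a real kernel is far leaner.
    kv = torch.einsum("bhsd,bhse->bhsde", k, v).cumsum(dim=2)   # S_t = sum_{s<=t} phi(k_s) v_s^T
    z = k.cumsum(dim=2)                                          # z_t = sum_{s<=t} phi(k_s)
    num = torch.einsum("bhsd,bhsde->bhse", q, kv)
    den = torch.einsum("bhsd,bhsd->bhs", q, z).unsqueeze(-1) + eps
    return num / den

q = k = v = torch.randn(1, 8, 128, 64)
with torch.no_grad():
    out = causal_linear_attention(q, k, v)   # drop-in for softmax attention at inference
```

Since the pretrained weights were tuned against softmax attention, the question is whether this different normalization/kernel keeps the outputs usable or destroys them, which is what the perplexity and needle-in-a-haystack numbers would show.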
