Is it possible and efficient to load layers on demand? #267
Comments
I experimented with this early on, but I couldn't find a way to make it even remotely usable. The bottleneck during text generation is largely memory bandwidth, since every parameter of the model* is read at least once during a forward pass. If you're streaming layers from system RAM, even if you can get the transfers running completely asynchronously, your inference speed will be limited by PCIe bandwidth. So you can expect a slowdown equal to the ratio of your VRAM bandwidth to your PCIe bandwidth, likely on the order of 30x.

*) The exception is the token embedding layer, which ExLlama already keeps in system RAM.
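A rough back-of-the-envelope check of that 30x figure, as a sketch; the bandwidth and model-size numbers below are illustrative assumptions, not measurements:

```python
# Illustrative estimate of the slowdown from streaming weights over PCIe.
# All figures are assumptions: PCIe 4.0 x16 (~32 GB/s) vs. a high-end GPU
# with ~936 GB/s VRAM bandwidth, and ~36 GB of quantized weights.
pcie_bandwidth_gb_s = 32
vram_bandwidth_gb_s = 936
model_size_gb = 36

# Every parameter is read once per generated token, so per-token time is
# bounded below by model size divided by the bandwidth of whichever bus
# the weights have to cross.
t_vram = model_size_gb / vram_bandwidth_gb_s   # ~0.04 s/token from VRAM
t_pcie = model_size_gb / pcie_bandwidth_gb_s   # ~1.1 s/token over PCIe

print(f"Estimated slowdown: ~{t_pcie / t_vram:.0f}x")  # ~29x
```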
Yeah, I've been testing by copying
I have a GPU on which I want to load multiple models. Your ExLlama model loads all of its weights onto the GPU when `ExLlama` is instantiated. Is it possible to load every decoder layer onto the CPU first and then move it to the GPU when that layer's forward pass is called (and move it back to the CPU once the forward pass is done)? It seems possible, but I'm not sure how generation time would be affected.
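A minimal sketch of what the question describes, in plain PyTorch rather than ExLlama's actual loading code: each decoder layer lives on the CPU and is moved to the GPU only for its own forward pass, then moved back. The wrapper class and the `model.model.layers` layout are assumptions for illustration, not part of ExLlama's API.

```python
import torch
import torch.nn as nn

class OnDemandLayer(nn.Module):
    """Wraps a decoder layer that lives on CPU and visits the GPU per call."""
    def __init__(self, layer: nn.Module, device: str = "cuda"):
        super().__init__()
        self.layer = layer.cpu()
        self.device = device

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self.layer.to(self.device)           # upload the layer's weights over PCIe
        out = self.layer(x.to(self.device))  # run the layer on the GPU
        self.layer.to("cpu")                 # free the VRAM again
        return out

# Hypothetical usage: wrap every decoder layer of a model whose layers are
# exposed as model.model.layers (an assumed module layout).
# for i, layer in enumerate(model.model.layers):
#     model.model.layers[i] = OnDemandLayer(layer)
```

The transfers here are synchronous, so every token pays the full PCIe upload cost for every layer, which is exactly the bandwidth limit described in the reply above.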