Issues: tloen/alpaca-lora
Why this error? ValueError: We need an offload_dir to dispatch this model according to this device_map, the following submodules need to be offloaded: base_model.model.model.layers.3, base_model.model.model.layers.4, base_model.model.model.layers.5, base_model.model.model.layers.6, base_model.model.model.layers.7, base_model.model.model.layers.8, base_model.model.model.layers.9, base_model.model.model.layers.10, base_model.model.model.la
#627 opened May 9, 2024 by hzbhh
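This error comes from accelerate: with device_map="auto", some layers were assigned to disk offload, but no offload directory was provided. A minimal sketch of one common workaround, assuming the model identifiers below (replace them with your own base model and adapter); offload_folder is the transformers keyword for that directory, and recent peft versions accept the same keyword, though this may vary with your installed version.

```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

# Illustrative identifiers; substitute your own base model and LoRA adapter.
base_model = "decapoda-research/llama-7b-hf"
lora_weights = "tloen/alpaca-lora-7b"

# device_map="auto" may place some layers on disk when GPU and CPU memory
# are insufficient; offload_folder tells accelerate where to put them.
model = LlamaForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
    offload_folder="offload",
)
model = PeftModel.from_pretrained(
    model,
    lora_weights,
    torch_dtype=torch.float16,
    offload_folder="offload",  # assumption: your peft version forwards this to accelerate
)
```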
Failed to run on Colab: ModulesToSaveWrapper has no attribute embed_tokens
#621 opened Feb 22, 2024 by Vostredamus
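This error typically appears when code reaches into wrapped submodules directly (for example model.base_model.model.model.embed_tokens) after PEFT has wrapped them, or when the Colab peft/transformers versions do not match what the script expects. A hedged sketch that goes through the public accessor instead of attribute chains; the identifiers are placeholders.

```python
from transformers import LlamaForCausalLM
from peft import PeftModel

# Illustrative identifiers; replace with the checkpoint you are loading.
base_model = "decapoda-research/llama-7b-hf"
lora_weights = "tloen/alpaca-lora-7b"

model = LlamaForCausalLM.from_pretrained(base_model, device_map="auto")
model = PeftModel.from_pretrained(model, lora_weights)

# Prefer the public accessor over reaching through wrapper attributes,
# which can raise "ModulesToSaveWrapper has no attribute ..." on some
# peft versions.
embedding_layer = model.get_input_embeddings()
print(type(embedding_layer), embedding_layer.weight.shape)
```

Pinning peft and transformers to versions known to work together is the other usual fix for this class of error.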
Loading a quantized checkpoint into non-quantized Linear8bitLt is not supported
#617 opened Feb 7, 2024 by AngelMisaelPelayo
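This message usually means an 8-bit (bitsandbytes) checkpoint is being loaded into a model whose linear layers were built unquantized, so the layer types do not match. A sketch of loading the base model in 8-bit first, so the Linear8bitLt modules exist before the adapter is applied; paths are assumptions.

```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

base_model = "decapoda-research/llama-7b-hf"  # assumption: your base checkpoint
lora_weights = "./lora-alpaca"                # assumption: your fine-tuned adapter dir

# Load the base model quantized so its linear layers are bitsandbytes
# Linear8bitLt modules, matching how the quantized checkpoint was produced.
model = LlamaForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, lora_weights)
```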
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
#609 opened Nov 27, 2023 by mj2688
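InvalidHeaderDeserialization generally means the safetensors file is empty, truncated, or not actually in safetensors format (for example a renamed pickle checkpoint). A quick sanity check before loading, assuming the adapter path below:

```python
import os
from safetensors.torch import load_file

adapter_path = "./lora-alpaca/adapter_model.safetensors"  # assumption: your checkpoint path

# A valid safetensors file starts with an 8-byte header length followed by a
# JSON header; an empty or truncated file fails header deserialization.
size = os.path.getsize(adapter_path)
print(f"{adapter_path}: {size} bytes")

state_dict = load_file(adapter_path)  # raises SafetensorError if the header is invalid
print(f"loaded {len(state_dict)} tensors")
```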
Errors when tuning 70B LLaMA 2: does alpaca-lora support fine-tuning 70B LLaMA 2?
#608 opened Nov 26, 2023 by bqflab
Is there a flag to indicate whether the model is in safetensors or pickle format?
#607 opened Nov 24, 2023 by Tasarinan
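Recent transformers versions expose a use_safetensors argument on from_pretrained that forces one format or the other; leaving it unset lets transformers pick whichever weight files exist. A hedged sketch, with a placeholder model identifier:

```python
from transformers import LlamaForCausalLM

base_model = "decapoda-research/llama-7b-hf"  # assumption: your checkpoint

# use_safetensors=True loads only *.safetensors weights and errors out if just
# pickle (*.bin) shards are present; use_safetensors=False does the reverse.
model = LlamaForCausalLM.from_pretrained(base_model, use_safetensors=True)
```

For local files, the extension is the usual giveaway: torch.save pickles are typically .bin or .pt, while safetensors weights end in .safetensors.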
load_in_8bit causing issues: out-of-memory error with 44 GB of GPU VRAM, or a device_map error
#604 opened Nov 11, 2023 by Nimisha-Pabbichetty
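A 7B model in 8-bit should fit comfortably in 44 GB, so an OOM here often means device_map was not set (so the full-precision model landed on one device) or memory was already fragmented. A sketch combining load_in_8bit with device_map="auto" and an explicit per-device cap; the identifier and memory values are illustrative.

```python
import torch
from transformers import LlamaForCausalLM

base_model = "decapoda-research/llama-7b-hf"  # assumption: your base checkpoint

# load_in_8bit quantizes linear layers via bitsandbytes; device_map="auto"
# lets accelerate spread layers across visible devices, and max_memory caps
# how much of each device it may use (values here are illustrative).
model = LlamaForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
    max_memory={0: "40GiB", "cpu": "64GiB"},
)
```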
Are the saved models (either adapter_model.bin or pytorch_model.bin) only 25-26MB in size?
#601 opened Nov 7, 2023 by LAB-703
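A file of that size is expected: with LoRA, only the small low-rank A/B matrices for the targeted projections are trained and saved, not the multi-gigabyte base weights. A sketch, assuming alpaca-lora's default LoRA settings (r=8, alpha=16, q_proj/v_proj), that shows how few parameters the adapter actually contains:

```python
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

base_model = "decapoda-research/llama-7b-hf"  # assumption: your base checkpoint

model = LlamaForCausalLM.from_pretrained(base_model)
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)

# Prints the trainable-parameter count: only a few million out of ~7B.
# Those few million values are all that adapter_model.bin stores, which is
# why the saved adapter is tens of MB rather than ~13 GB.
model.print_trainable_parameters()
```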