Run on CPU without AVX2 #315
Comments
Are you on the latest version?
As far as I know, Illegal instruction (core dumped) means there is a problem with AVX2 instructions. When I tried the GGUF format with llama.cpp, I received the same Illegal instruction (core dumped) error.
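Not stated in the thread, but one quick way to confirm whether the CPU really lacks AVX2 is to check the feature flags the Linux kernel reports; a minimal check, assuming a Linux host:

# Show whether the kernel reports the AVX2 feature flag for this CPU (Linux only).
# If nothing is printed before "no avx2 flag", the CPU lacks AVX2, and any binary
# built with AVX2 code paths will crash with SIGILL when it executes one of them.
grep -m1 -o 'avx2' /proc/cpuinfo || echo "no avx2 flag"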
Maybe this gives more information about the error: gdb --args python3 test_benchmark_inference.py -d /home/dev/models/Mistral-7B-Instruct-v0.2-GPTQ/ -p -ppl
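A sketch of what that gdb session might look like (run and bt are standard gdb commands; nothing below is output captured from this machine):

(gdb) run    # reproduce the crash under the debugger; gdb stops when the process receives SIGILL
(gdb) bt     # the backtrace shows which shared library or extension module executed the unsupported instruction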
Hello,
I have a server with an Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz and 5x WX9100 GPUs, and I want to run Mistral 7B on each GPU.
But when I tried to do this, I received the error "Illegal instruction (core dumped)".
Is it possible to run exllama on a CPU without AVX2?