Quantized model fails to load: GGML_ASSERT wtype != GGML_TYPE_COUNT
GGML_ASSERT: /private/var/folders/q9/ms4_8l4970q525dlsjp_b9n0x6l4j9/T/pip-install-jqga24o3/whisper-cpp-python_e0149178a136485cadb677b46e61e602/vendor/whisper.cpp/ggml.c:4288: wtype != GGML_TYPE_COUNT
I am facing the above exception when loading a quantized Whisper model.

Python version: 3.9.6
whisper_cpp_python version: 0.2.0
Hardware: MacBook Pro (M1)
Model used: https://huggingface.co/ggerganov/whisper.cpp/blob/main/ggml-tiny-q5_1.bin

Kindly help me with this.
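For reference, a minimal reproduction along these lines triggers the assert. This is only a sketch: it assumes the Whisper class and transcribe method shown in the whisper_cpp_python README, and the local file paths are placeholders.

```python
# Minimal reproduction sketch (assumes the whisper_cpp_python API from its
# README: a Whisper class taking model_path; adjust names if the package
# differs in your version).
from whisper_cpp_python import Whisper

# Hypothetical local copy of the quantized model linked above.
model = Whisper(model_path="./models/ggml-tiny-q5_1.bin")

# Loading never gets this far: the bundled whisper.cpp aborts with
# "GGML_ASSERT: ... wtype != GGML_TYPE_COUNT" while reading the q5_1 tensors.
result = model.transcribe(open("samples/jfk.wav", "rb"))
print(result["text"])
```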
This happens because the package bundles an old whisper.cpp that does not support quantized models. You need to compile the new whisper.cpp libraries yourself.
@magicse, what would happen if the repo pointed the vendor folder to the latest whisper.cpp version?
@slavanorm Here is a whisper_cpp.py for the new models and the new whisper.cpp: whisper_cpp.zip
You also need to build the libraries (libggml.dll, libwhisper.dll) from the new whisper.cpp and put them in the whisper_cpp_python root path.
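Once the new libraries are built, a small sanity check like the sketch below can confirm that the rebuilt shared library loads and exports the expected whisper.cpp entry point before you drop it into the package directory. The library names and local path are assumptions; adjust them for your platform.

```python
# Sanity-check sketch (assumptions: the rebuilt library is named
# libwhisper.dll / libwhisper.dylib / libwhisper.so depending on platform,
# and sits in the current directory before being copied into the
# whisper_cpp_python root path).
import ctypes
import pathlib
import sys

libname = {
    "win32": "libwhisper.dll",
    "darwin": "libwhisper.dylib",
}.get(sys.platform, "libwhisper.so")

lib = ctypes.CDLL(str(pathlib.Path(".") / libname))

# whisper.cpp builds with quantization support still export
# whisper_init_from_file; if the library loads and the symbol resolves,
# the Python binding should be able to pick it up.
assert hasattr(lib, "whisper_init_from_file")
print("loaded", libname, "and found whisper_init_from_file")
```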