[Model Request] FastEmbed new model request: dunzhang/stella_en_1.5B_v5 #411
Comments
@joein Any possibility of getting this one added?
Hey @richard-deetlefs @PylotLight We might not be able to add this model in the near future. However, as of fastembed v0.6.0 you can add custom models to fastembed at runtime. As far as I can see, the model providers have already converted it to ONNX, so it should not be a problem, as long as the model follows the typical preprocessing/postprocessing steps (just pooling / normalization). There is an example of adding a custom model in the fastembed docs. In this particular case, you would also need to set
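For illustration, here is a rough sketch of what the runtime registration could look like, assuming the custom-model API matches the documented example. The pooling type, output dimension, and ONNX file path below are guesses based on the stella model card, not values fastembed ships; verify them against the actual export before use.

```python
from fastembed import TextEmbedding
from fastembed.common.model_description import ModelSource, PoolingType

# Register stella as a custom model at runtime (fastembed >= 0.6.0).
# NOTE: dim=1024 and MEAN pooling are assumptions taken from the model card;
# stella also applies a dense projection after pooling, so this only works
# if the ONNX export in the HF repo already includes that step.
TextEmbedding.add_custom_model(
    model="dunzhang/stella_en_1.5B_v5",
    pooling=PoolingType.MEAN,
    normalization=True,
    sources=ModelSource(hf="dunzhang/stella_en_1.5B_v5"),
    dim=1024,
    model_file="onnx/model.onnx",  # adjust to the actual ONNX file name in the repo
)

# Once registered, it can be used like any built-in model.
model = TextEmbedding(model_name="dunzhang/stella_en_1.5B_v5")
embeddings = list(model.embed(["What is vector search?"]))
print(len(embeddings[0]))  # should match dim above
```

If the exported ONNX graph does not already include stella's projection layer, the plain pooling/normalization path above will not reproduce the reference embeddings, which is the caveat mentioned above.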
Got it. Thanks.
Great! Thanks! Do you have plans to do the same for sparse text embeddings, late interaction models, and the others?
Eventually, yes, we'd like to add this feature to the other model types as well. However, it might not be feasible for some of them, because their preprocessing/postprocessing steps are too model-specific to reuse. Image models are the most likely candidates to get custom model support next.
Good day,
Would you please add dunzhang/stella_en_1.5B_v5 to FastEmbed? It is not thaaat big (5.75 GB), and it has an excellent MTEB retrieval score and an MIT license.
I am also a paying Qdrant Cloud user, if that helps with the argument ;)
Thanks!