Ollama support #10

Ollama is a very popular backend for running local models with a large library of supported models. It would be great to see Ollama support.

Comments
Does it expose an OpenAI endpoint? If it does, then support should be simple.
It doesn't, but you can use LiteLLM to wrap the endpoint and make it OpenAI-compatible.
Good idea, I'll work on it soon; I've been super busy the last few months. I'd appreciate it if you could try things out and let me know if you face any issues. It should be simple, just pointing the middleware to the Ollama/LiteLLM OpenAI port.
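As a rough illustration of the "just point the middleware at the OpenAI port" idea: a minimal sketch, assuming a LiteLLM proxy is already running in front of Ollama on localhost:4000 (the port, API key, and model name here are placeholders, not values from this project).

```python
# Minimal sketch: any OpenAI-compatible client (or middleware) only needs
# its base URL swapped to the LiteLLM proxy sitting in front of Ollama.
# Assumption: a LiteLLM proxy is listening on localhost:4000 and routes
# to an Ollama model; port and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # LiteLLM proxy, not api.openai.com
    api_key="anything",                # the proxy does not require a real OpenAI key
)

resp = client.chat.completions.create(
    model="ollama/llama3",  # whichever model the proxy is configured to serve
    messages=[{"role": "user", "content": "Hello from the middleware"}],
)
print(resp.choices[0].message.content)
```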
So it's possible to run Ollama in Docker, and let's say it exposes the usual localhost:11434 port; then using LiteLLM you can turn that port into an OpenAI-compatible endpoint? Or do you need to run the Ollama model directly from LiteLLM?
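A minimal sketch of the first option, assuming the Ollama container is already serving on localhost:11434 and a model has been pulled (the model name "llama3" is a placeholder): LiteLLM only translates the request format and forwards it to the existing port, so the model keeps running inside the Ollama container rather than inside LiteLLM.

```python
# Minimal sketch: LiteLLM translates an OpenAI-style call into Ollama's
# native API and forwards it to the container already listening on 11434.
# Assumptions: Ollama runs in Docker with the usual `-p 11434:11434`
# mapping and the placeholder model "llama3" has been pulled.
import litellm

response = litellm.completion(
    model="ollama/llama3",              # the "ollama/" prefix routes to the Ollama provider
    api_base="http://localhost:11434",  # the existing Docker port mapping
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```

If that assumption holds, nothing needs to run "from" LiteLLM; it just sits in front of port 11434, either as this library call or as the proxy sketched above.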
I'll try to get on it soon, after fixing the authentication.