Describe the feature
I'd like to be able to configure an OpenAI-compatible transcription server instead of using the locally embedded whisper.
My use case: I already have a dedicated machine for local AI (notably running ollama and https://github.com/speaches-ai/speaches for whisper). I'd like to make use of the GPU on that machine, which is already serving whisper, and use Vibe as a light, pleasant frontend on my smaller machines.
Exposing an OpenAI-compatible API seems to be the trend nowadays, so supporting one is probably a future-proof way to integrate this project with others.
I couldn't find a way to use anything other than the locally embedded whisper model in the UI, and I haven't found anything about this in the existing issues.
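For reference, here is a minimal sketch of what a request to such a backend could look like, assuming the remote server (e.g. speaches) exposes the standard OpenAI `/v1/audio/transcriptions` route. The address, audio file name, and model name are placeholders, not part of any existing Vibe code; the snippet uses `reqwest` with the `blocking` and `multipart` features enabled.

```rust
use reqwest::blocking::{multipart, Client};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder address of an OpenAI-compatible server on the LAN.
    let base_url = "http://192.168.1.50:8000";

    // Standard OpenAI transcription request: multipart form with the audio
    // file and a model identifier (placeholder name shown here).
    let form = multipart::Form::new()
        .file("file", "recording.wav")?
        .text("model", "Systran/faster-whisper-large-v3");

    let resp = Client::new()
        .post(format!("{base_url}/v1/audio/transcriptions"))
        .multipart(form)
        .send()?
        .error_for_status()?;

    // The response body is JSON containing the transcribed text.
    println!("{}", resp.text()?);
    Ok(())
}
```

In the UI this would only need a base URL, an optional API key, and a model name, with the rest of the request shaped exactly like the OpenAI endpoint above.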