Description:
As a developer working on my project, one of the main challenges I've encountered is the limitation of relying on external language models, especially when I hit usage limits or run into interruptions. These disruptions stall my workflow, since I often have to wait before I can continue. To improve my experience and efficiency, I propose adding a feature that lets me select and use a local LLM that I have installed via Ollama.
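For illustration only, here is a rough sketch of what talking to a locally running Ollama instance could look like. Ollama exposes an HTTP API on `localhost:11434` by default, with `/api/tags` for listing installed models and `/api/generate` for completions; the helper names below are placeholders I made up, not a proposal for the actual implementation.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def list_local_models() -> list[str]:
    """Return the names of locally installed models via Ollama's /api/tags."""
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

def generate(model: str, prompt: str) -> str:
    """Send a single non-streaming completion request to /api/generate."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    models = list_local_models()
    print("Installed models:", models)
    if models:
        # Using the first model is just a demo choice; the feature would let
        # the user pick whichever local model they prefer.
        print(generate(models[0], "Hello from a local model!"))
```

The point is simply that a model picker could be populated from `/api/tags`, and requests routed to the local endpoint instead of an external service whenever the user selects a local model.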
Problem:
Currently, relying on an external LLM service limits my productivity: whenever I hit a usage limit, my project comes to a halt, causing delays and interruptions. I would prefer not to subscribe to additional services or plans, while I do understand the need to keep costs manageable. Being able to fall back on a local LLM would let me continue working seamlessly, free of external limitations, saving me time and frustration.
Written by GPT because I'm not very good at writing GitHub issues. xD