Feature Request: Local LLM Integration for Uninterrupted Project Work #113

Open
@SlavenDj

Description


As a developer working on my project, one of the main challenges I've encountered is the limitation of relying on external language models, especially when I hit usage limits or other interruptions. This disrupts my workflow, since I often have to wait before I can continue. To improve my experience and efficiency, I propose adding a feature that lets me select and use a local LLM that I have installed via Ollama.

Problem:

Currently, depending on an external LLM service limits my productivity: whenever I hit a usage limit, my project grinds to a halt, causing delays and interruptions. I understand the need to keep costs manageable, but I would prefer not to subscribe to additional services or plans. Being able to use a local LLM would let me keep working seamlessly, free of external limits, saving me time and frustration.
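
To make the request concrete, here is a rough sketch of what the integration could look like when talking to a locally running Ollama instance. It uses Ollama's standard `/api/generate` endpoint on the default port 11434; the model name `llama3` is just an example of a model someone might have pulled locally, and the function name is hypothetical, not part of this project's API:

```python
import requests

# Ollama's default local endpoint for single-turn generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def generate_local(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally running Ollama model and return its reply."""
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,  # local models can be slow on modest hardware
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(generate_local("Summarize the open TODOs in this project."))
```

If the tool exposed a setting for the base URL and model name, no usage limits or network availability would block local work.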


Written by GPT because I'm not very good at writing GitHub issues. xD

Metadata

Labels: enhancement (New feature or request)