Large Model Proxy is designed to make it easy to run multiple resource-heavy Large Models (LMs) on the same machine with a limited amount of VRAM/other resources. It listens on a dedicated port for each proxied LM and/or on a single port for the OpenAI API, making LMs always available to the clients connecting to these ports.
Upon receiving a connection, if the LM served on that port (or the model specified in the JSON payload of an OpenAI API request) is not already started, it will:
- Verify if the required resources are available to start the corresponding LM.
- If resources are not available, it will automatically stop the least recently used LM to free up contested resources.
- Start the LM.
- Wait for the LM to be available on the specified port.
- Wait for the healthcheck to pass.
Then it will proxy the connection between the client and the LM. To the client, this should be fully transparent, with the only exception being that receiving data on the connection takes longer when the LM had to be spun up first.
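For instance, assuming a llama.cpp service is proxied on port 8081 (as in the example configuration below), only the first request after the model has been idle pays the spin-up cost. This is a hypothetical session; the `/health` endpoint belongs to llama.cpp's built-in server, not to large-model-proxy:

```shell
# First request: large-model-proxy starts llama.cpp, waits for it to listen on
# its target port and for the healthcheck to pass, then forwards the
# connection, so this call includes the model load time.
time curl -s http://localhost:8081/health

# Second request: the model is already running, so the proxy simply forwards
# the connection and the response is immediate.
time curl -s http://localhost:8081/health
```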
To install large-model-proxy:

Ubuntu and Debian: Download the `.deb` file attached to the latest release.

Arch Linux: Install it from the AUR.
Other Distros:
- Install Go
- Clone the repository:
  ```shell
  git clone https://github.com/perk11/large-model-proxy.git
  ```
- Navigate into the project directory:
  ```shell
  cd large-model-proxy
  ```
- Build the project:
  ```shell
  go build -o large-model-proxy
  ```
  or
  ```shell
  make
  ```
Windows: Not currently tested, but it should work in WSL using the "Ubuntu and Debian" or "Other Distros" instructions. It will probably not work on Windows natively, as it uses Unix process groups.
macOS: Might work using the "Other Distros" instructions, but I do not own any Apple devices to check, so please let me know if it does!
Configuration supports JSONC (JSON with comments and trailing commas). Below is an example `config.jsonc`:
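The sketch below is heavily abridged and only meant to illustrate the overall shape described in the breakdown that follows. Key names the breakdown does not mention explicitly (`OpenAiApi`, `Services`, `Name`, `ProxyTargetPort`, `KillCommand`), as well as all commands, paths, and image names, are hypothetical placeholders rather than the project's exact schema:

```jsonc
{
  // Ports clients connect to for the OpenAI API and the management server:
  "OpenAiApi": { "ListenPort": 7070 },        // hypothetical key name
  "ManagementApi": { "ListenPort": 7071 },
  // URL template used when a service has no explicit ServiceUrl:
  "DefaultServiceUrl": "http://localhost:{{.PORT}}/",
  // User-defined resource metrics (GB here); values are not verified:
  "ResourcesAvailable": { "VRAM": 24, "RAM": 32 },
  "Services": [                               // hypothetical key name
    {
      "Name": "Automatic1111",
      "ListenPort": 7860,
      "ProxyTargetPort": 17860,               // hypothetical key name
      "Command": "/opt/stable-diffusion-webui/webui.sh --port 17860",
      "ResourceRequirements": { "VRAM": 3, "RAM": 30 }
    },
    {
      "Name": "Gemma27B",
      "ListenPort": 8081,
      "ProxyTargetPort": 18081,
      "Command": "/opt/llama.cpp/llama-server --port 18081 -m gemma-2-27b-it.gguf",
      "ServiceUrl": "http://gemma-proxy-server/",  // static URL, no templating
      "ResourceRequirements": { "VRAM": 20, "RAM": 3 }
    },
    {
      // No ListenPort: only reachable through the OpenAI API on port 7070.
      "Name": "Qwen2.5-7B-Instruct",
      "ProxyTargetPort": 18082,
      "Command": "vllm serve Qwen/Qwen2.5-7B-Instruct --port 18082",
      "LogFilePath": "/var/log/Qwen2.5-7B.log",
      "ServiceUrl": null,                     // suppress URL generation
      "ConsiderStoppedOnProcessExit": false,
      "ResourceRequirements": { "VRAM": 18, "RAM": 0 }
    },
    {
      "Name": "ComfyUI",
      "ListenPort": 8188,
      "ProxyTargetPort": 18188,
      "Command": "docker run --name comfyui -p 18188:8188 comfyui-image",
      "KillCommand": "docker kill comfyui",   // hypothetical key name
      "ResourceRequirements": { "VRAM": 20, "RAM": 16 }
    }
  ]
  // Healthcheck settings and the OpenAI model-name mapping are omitted here.
}
```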
Below is a breakdown of what this configuration does:
- Any client can access the following services:
  - Automatic1111's Stable Diffusion web UI on port 7860
  - llama.cpp with Gemma2 on port 8081
  - OpenAI API on port 7070, supporting Gemma2 via llama.cpp and Qwen2.5-7B-Instruct via vLLM, depending on the `model` specified in the JSON payload
  - Management server on port 7071 (optional). If `ManagementApi.ListenPort` is not specified in the config, the management server will not run.
  - ComfyUI on port 8188, through a Docker container that exposes the container's internal port 8188 as 18188 on the host, which is then proxied back to 8188 when active
- Internally, large-model-proxy will expect Automatic1111 to be available on port 17860, Gemma27B on port 18081, Qwen2.5-7B-Instruct on port 18082, and ComfyUI on port 18188 once it runs the commands given in the `Command` parameter and the healthcheck passes.
- This config allocates up to 24GB of VRAM and 32GB of RAM to these services; large-model-proxy will not attempt to use any more GPU memory or RAM than that (assuming the values in `ResourceRequirements` are correct).
- The Stable Diffusion web UI is expected to use up to 3GB of VRAM and 30GB of RAM, while Gemma27B will use up to 20GB of VRAM and 3GB of RAM, Qwen2.5-7B-Instruct up to 18GB of VRAM and no RAM (for example's sake), and ComfyUI up to 20GB of VRAM and 16GB of RAM.
- Automatic1111, Gemma2, and ComfyUI logs will be in the `logs/` directory of the current dir, while Qwen logs will be in `/var/log/Qwen2.5-7B.log`.
- When ComfyUI is no longer in use, its container will be killed using the `docker kill comfyui` command. Other services will be terminated normally.
- Service URLs are configured as follows:
  - Automatic1111: Uses the default URL template (`DefaultServiceUrl`), which resolves to `http://localhost:7860/`
  - Gemma27B: Uses a custom static URL, `http://gemma-proxy-server/` (no port templating)
  - Qwen2.5-7B-Instruct: Explicitly set to `null`, so no URL will be generated even though a default is available
  - ComfyUI: No `ServiceUrl` specified, so it uses the default template, resolving to `http://localhost:8188/`

  These URLs appear in the management API responses and make service names clickable in the web dashboard for easy access to service interfaces.
- No `ListenPort` in the Qwen configuration makes it only available via the OpenAI API.
- Because `ConsiderStoppedOnProcessExit` is set to `false`, large-model-proxy will continue to consider Qwen running even if its process terminates. This is useful if a process detaches from large-model-proxy.
With this configuration, Qwen and Automatic1111 can run at the same time. Assuming they do, a request for Gemma will unload whichever of the two was least recently used. If both are currently in use, the request for Gemma will have to wait until one of them frees up.
`ResourcesAvailable` can include any resource metrics: CPU cores, multiple VRAM values for multiple GPUs, etc. These values are not checked against actual usage.
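For example, a machine with two GPUs might declare something like the following; the metric names and units are arbitrary and presumably only need to line up with the names used in each service's `ResourceRequirements`:

```jsonc
"ResourcesAvailable": {
  "VRAM-GPU-1": 24,  // GB on the first GPU
  "VRAM-GPU-2": 16,  // GB on the second GPU
  "RAM": 64,         // GB of system memory
  "CPU-Cores": 32    // or any other metric you want tracked
}
```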
To run large-model-proxy:

```shell
./large-model-proxy -c path/to/config.json
```

If the `-c` argument is omitted, `large-model-proxy` will look for `config.json` in the current directory.
Currently, the following OpenAI API endpoints are supported:
- `/v1/completions`
- `/v1/chat/completions`
- `/v1/models` (this one makes it work with, e.g., Open WebUI seamlessly)
- More to come
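For instance, with the example configuration above, requests to the shared OpenAI API port only need to name the model, and large-model-proxy starts the matching service on demand before forwarding the request (port 7070 and the model name below come from that example; substitute your own):

```shell
# List the models the proxy exposes (this is what e.g. Open WebUI calls):
curl http://localhost:7070/v1/models

# Chat completion; the "model" field decides which service gets started:
curl http://localhost:7070/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen2.5-7B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```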
The management API is a simple HTTP API that allows you to get the status of the proxy and the services it is proxying.
To enable it, you need to specify `ManagementApi.ListenPort` in the config.
Access the web dashboard at `http://localhost:{ManagementApi.ListenPort}/` for a real-time view of all services, their status, resource usage, and active connections. The dashboard automatically refreshes every second and does the following:
- shows all services with their current state (running/stopped), listen ports, active connections, and last used timestamps
- when services have URLs configured, their names become clickable links that open the service's web interface in a new tab
- allows sorting by name, active connections, or last used time
`GET /status`: Returns a JSON object with comprehensive proxy and service information.
Response fields:
- `services`: A list of all services managed by the proxy.
- `resources`: A map of all resources managed by the proxy and their usage.
Each service in the `services` array includes the following fields:
- `name`: Service name
- `listen_port`: Port the service listens on
- `is_running`: Whether the service is currently running
- `active_connections`: Number of active connections to the service
- `last_used`: Timestamp when the service was last used (for running services)
- `service_url`: The rendered service URL (if configured), or `null` if no URL is available
- `resource_requirements`: Resources required by the service
The `service_url` field is generated from the service's `ServiceUrl` or `DefaultServiceUrl` configuration, with the `{{.PORT}}` template variable replaced by the service's `ListenPort`. The web dashboard consumes this endpoint to display real-time status information and enable clickable service navigation.
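For example, `curl http://localhost:7071/status` against the example configuration might return something shaped like the following; the field names are as documented above, but the values, the timestamp format, and the exact shape of the `resources` map are illustrative guesses:

```jsonc
{
  "services": [
    {
      "name": "Gemma27B",
      "listen_port": 8081,
      "is_running": true,
      "active_connections": 1,
      "last_used": "2025-01-01T12:00:00Z",
      "service_url": "http://gemma-proxy-server/",
      "resource_requirements": { "VRAM": 20, "RAM": 3 }
    },
    {
      "name": "Qwen2.5-7B-Instruct",
      "is_running": false,
      "active_connections": 0,
      "service_url": null,
      "resource_requirements": { "VRAM": 18, "RAM": 0 }
    }
  ],
  "resources": { /* per-resource totals and current usage */ }
}
```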
Output from each service is logged to a separate file. The default is to log to `logs/{name}.log`, but this can be redefined by specifying the `LogFilePath` parameter for each service.
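For example, inside a service entry (using the hypothetical schema sketched earlier), the Qwen service from the example configuration redirects its output like this:

```jsonc
{
  "Name": "Qwen2.5-7B-Instruct",            // hypothetical key name
  "LogFilePath": "/var/log/Qwen2.5-7B.log"  // instead of the default logs/Qwen2.5-7B-Instruct.log
  // ...rest of the service definition
}
```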
Pull requests and feature suggestions are welcome.
Due to its concurrent nature, race conditions are very common in large-model-proxy, making manual testing impractical. Therefore, I am striving to have close to 100% automated test coverage.
If you're adding a new feature, implementing automated tests for it will be required before it can be merged.
Both `large-model-proxy` and `test-server/test-server` must be built before running tests. We recommend using `make test`, which will ensure these are built before proceeding to run the tests in verbose mode.
I review all the issues submitted on GitHub, including feature suggestions.
Feel free to join my Telegram group for any feedback, questions, and collaboration: https://t.me/large_model_proxy