Inference requests are stored in a prioritized data structure. The priority of a request can be set via a custom header value. The priority values are categorical (e.g. LOW, HIGH). Workers retrieve jobs from the data structure according to configurable probabilities (e.g. a worker retrieves the next HIGH priority job with 67% probability and the next LOW priority job with 33% probability).
The feature is optional and backwards compatible: requests that do not set the header retain the current FIFO queue behavior. A sketch of the idea follows below.
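For illustration only, here is a minimal sketch of what such a structure could look like. The header name `X-Request-Priority`, the weight values, and the `PrioritizedRequestQueue` class are assumptions made up for this example, not part of any existing API; the real names and weights would be configurable.

```python
import random
from collections import deque

# Hypothetical header name and retrieval weights -- both would be configurable.
PRIORITY_HEADER = "X-Request-Priority"
RETRIEVAL_WEIGHTS = {"HIGH": 2, "LOW": 1}  # HIGH is drawn twice as often as LOW


class PrioritizedRequestQueue:
    """One FIFO queue per categorical priority, sampled by configurable weights."""

    def __init__(self, weights=RETRIEVAL_WEIGHTS):
        self.weights = dict(weights)
        self.queues = {level: deque() for level in self.weights}

    def put(self, request, headers):
        # Requests without (or with an unknown) priority header would stay on
        # the existing FIFO path; only labelled requests land here.
        level = headers[PRIORITY_HEADER]
        self.queues[level].append(request)

    def get(self):
        # Sample a priority level according to the configured weights,
        # restricted to non-empty queues so no job is ever skipped.
        non_empty = [lvl for lvl, q in self.queues.items() if q]
        if not non_empty:
            return None
        chosen = random.choices(
            non_empty, weights=[self.weights[lvl] for lvl in non_empty], k=1
        )[0]
        return self.queues[chosen].popleft()


# Usage: a worker draws HIGH jobs roughly twice as often as LOW jobs.
queue = PrioritizedRequestQueue()
queue.put("premium job", {PRIORITY_HEADER: "HIGH"})
queue.put("batch job", {PRIORITY_HEADER: "LOW"})
print(queue.get())
```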
Motivation, pitch
In a high-load scenario, users may want to prioritize certain job types over others (e.g. premium users' requests could be given high priority, while jobs that are not time-sensitive could be deprioritized).
Alternatives
In theory, this could be accomplished by serving multiple versions of the same model (one per priority level), but that would use more resources than serving a single model with request prioritization.
Additional context
I implemented this feature as part of my work at @textshuttle, in a form customized to their products' needs. If there is interest, I could create a more general PR. Note that users who do not need this feature can simply not set the header value and retain the current behavior.