Add Containerized Benchmarking Support for GuideLLM #123

Open
wants to merge 10 commits into main
4 changes: 0 additions & 4 deletions .gitignore
@@ -17,7 +17,6 @@ __pycache__/
 
 # Distribution / packaging
 .Python
-build/
 develop-eggs/
 dist/
 downloads/
@@ -198,9 +197,6 @@ coverage/
 /src/ui/.next/
 /src/ui/out/
 
-# production
-build/
-
 # misc
 *.pem
 
48 changes: 48 additions & 0 deletions build/Containerfile
@@ -0,0 +1,48 @@
ARG PYTHON=3.13

# Use a multi-stage build to create a lightweight production image
FROM docker.io/python:${PYTHON}-slim as builder

# Copy repository files
COPY / /src

# Create a venv and install guidellm
RUN python3 -m venv /opt/guidellm \
    && /opt/guidellm/bin/pip install --no-cache-dir /src

# Copy entrypoint script into the venv bin directory
RUN install -m0755 /src/build/entrypoint.sh /opt/guidellm/bin/entrypoint.sh

# Prod image
FROM docker.io/python:${PYTHON}-slim

# Copy the virtual environment from the builder stage
COPY --from=builder /opt/guidellm /opt/guidellm

# Add guidellm bin to PATH
ENV PATH="/opt/guidellm/bin:$PATH"

# Create a non-root user
RUN useradd -md /results guidellm

# Switch to non-root user
USER guidellm

# Set working directory
WORKDIR /results

# Metadata
LABEL org.opencontainers.image.source="https://github.com/neuralmagic/guidellm"
LABEL org.opencontainers.image.description="GuideLLM Benchmark Container"

# Set the environment variable for the benchmark script
# TODO: Replace with scenario environment variables
ENV GUIDELLM_TARGET="http://localhost:8000" \
    GUIDELLM_MODEL="neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16" \
    GUIDELLM_RATE_TYPE="sweep" \
    GUIDELLM_DATA="prompt_tokens=256,output_tokens=128" \
    GUIDELLM_MAX_REQUESTS="100" \
    GUIDELLM_MAX_SECONDS="" \
    GUIDELLM_OUTPUT_PATH="/results/results.json"

ENTRYPOINT [ "/opt/guidellm/bin/entrypoint.sh" ]
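
As a usage sketch (not part of this diff): the image could be built and run roughly as follows. The guidellm:latest tag, the vLLM server URL, and the host results directory are assumptions for illustration; the same commands work with docker in place of podman.

# Build the image from the repository root (the Containerfile lives under build/)
podman build -f build/Containerfile -t guidellm:latest .

# Run a benchmark against an existing OpenAI-compatible endpoint, overriding the
# defaults baked into the image via GUIDELLM_* environment variables, and collect
# results.json from the mounted /results volume
podman run --rm \
    -e GUIDELLM_TARGET="http://vllm-server:8000" \
    -e GUIDELLM_MAX_REQUESTS="500" \
    -v "$(pwd)/results:/results" \
    guidellm:latest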
43 changes: 43 additions & 0 deletions build/entrypoint.sh
@@ -0,0 +1,43 @@
#!/usr/bin/env bash
set -euo pipefail

# Path to the guidellm binary
guidellm_bin="/opt/guidellm/bin/guidellm"

# If we receive any arguments, pass them straight through to the guidellm command
if [ $# -gt 0 ]; then
    echo "Running command: guidellm $*"
    exec "$guidellm_bin" "$@"
fi

# Get a list of environment variables that start with GUIDELLM_
args="$(printenv | cut -d= -f1 | grep -E '^GUIDELLM_')"

# NOTE: Bash array + exec prevent shell escape issues
CMD=("${guidellm_bin}" "benchmark")

# Parse environment variables for the benchmark command
for var in $args; do
    # Remove GUIDELLM_ prefix
    arg_name="${var#GUIDELLM_}"

    # If there is an extra underscore at the start, then this is a
    # config (GUIDELLM__*) variable rather than a CLI flag; skip it
    if [ "${arg_name:0:1}" == "_" ]; then
        continue
    fi

    # Convert to lowercase
    arg_name="${arg_name,,}"
    # Replace underscores with dashes
    arg_name="${arg_name//_/-}"

    # Add the argument to the command array if set
    if [ -n "${!var}" ]; then
        CMD+=("--${arg_name}" "${!var}")
    fi
done

# Execute the command
echo "Running command: ${CMD[*]}"
exec "${CMD[@]}"