A framework for evaluating open-ended tool use across various large language models.

`mcpx-eval` can be used to compare the output of different LLMs with the same prompt for a given task using mcp.run tools. This means we're not only interested in the quality of the output, but also in the helpfulness of various models when presented with real-world tools.
The `tests/` directory contains pre-defined evals.

Install with `uv`:

```
uv tool install mcpx-eval
```

Or from git:
```
uv tool install git+https://github.com/dylibso/mcpx-eval
```

Run the `my-test` test for 10 iterations:

```
mcpx-eval test --model ... --model ... --config my-test.toml --iter 10
```
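For example, with hypothetical model names (substitute whichever models you have configured):

```
mcpx-eval test --model claude-3-5-sonnet-latest --model gpt-4o --config my-test.toml --iter 10
```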
Or run a task directly from mcp.run:
```
mcpx-eval test --model .. --model .. --task my-task --iter 10
```
Generate an HTML scoreboard for all evals:
```
mcpx-eval gen --html results.html --show
```

A test file is a TOML file containing the following fields:
- `name` - name of the test
- `task` - optional, the name of the mcp.run task to use
- `prompt` - prompt to test; this is passed to the LLM under test and can be left blank if `task` is set
- `check` - prompt for the judge, used to determine the quality of the test output
- `expected-tools` - list of tool names that might be used
- `ignore-tools` - optional, list of tools to ignore; they will not be available to the LLM
- `import` - optional, includes fields from another test TOML file
- `vars` - optional, a dict of variables that will be used to format the prompt
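
Putting these fields together, a minimal test file might look like the following sketch (all values are hypothetical, and the `{city}`-style placeholder substitution used for `vars` is an assumption about how the prompt is formatted):

```toml
# my-test.toml (hypothetical example)
name = "my-test"

# Prompt sent to the LLM under test; {city} is assumed to be filled in from vars
prompt = "What is the current weather in {city}? Use the available tools to find out."

# Prompt for the judge, used to score the quality of the output
check = "The response should report the current weather for the requested city, based on tool output."

# Tool names the model might reasonably call
expected-tools = ["get-weather"]

# Tools that should not be made available to the LLM
ignore-tools = ["send-email"]

[vars]
city = "Paris"
```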