# Benchmarks for xDSL

This repository contains infrastructure for the benchmarking and performance profiling of the xDSL compiler framework.

## Automated regression benchmarking with ASV

> airspeed velocity (asv) is a tool for benchmarking Python packages over their lifetime. Runtime, memory consumption and even custom-computed values may be tracked. The results are displayed in an interactive web frontend that requires only a basic static webserver to host.
>
> -- ASV documentation
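
ASV discovers benchmarks by name prefix: `time_*` methods measure runtime, `peakmem_*` methods measure peak memory, and `track_*` functions record an arbitrary custom value. The sketch below illustrates this convention; the class, method names, and workloads are illustrative, not benchmarks from this repository.

```python
# Illustrative ASV benchmark sketch; names and workloads are hypothetical.
# ASV dispatches on the name prefix:
#   time_*    -> wall-clock runtime
#   peakmem_* -> peak memory consumption
#   track_*   -> a custom value returned by the function


class TimeSuiteExample:
    def setup(self):
        # Runs before each benchmark repeat; excluded from the measurement.
        self.data = list(range(10_000))

    def time_sort(self):
        # Measured: runtime of sorting the prepared list.
        sorted(self.data)

    def peakmem_copy(self):
        # Measured: peak memory while duplicating the list.
        _ = self.data * 10


def track_list_length():
    # Measured: the returned value is recorded and plotted over time.
    return 10_000
```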

Every day, on the cron schedule `0 4 * * *` (04:00 UTC), a GitHub Actions workflow uses ASV to benchmark the 15 most recent commits to the xDSL repository and commits the results to the `.asv/results/github-action` directory of an artefact repository. The interactive web frontend is then built from these results together with the results committed by previous workflow runs, and finally deployed to GitHub Pages.

This web frontend can be found at https://xdsl.dev/xdsl-bench/.

## Profiling

In addition to running under ASV, all benchmarks can be profiled with a variety of tools using the infrastructure in `bench_utils.py`. When a benchmark file is run directly, this provides a simple CLI through which the benchmark and profiling tool can be specified. The help page for this CLI is as follows:

```
uv run python3 BENCHMARK.py --help
usage: BENCHMARK.py [-h] [-o OUTPUT] [-q]
                    {BENCHMARK_NAME,...}
                    {timeit,snakeviz,viztracer,flameprof}

positional arguments:
  {BENCHMARK_NAME,...}
                        name of the benchmark to run
  {timeit,snakeviz,viztracer,flameprof}
                        profiler to use

options:
  -h, --help            show this help message and exit
  -o OUTPUT, --output OUTPUT
                        the directory into which to write out the profile
                        files
  -q, --quiet           don't show the profiler's UI
```

### Example

To use `viztracer` to profile the lexer on `apply_pdl_extra_file.mlir` (the `Lexer.apply_pdl_extra_file` benchmark):

```
uv run python3 components.py Lexer.apply_pdl_extra_file viztracer
```

### Extensibility

This infrastructure can be extended to support further profilers by adding new subcommands and their implementations to `bench_utils.py`.
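
As an illustration only, adding a profiler amounts to offering another choice on the CLI and dispatching to a function that runs the selected benchmark under that tool. The standalone sketch below uses `cProfile` as a hypothetical new profiler; it does not reproduce the actual `bench_utils.py` API, and names such as `profile_cprofile` and `BENCHMARKS` are invented for the example.

```python
# Hypothetical sketch of adding a "cprofile" option; the real hook points
# live in bench_utils.py and may differ.
import argparse
import cProfile
import pstats


def profile_cprofile(benchmark, output_dir: str | None, quiet: bool) -> None:
    """Run `benchmark` under cProfile and optionally dump stats to a file."""
    profiler = cProfile.Profile()
    profiler.enable()
    benchmark()
    profiler.disable()
    if output_dir is not None:
        profiler.dump_stats(f"{output_dir}/profile.prof")
    if not quiet:
        pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("benchmark", help="name of the benchmark to run")
    parser.add_argument("profiler", choices=["timeit", "cprofile"])
    parser.add_argument("-o", "--output")
    parser.add_argument("-q", "--quiet", action="store_true")
    args = parser.parse_args()

    # Maps benchmark names to zero-argument callables; hypothetical here.
    BENCHMARKS = {"example": lambda: sum(range(1_000_000))}
    if args.profiler == "cprofile":
        profile_cprofile(BENCHMARKS[args.benchmark], args.output, args.quiet)
```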

## Benchmarking strategy

For a holistic view of xDSL's performance, we provide three levels of benchmarking granularity. At the highest level, end-to-end benchmarks such as running `xdsl-opt` on MLIR files capture the entire compiler pipeline. Below this, component benchmarks such as lexing MLIR files capture individual stages of the pipeline. Finally, microbenchmarks evaluate properties of the implementation, and are designed to align with existing tests of MLIR.
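
To make the distinction concrete, the sketch below contrasts an end-to-end measurement, which exercises the whole pipeline through the `xdsl-opt` command-line driver, with a microbenchmark that isolates a single implementation detail. The file name and measured bodies are placeholders, not the repository's actual benchmarks.

```python
# Illustrative only: the real benchmarks live in end_to_end.py,
# components.py, and microbenchmarks.py.
import subprocess


class TimeEndToEnd:
    def time_roundtrip(self):
        # Whole-pipeline view: parse, verify, and re-print a file through
        # the xdsl-opt driver. "input.mlir" is a placeholder path.
        subprocess.run(["xdsl-opt", "input.mlir"], capture_output=True, check=True)


class TimeMicro:
    def setup(self):
        self.items = list(range(1_000))

    def time_iteration(self):
        # Implementation-level view: the cost of a tight traversal loop,
        # standing in for e.g. direct block iteration.
        for _ in self.items:
            pass
```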

## List of benchmarks

- End-to-end (`end_to_end.py`)
  - Constant folding
  - Empty program
  - Fused multiply-add
  - Dead code elimination
  - ...
- Component (`components.py`)
  - Lexer
  - Parser
  - Pattern rewriter
  - Printer
  - Verifier
- Microbenchmarks (`microbenchmarks.py`)
  - IR traversal (direct block iteration and walking)
  - Dialect loading
  - Extensibility through interface/trait lookups
  - Operation creation
  - ...
- Import machinery (`importing.py`)
  - Dialects
  - Interpreters
  - ...
