Microbenchmarks
The microbenchmark tests are designed to capture the performance of different components of the system. The test results are stored so that the tests do not need to be part of the normal CI process and trends can be tracked over time. Alerts have been created so that a Slack message is posted if a microbenchmark's throughput falls 2 standard deviations below the mean for two consecutive runs of the microbenchmark tests.
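The alert rule above can be sketched as follows. This is an illustrative snippet, not the actual alerting code; the function name and data shapes are assumptions:

```python
import statistics

def is_regression(history, recent_runs, num_stddevs=2.0):
    """Return True if every one of the recent runs falls more than
    `num_stddevs` standard deviations below the mean throughput of
    the historical runs."""
    mean = statistics.mean(history)
    stddev = statistics.stdev(history)
    threshold = mean - num_stddevs * stddev
    return all(run < threshold for run in recent_runs)

# Two consecutive runs below (mean - 2 * stddev) would trigger an alert.
history = [100.0, 102.0, 98.0, 101.0, 99.0]
print(is_regression(history, [90.0, 89.0]))  # prints True
```

A single low run does not trigger the alert; only consecutive runs below the threshold do, which filters out one-off noise in the CI environment.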
- Microbenchmark Script
  - Google Benchmark
  - Python
  - Jenkins API
- Performance Storage Service
  - Django
- Time-Series Database
  - TimescaleDB
- Data Visualization
  - Grafana
The Jenkins pipeline executes the microbenchmark script, which runs all the Google Benchmark microbenchmarks. After the microbenchmarks have completed, the script sends the results to the performance storage service via a RESTful API call. The performance storage service validates the payload, converts it into a format suitable for storage, and stores it in the time-series database. Grafana queries the database to display visualizations of the results.
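The hand-off from the script to the storage service could look like the following sketch. The endpoint route and payload field names are assumptions for illustration; the actual contract is defined by the performance storage service:

```python
import json
import urllib.request

def build_payload(results):
    """Serialize microbenchmark results into a JSON body. The
    {"results": [...]} shape is a hypothetical payload format."""
    return json.dumps({"results": results}).encode("utf-8")

def post_results(base_url, results):
    """POST completed benchmark results to the performance storage
    service and return the HTTP status code."""
    request = urllib.request.Request(
        f"{base_url}/performance-results/",  # hypothetical route
        data=build_payload(results),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status
```

On the service side, a Django view would validate this payload before writing it to TimescaleDB.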
The following chart shows the code dependencies in the performance storage service that relate to the microbenchmark tests.
See the TimescaleDB Schema wiki page for details about the schema.
Currently, each time the microbenchmark script runs, it gathers all the historical results archived in Jenkins to calculate the rolling 30-day average and standard deviation. Since the results are already stored in TimescaleDB, we should create an endpoint in the Performance Storage Service from which the microbenchmark script can fetch the last 30 days of results and calculate those numbers from the data in the database. This would remove the dependency on Jenkins.
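One way to split the work is for the proposed endpoint to return the raw results for the trailing window and leave the statistics to the script. A minimal sketch of the client-side calculation, assuming the endpoint returns (timestamp, throughput) pairs:

```python
import statistics
from datetime import datetime, timedelta, timezone

def summarize_window(rows, days=30, now=None):
    """Compute the mean and standard deviation of throughput over the
    trailing `days`-day window. Each row is assumed to be a
    (timestamp, throughput) pair, as the proposed endpoint might
    return them; the shape is an assumption, not the actual API."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    window = [throughput for ts, throughput in rows if ts >= cutoff]
    return statistics.mean(window), statistics.stdev(window)
```

The mean and standard deviation computed here feed directly into the 2-standard-deviation alert check described above, with TimescaleDB rather than the Jenkins archive as the source of truth.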