This repository has been archived by the owner on Mar 8, 2022. It is now read-only.

Microbenchmarks

bialesdaniel edited this page Dec 12, 2020 · 3 revisions

Overview

The microbenchmark tests are designed to capture the performance of different components of the system. The test results are stored so that the tests do not need to be part of the normal CI process and trends can be tracked over time. Alerts have been created so that a Slack message is posted if a microbenchmark's throughput falls 2 standard deviations below the mean for two consecutive runs of the microbenchmark tests.
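The alert condition described above can be sketched as a small check. This is an illustrative sketch, not the pipeline's actual code; the function name `should_alert` and its arguments are hypothetical.

```python
import statistics

def should_alert(history, latest_two, n_stddev=2.0):
    """Return True if both of the two most recent runs fall more than
    n_stddev standard deviations below the historical mean throughput.

    history: historical throughput values (e.g. items/sec).
    latest_two: throughputs of the two most recent runs.
    All names here are illustrative, not the actual pipeline's API.
    """
    mean = statistics.mean(history)
    stddev = statistics.stdev(history)
    threshold = mean - n_stddev * stddev
    # Alert only when BOTH consecutive runs are below the threshold,
    # which filters out one-off noisy runs.
    return all(t < threshold for t in latest_two)
```

Requiring two consecutive low runs trades alert latency for fewer false positives from a single noisy benchmark execution.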

Technologies

Runtime View

The Jenkins pipeline executes the microbenchmarks script, which runs all the Google microbenchmarks. After the microbenchmarks have completed, the script sends the results to the performance storage service via a RESTful API call. The performance storage service validates the payload, converts it into a format suitable for storage, and stores it in the time-series database. Grafana queries the database in order to display visualizations of the results.
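The "validate and convert" step can be illustrated with a minimal sketch. The payload shape and field names below (`benchmarks`, `items_per_second`, etc.) are assumptions for illustration; the real format is defined by the performance storage service.

```python
def to_timeseries_rows(payload):
    """Flatten a (hypothetical) microbenchmark result payload into flat
    rows suitable for inserting into a time-series table.

    Field names are illustrative only, not the service's actual schema.
    """
    rows = []
    for bench in payload["benchmarks"]:
        rows.append({
            "time": payload["timestamp"],       # one timestamp per run
            "suite": bench["suite"],            # benchmark suite name
            "name": bench["name"],              # individual benchmark name
            "throughput": bench["items_per_second"],
        })
    return rows
```

Flattening each run into one row per benchmark is what lets Grafana query and plot per-benchmark throughput over time.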

Module View

The following chart shows the code dependencies in the performance storage service that relate to the microbenchmark tests.

Schema

See the Timescaledb Schema wiki for details about the schema.

Future Work

Currently, each time the microbenchmark script runs, it gathers all the historical results archived in Jenkins to calculate the rolling 30-day average and standard deviation. Since we are already storing the results in TimescaleDB, we should create an endpoint in the Performance Storage Service from which the microbenchmark test script can fetch the last 30 days of results and calculate those numbers from the data stored in the database. This would remove the dependency on Jenkins.
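Once such an endpoint exists, the script's side of the calculation is straightforward. This is a sketch under the assumption that the endpoint returns timestamped throughput values; `rolling_stats` and the data shape are hypothetical.

```python
import statistics
from datetime import datetime, timedelta

def rolling_stats(results, now, window_days=30):
    """Compute the mean and standard deviation of throughput over the
    trailing window.

    results: list of (timestamp, throughput) pairs, e.g. as returned by a
    hypothetical GET endpoint on the Performance Storage Service.
    """
    cutoff = now - timedelta(days=window_days)
    # Keep only results inside the trailing 30-day window.
    recent = [tp for ts, tp in results if ts >= cutoff]
    return statistics.mean(recent), statistics.stdev(recent)
```

With the database as the source of truth, the window query could also be pushed into TimescaleDB itself rather than computed client-side.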