Is your feature request related to a problem? Please describe.
In the past, we have accidentally introduced performance regressions (#694, fixed in #843) when adding features to freud. Our 2.4 release added a custom build type to work around a documentation issue with scikit-build; that change accidentally removed some important compiler optimization flags.
Describe the solution you'd like
Run the benchmarks on PRs before merging them to make sure we haven't accidentally introduced any performance regressions. This can be done by adding the benchmark label to a pull request, as sketched below.
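A minimal sketch of what a label-gated GitHub Actions workflow could look like. The runner, Python version, and benchmark command are placeholders (freud's actual benchmark entry point isn't specified here), so treat this as an illustration of the trigger mechanism rather than a ready-to-merge workflow:

```yaml
# Hypothetical workflow: run benchmarks only on PRs carrying the "benchmark" label.
name: benchmarks

on:
  pull_request:
    types: [opened, synchronize, labeled]

jobs:
  benchmark:
    # Skip the job entirely unless the "benchmark" label is present.
    if: contains(github.event.pull_request.labels.*.name, 'benchmark')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - name: Install package
        run: python -m pip install .
      - name: Run benchmarks
        # Placeholder command; substitute the real benchmark runner.
        run: python benchmarks/run_benchmarks.py
```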
Describe alternatives you've considered
We could run the benchmarks manually, but that is a hassle and unlikely to be done reliably.
Additional context
Using the extra CI time is now feasible because our CI is being converted entirely to GitHub Actions (#951), which gives us more CI time and the ability to host our own runners.
I would be happy to see benchmarks run this way, but I would caution against doing so unless we host our own dedicated runners. Our previous experience (across multiple packages, including freud but also e.g. signac) is that running benchmarks on shared CI machines is so noisy as to be basically useless for this purpose.
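If dedicated runners are set up, the job in the sketch above could be pinned to them by swapping the runner line; the extra labels here are hypothetical:

```yaml
    # Target self-hosted runners instead of shared GitHub-hosted machines.
    runs-on: [self-hosted, linux, benchmark]
```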