Benchmark performance of PyGMT functions #2910
Agree with benchmarking the low-level clib functions, but I have other thoughts about benchmarking the wrappers (see below).
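As a concrete illustration, a low-level clib benchmark could be as small as the sketch below. This is only a hypothetical example (the test name and assertion are not from this issue); it assumes pytest-codspeed's `@pytest.mark.benchmark` marker and that `pygmt.clib.Session` is used as a context manager, as it is in the existing clib tests.

```python
import pytest
from pygmt import clib


@pytest.mark.benchmark
def test_create_destroy_session():
    # Hypothetical benchmark: measure the cost of creating and destroying
    # a GMT API session through the low-level clib wrapper.
    with clib.Session() as lib:
        # Touch the session so the benchmarked work is not trivially empty.
        assert lib.info["version"] is not None
```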
Considering that we will be switching from GMT 6.4.0 to 6.5.0 soon, I almost think we should have at least one benchmark for each of the 60+ PyGMT modules. I had assumed in #2730 (comment) that the benchmarking would be slow, since each benchmark might need to run multiple times, but after reading https://codspeed.io/blog/pinpoint-performance-regressions-with-ci-integrated-differential-profiling, it turns out CodSpeed only runs each benchmark once!
So it might be ok to have 60+ benchmarks running at once.
Or maybe we should just benchmark all tests except those for catching exceptions.
Maybe not all of them, but at least one (or two) for each PyGMT function for now?
I'm OK with that.
Ok, here's the list of test files:
I'm gonna work on the second half (from test_geopandas to test_xyz2grd) in #2911 for now.
Putting this here to remind ourselves to document guidelines for Continuous Benchmarking before closing this issue.
At commit dedfa7a in #2908, and with the follow-up patch in #2952, we set the benchmark workflow to only run on merges to the main branch. Luckily, we can still manually trigger a baseline benchmark run on the main branch when needed.
Description of the desired feature
We are attempting some big refactoring steps in PyGMT to avoid the use of temporary intermediate files (#2730), and will also be updating to GMT 6.5.0 soon, so it would be good to track any performance improvements or regressions in terms of execution speed. Since #835, we have actually measured the execution time of the slowest tests (>0.2s) on CI for #584, but those execution times are not really tracked over time and remain hidden in the CI log files.
So, to better track performance over time, we are setting up a Continuous Benchmarking workflow in #2908. This uses pytest-codspeed, with the results logged to https://codspeed.io/GenericMappingTools/pygmt. The benchmarking is selective however, and will only run on unit tests marked with @pytest.mark.benchmark.
Main question: Which tests do we decide to benchmark?
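For reference, a benchmarked wrapper test would look roughly like the sketch below. The test name and plotting call are illustrative only (not taken from this issue); the sketch assumes pytest-codspeed is installed, which registers the @pytest.mark.benchmark marker so that only marked tests are measured by the benchmark workflow.

```python
import pytest
import pygmt


@pytest.mark.benchmark
def test_basemap():
    # Illustrative benchmark of a high-level wrapper: time a simple basemap
    # plot. Only tests carrying this marker are picked up by the benchmark run.
    fig = pygmt.Figure()
    fig.basemap(region=[0, 10, 0, 10], projection="X10c", frame=True)
```

Locally, such marked tests can be exercised with pytest-codspeed's --codspeed flag; how we wire this into CI is handled by the workflow in #2908.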
Originally posted by @seisman in #2908 (comment)
Other questions: When do we want to run the benchmarks? Should this be enabled on every Pull Request, or perhaps just on Pull Requests that modify certain files? Edit: Answered in #2908 (comment)
Are you willing to help implement and maintain this feature?
Yes