On older versions of Python, skip benchmarks that use features introduced in newer Python versions #283
All our benchmarks have a `requires-python` field in their `pyproject.toml` files, e.g. `pyperformance/pyperformance/data-files/benchmarks/bm_2to3/pyproject.toml` (line 3 in 974e29c).

The `requires-python` field is added to the metadata of each benchmark as a `python` field in `pyperformance/pyperformance/_benchmark_metadata.py` (line 22 and lines 193 to 202 in 974e29c).
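For illustration, here is a minimal sketch of how a `requires-python` specifier can be read out of a benchmark's `pyproject.toml` and stored as a plain string. It uses the stdlib `tomllib` (Python 3.11+) rather than whatever parser `_benchmark_metadata.py` actually uses, and the helper name is hypothetical:

```python
import tomllib  # stdlib since Python 3.11; pyperformance may use a different TOML parser


def load_python_requirement(pyproject_path):
    """Hypothetical helper: return the PEP 621 requires-python specifier.

    Given a pyproject.toml containing, e.g.:

        [project]
        name = "pyperformance_bm_2to3"
        requires-python = ">=3.8"

    this returns the string ">=3.8" (or "" if the field is absent).
    """
    with open(pyproject_path, "rb") as f:  # tomllib requires binary mode
        data = tomllib.load(f)
    return data.get("project", {}).get("requires-python", "")
```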
We can use that metadata to create a new `python` property on `Benchmark` objects, which returns a `packaging.specifiers.SpecifierSet` instance. This property can then easily be used to filter out benchmarks that require a higher version of Python than the one pyperformance is running on.

Fixes #281. Unblocks #280 and #268.
I haven't added a test for this -- I was unsure whether one was necessary and, if so, where it should go. I'm happy to add one if that would be helpful and there's an obvious place for it, however!
I manually tested adding a benchmark that used new-in-Python-3.8 features in 1734919, and the CI passed fine (adding the same benchmark to `main` currently causes the CI to fail — see #280). Passing CI run on my GitHub fork: https://github.com/AlexWaygood/pyperformance/actions/runs/4813903519