
add ability to run experiments in parallel with different hyper params #124

Triggered via pull request October 28, 2024 17:09
Status Success
Total duration 18m 34s
Artifacts

py-bench.yml

on: pull_request

Annotations

1 warning and 2 notices
benchmark
The following actions use a deprecated Node.js version and will be forced to run on node20: actions/cache@v3. For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/
Benchmark results: python/Makefile#L1
create_5_000_run_trees: Mean +- std dev: 567 ms +- 45 ms
create_10_000_run_trees: Mean +- std dev: 1.12 sec +- 0.05 sec
create_20_000_run_trees: Mean +- std dev: 1.12 sec +- 0.06 sec
dumps_class_nested_py_branch_and_leaf_200x400: Mean +- std dev: 764 us +- 12 us
dumps_class_nested_py_leaf_50x100: Mean +- std dev: 26.7 ms +- 0.2 ms
dumps_class_nested_py_leaf_100x200: Mean +- std dev: 111 ms +- 5 ms
dumps_dataclass_nested_50x100: Mean +- std dev: 27.1 ms +- 0.3 ms
dumps_pydantic_nested_50x100: Mean +- std dev: 57.8 ms +- 6.4 ms
  WARNING: the benchmark result may be unstable; the standard deviation (6.42 ms) is 11% of the mean (57.8 ms)
dumps_pydanticv1_nested_50x100: Mean +- std dev: 212 ms +- 29 ms
  WARNING: the benchmark result may be unstable; the standard deviation (29.2 ms) is 14% of the mean (212 ms)

For the unstable benchmarks, pyperf suggests rerunning with more runs, values and/or loops, running 'python -m pyperf system tune' to reduce system jitter, using pyperf stats, pyperf dump and pyperf hist to analyze results, and the --quiet option to hide these warnings.
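The "Mean +- std dev" lines above come from pyperf, which times each benchmark over many runs and reports the spread. A minimal stdlib-only sketch of the same idea follows; the payload shape and the `bench` harness are illustrative assumptions, not the project's actual benchmark code, which pyperf drives in CI.

```python
import json
import statistics
import timeit

def build_nested(depth: int, width: int) -> dict:
    """Hypothetical stand-in for the nested payloads named in the benchmarks."""
    if depth == 0:
        return {"leaf": True}
    return {f"child_{i}": build_nested(depth - 1, width) for i in range(width)}

def bench(func, loops: int = 10, runs: int = 5):
    """Return (mean, stdev) of seconds per call, in the spirit of pyperf's report."""
    timings = [timeit.timeit(func, number=loops) / loops for _ in range(runs)]
    return statistics.mean(timings), statistics.stdev(timings)

payload = build_nested(3, 5)
mean, stdev = bench(lambda: json.dumps(payload))
print(f"dumps_nested: Mean +- std dev: {mean * 1e3:.3f} ms +- {stdev * 1e3:.3f} ms")
```

A high relative standard deviation, like the 11% and 14% flagged above, usually means system jitter dominated the measurement, which is why pyperf recommends more runs or `python -m pyperf system tune`.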
Comparison against main: python/Makefile#L1
+-----------------------------------+---------+-----------------------+
| Benchmark                         | main    | changes               |
+===================================+=========+=======================+
| dumps_class_nested_py_leaf_50x100 | 26.8 ms | 26.7 ms: 1.00x faster |
+-----------------------------------+---------+-----------------------+
| Geometric mean                    | (ref)   | 1.00x faster          |
+-----------------------------------+---------+-----------------------+

Benchmark hidden because not significant (8): dumps_pydanticv1_nested_50x100, create_10_000_run_trees, dumps_dataclass_nested_50x100, dumps_pydantic_nested_50x100, dumps_class_nested_py_branch_and_leaf_200x400, create_5_000_run_trees, dumps_class_nested_py_leaf_100x200, create_20_000_run_trees
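The "Geometric mean" row summarizes the comparison by averaging the per-benchmark speedup ratios (main time divided by changes time) geometrically, so a value of 1.00x means no overall change. A small sketch, using only the one significant benchmark's numbers from the table above:

```python
import statistics

# Times in ms taken from the comparison table; only the one benchmark
# pyperf considered significant is listed here.
times_ms = {
    "dumps_class_nested_py_leaf_50x100": (26.8, 26.7),  # (main, changes)
}

ratios = [main / changes for main, changes in times_ms.values()]
gmean = statistics.geometric_mean(ratios)
print(f"Geometric mean: {gmean:.2f}x faster")  # prints "Geometric mean: 1.00x faster"
```

The geometric mean is used rather than the arithmetic mean so that a 2x speedup in one benchmark and a 2x slowdown in another cancel out exactly.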