
python[patch]: pass Runnable to evaluate #280

Triggered via pull request on November 11, 2024 23:04
Status: Success
Total duration: 21m 54s

Workflow: py-bench.yml (on: pull_request)
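
For context, a minimal sketch of what the change under test might look like in use, assuming the patch lets a LangChain Runnable be passed directly as the target of langsmith's evaluate(); the chain, model, and dataset name below are hypothetical:

```python
# Hypothetical usage sketch: per the PR title, a Runnable can now be passed
# to evaluate(), so a composed chain stands in for the target here.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langsmith.evaluation import evaluate

# A simple Runnable: a prompt piped into a chat model (names are assumptions).
chain = ChatPromptTemplate.from_messages(
    [("user", "{question}")]
) | ChatOpenAI(model="gpt-4o-mini")

# Previously the target had to be a plain callable; per the PR title, the
# Runnable itself can now be handed to evaluate().
results = evaluate(
    chain,                      # Runnable passed directly as the target
    data="example-qa-dataset",  # hypothetical dataset name
)
```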

Annotations

1 warning and 2 notices
benchmark
The following actions use a deprecated Node.js version and will be forced to run on node20: actions/cache@v3. For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/
Benchmark results: python/langsmith/evaluation/_arunner.py#L1
create_5_000_run_trees: Mean +- std dev: 620 ms +- 47 ms
create_10_000_run_trees: Mean +- std dev: 1.19 sec +- 0.06 sec
create_20_000_run_trees: Mean +- std dev: 1.20 sec +- 0.06 sec
dumps_class_nested_py_branch_and_leaf_200x400: Mean +- std dev: 706 us +- 14 us
dumps_class_nested_py_leaf_50x100: Mean +- std dev: 25.7 ms +- 0.5 ms
dumps_class_nested_py_leaf_100x200: Mean +- std dev: 104 ms +- 2 ms
dumps_dataclass_nested_50x100: Mean +- std dev: 25.8 ms +- 0.3 ms

WARNING: the benchmark result may be unstable
* the standard deviation (17.0 ms) is 25% of the mean (68.5 ms)
Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m pyperf system tune' command to reduce the system jitter.
Use pyperf stats, pyperf dump and pyperf hist to analyze results.
Use --quiet option to hide these warnings.

dumps_pydantic_nested_50x100: Mean +- std dev: 68.5 ms +- 17.0 ms

WARNING: the benchmark result may be unstable
* the standard deviation (32.2 ms) is 14% of the mean (224 ms)
Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m pyperf system tune' command to reduce the system jitter.
Use pyperf stats, pyperf dump and pyperf hist to analyze results.
Use --quiet option to hide these warnings.

dumps_pydanticv1_nested_50x100: Mean +- std dev: 224 ms +- 32 ms
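
The "Mean +- std dev" lines above are standard pyperf output. A minimal sketch of how such numbers are produced, with a hypothetical stand-in for the SDK's actual benchmark code:

```python
# Sketch of a pyperf microbenchmark; make_nested_dict is a hypothetical
# stand-in for the nested-payload serialization workloads named above.
import pyperf


def make_nested_dict(depth: int = 5, width: int = 4) -> dict:
    """Build a nested structure, loosely mimicking the nested-payload benchmarks."""
    if depth == 0:
        return {"leaf": True}
    return {f"k{i}": make_nested_dict(depth - 1, width) for i in range(width)}


runner = pyperf.Runner()
# bench_func times repeated calls in worker processes and prints a
# "Mean +- std dev" line like the ones in the log above.
runner.bench_func("make_nested_dict", make_nested_dict)
```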
Comparison against main: python/langsmith/evaluation/_arunner.py#L1
+------------------------------------+----------+------------------------+
| Benchmark                          | main     | changes                |
+====================================+==========+========================+
| dumps_class_nested_py_leaf_100x200 | 105 ms   | 104 ms: 1.01x faster   |
+------------------------------------+----------+------------------------+
| create_20_000_run_trees            | 1.18 sec | 1.20 sec: 1.02x slower |
+------------------------------------+----------+------------------------+
| dumps_class_nested_py_leaf_50x100  | 25.2 ms  | 25.7 ms: 1.02x slower  |
+------------------------------------+----------+------------------------+
| Geometric mean                     | (ref)    | 1.01x slower           |
+------------------------------------+----------+------------------------+

Benchmark hidden because not significant (6): dumps_dataclass_nested_50x100, dumps_class_nested_py_branch_and_leaf_200x400, create_10_000_run_trees, create_5_000_run_trees, dumps_pydanticv1_nested_50x100, dumps_pydantic_nested_50x100
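
The table above is the output of pyperf's comparison step (typically `python -m pyperf compare_to --table`). A minimal sketch of computing the same per-benchmark ratios from two result files, assuming they were saved as main.json and changes.json:

```python
# Sketch: load two pyperf result files and report changed-vs-baseline ratios,
# following pyperf's "Nx faster / Nx slower" convention. The file names are
# assumptions about how the CI stores its results.
import pyperf

baseline = pyperf.BenchmarkSuite.load("main.json")
changed = pyperf.BenchmarkSuite.load("changes.json")

changed_by_name = {b.get_name(): b for b in changed.get_benchmarks()}
for bench in baseline.get_benchmarks():
    other = changed_by_name.get(bench.get_name())
    if other is None:
        continue  # benchmark present on only one side
    ratio = other.mean() / bench.mean()
    if ratio >= 1:
        print(f"{bench.get_name()}: {ratio:.2f}x slower")
    else:
        print(f"{bench.get_name()}: {1 / ratio:.2f}x faster")
```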