fix: trace LLMEvaluator runs #257
Annotations: 1 warning and 2 notices
benchmark
The following actions use a deprecated Node.js version and will be forced to run on node20: actions/cache@v3. For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/
Benchmark results:
python/langsmith/evaluation/llm_evaluator.py#L1
create_5_000_run_trees: Mean +- std dev: 588 ms +- 40 ms
create_10_000_run_trees: Mean +- std dev: 1.13 sec +- 0.05 sec
create_20_000_run_trees: Mean +- std dev: 1.12 sec +- 0.05 sec
dumps_class_nested_py_branch_and_leaf_200x400: Mean +- std dev: 661 us +- 19 us
dumps_class_nested_py_leaf_50x100: Mean +- std dev: 24.1 ms +- 0.8 ms
dumps_class_nested_py_leaf_100x200: Mean +- std dev: 94.6 ms +- 1.0 ms
dumps_dataclass_nested_50x100: Mean +- std dev: 24.2 ms +- 0.7 ms
WARNING: the benchmark result may be unstable
* the standard deviation (14.0 ms) is 22% of the mean (63.3 ms)
Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m pyperf system tune' command to reduce the system jitter.
Use pyperf stats, pyperf dump and pyperf hist to analyze results.
Use --quiet option to hide these warnings.
dumps_pydantic_nested_50x100: Mean +- std dev: 63.3 ms +- 14.0 ms
WARNING: the benchmark result may be unstable
* the standard deviation (27.4 ms) is 13% of the mean (205 ms)
dumps_pydanticv1_nested_50x100: Mean +- std dev: 205 ms +- 27 ms
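The two unstable results above are pyperf working as designed: the reported std dev shrinks as more values are collected per process, which is what the warning recommends. A minimal sketch of such a rerun, assuming pyperf's `Runner` API; `bench_rerun.py` and `nested_payload` are hypothetical stand-ins, not the repo's actual benchmark harness (the real `dumps_*_nested_50x100` cases serialize pydantic models):

```python
# bench_rerun.py -- minimal pyperf sketch (hypothetical stand-in for the
# repo's real serialization benchmarks).
import json

import pyperf


def nested_payload(width: int = 50, depth: int = 100) -> dict:
    """Stand-in fixture: a dict nested `depth` levels deep with `width`-item leaves."""
    node: dict = {"leaf": list(range(width))}
    for _ in range(depth):
        node = {"child": node, "leaf": list(range(width))}
    return node


# Collecting more values per process tightens the std dev, per the warning above.
runner = pyperf.Runner(values=20)
runner.bench_func("dumps_nested_50x100", json.dumps, nested_payload())
```

Running it as `python bench_rerun.py --values 40 --processes 20` overrides those counts from the command line, and `python -m pyperf system tune` (as the warning suggests) adjusts CPU frequency and turbo settings to cut system jitter.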
Comparison against main:
python/langsmith/evaluation/llm_evaluator.py#L1
+-----------------------------------------------+----------+------------------------+
| Benchmark | main | changes |
+===============================================+==========+========================+
| dumps_pydantic_nested_50x100 | 71.7 ms | 63.3 ms: 1.13x faster |
+-----------------------------------------------+----------+------------------------+
| dumps_pydanticv1_nested_50x100 | 228 ms | 205 ms: 1.11x faster |
+-----------------------------------------------+----------+------------------------+
| dumps_class_nested_py_leaf_100x200 | 104 ms | 94.6 ms: 1.10x faster |
+-----------------------------------------------+----------+------------------------+
| create_20_000_run_trees | 1.21 sec | 1.12 sec: 1.08x faster |
+-----------------------------------------------+----------+------------------------+
| dumps_dataclass_nested_50x100 | 25.9 ms | 24.2 ms: 1.07x faster |
+-----------------------------------------------+----------+------------------------+
| dumps_class_nested_py_branch_and_leaf_200x400 | 705 us | 661 us: 1.07x faster |
+-----------------------------------------------+----------+------------------------+
| create_5_000_run_trees | 625 ms | 588 ms: 1.06x faster |
+-----------------------------------------------+----------+------------------------+
| create_10_000_run_trees | 1.20 sec | 1.13 sec: 1.06x faster |
+-----------------------------------------------+----------+------------------------+
| dumps_class_nested_py_leaf_50x100 | 25.5 ms | 24.1 ms: 1.06x faster |
+-----------------------------------------------+----------+------------------------+
| Geometric mean | (ref) | 1.08x faster |
+-----------------------------------------------+----------+------------------------+
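The 1.08x figure in the last row can be sanity-checked from the per-benchmark ratios: speedup factors multiply, so the correct aggregate is the geometric mean, not the arithmetic mean. A quick verification (not part of the CI output):

```python
import math

# Per-benchmark speedups copied from the comparison table above.
speedups = [1.13, 1.11, 1.10, 1.08, 1.07, 1.07, 1.06, 1.06, 1.06]
geomean = math.prod(speedups) ** (1 / len(speedups))
print(f"{geomean:.2f}x faster")  # -> 1.08x faster, matching the table
```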