More clearly state the inputs to summary evals (#179)
samnoyes authored Apr 18, 2024
2 parents 84895c1 + 06de0c9 · commit 9772f3b
Showing 1 changed file with 1 addition and 1 deletion: docs/evaluation/faq/custom-evaluators.mdx
@@ -329,7 +329,7 @@ evaluate(

## Summary Evaluators

- Some metrics can only be defined on the entire experiment level as opposed to the individual runs of the experiment. For example, you may want to compute the f1 score of a classifier across all runs in an experiment kicked off from a dataset. These are called `summary_evaluators`.
+ Some metrics can only be defined on the entire experiment level as opposed to the individual runs of the experiment. For example, you may want to compute the f1 score of a classifier across all runs in an experiment kicked off from a dataset. These are called `summary_evaluators`. Instead of taking in a single `Run` and `Example`, these evaluators take a list of each.

```python
from typing import List
# ... (rest of the example collapsed in the diff view)
```
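
The doc's own code example is collapsed in the diff above. As a rough illustration of the signature the new sentence describes (a list of `Run` and a list of `Example`), a summary evaluator computing an experiment-level F1 score might look like the sketch below. The function name, the `output`/`label` keys, and the "yes"/"no" label values are illustrative assumptions, not part of the committed docs.

```python
from typing import List

from langsmith.schemas import Example, Run


def f1_score_summary_evaluator(runs: List[Run], examples: List[Example]) -> dict:
    """Sketch of an experiment-level metric: F1 for a binary classifier.

    Assumes each run's outputs hold the predicted label under "output" and each
    example's outputs hold the reference label under "label", with "yes"/"no" values.
    """
    tp = fp = fn = 0
    for run, example in zip(runs, examples):
        predicted = (run.outputs or {}).get("output")   # assumed prediction key
        expected = (example.outputs or {}).get("label")  # assumed reference key
        if predicted == "yes" and expected == "yes":
            tp += 1
        elif predicted == "yes" and expected == "no":
            fp += 1
        elif predicted == "no" and expected == "yes":
            fn += 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"key": "f1_score", "score": f1}
```

In LangSmith, a function like this is passed to `evaluate(...)` through the `summary_evaluators` argument (for example, `summary_evaluators=[f1_score_summary_evaluator]`) rather than through the per-run `evaluators` list.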
