From 06de0c9e8fdcd9ec6c96dbd0f5fe186df7dfe12b Mon Sep 17 00:00:00 2001
From: SN <6432132+samnoyes@users.noreply.github.com>
Date: Thu, 18 Apr 2024 09:21:46 -0700
Subject: [PATCH] More clearly state the inputs to summary evals

---
 docs/evaluation/faq/custom-evaluators.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/evaluation/faq/custom-evaluators.mdx b/docs/evaluation/faq/custom-evaluators.mdx
index 1bf09fd5..81e521bc 100644
--- a/docs/evaluation/faq/custom-evaluators.mdx
+++ b/docs/evaluation/faq/custom-evaluators.mdx
@@ -329,7 +329,7 @@ evaluate(
 
 ## Summary Evaluators
 
-Some metrics can only be defined on the entire experiment level as opposed to the individual runs of the experiment. For example, you may want to compute the f1 score of a classifier across all runs in an experiment kicked off from a dataset. These are called `summary_evaluators`.
+Some metrics can only be defined on the entire experiment level as opposed to the individual runs of the experiment. For example, you may want to compute the f1 score of a classifier across all runs in an experiment kicked off from a dataset. These are called `summary_evaluators`. Instead of taking in a single `Run` and `Example`, these evaluators take a list of each.
 
 ```python
 from typing import List
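
For reference, a minimal sketch of the kind of summary evaluator the added sentence describes: it receives a list of `Run` objects and a list of `Example` objects and returns a single experiment-level metric. The function name, the `"label"` output key, and the returned `{"key": ..., "score": ...}` shape are illustrative assumptions, not part of the patch.

```python
from typing import List

from langsmith.schemas import Example, Run


def f1_score_summary_evaluator(runs: List[Run], examples: List[Example]) -> dict:
    """Compute an experiment-level F1 score across all runs.

    Assumes each run's and example's outputs store a boolean under "label"
    (an assumption for this sketch, not from the patch).
    """
    true_positives = false_positives = false_negatives = 0
    for run, example in zip(runs, examples):
        predicted = run.outputs["label"]
        expected = example.outputs["label"]
        if predicted and expected:
            true_positives += 1
        elif predicted and not expected:
            false_positives += 1
        elif not predicted and expected:
            false_negatives += 1

    if true_positives == 0:
        return {"key": "f1_score", "score": 0.0}

    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return {"key": "f1_score", "score": 2 * precision * recall / (precision + recall)}
```

Such a function would be passed via the `summary_evaluators` argument of `evaluate(...)`, alongside any per-run `evaluators`, so it runs once over the whole experiment rather than once per run.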