From a60049227a6c252c58d04aae7ee86c2ea85be042 Mon Sep 17 00:00:00 2001
From: SN <6432132+samnoyes@users.noreply.github.com>
Date: Tue, 17 Dec 2024 18:27:36 -0800
Subject: [PATCH] Update docs/evaluation/how_to_guides/annotation_queues.mdx

Co-authored-by: Tanushree <87711021+tanushree-sharma@users.noreply.github.com>
---
 docs/evaluation/how_to_guides/annotation_queues.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/evaluation/how_to_guides/annotation_queues.mdx b/docs/evaluation/how_to_guides/annotation_queues.mdx
index 429615e2..f42f70ba 100644
--- a/docs/evaluation/how_to_guides/annotation_queues.mdx
+++ b/docs/evaluation/how_to_guides/annotation_queues.mdx
@@ -73,7 +73,7 @@ To assign runs to an annotation queue, either:
 3. [Set up an automation rule](../../../observability/how_to_guides/monitoring/rules) that automatically assigns runs which pass a certain filter and sampling condition to an annotation queue.
 
-4. Select one or multiple experiments from the dataset page that you want a human to review and click "Annotate". From the resulting popup, you may either create a new queue or add to an existing one:
+4. Select one or multiple experiments from the dataset page and click **Annotate**. From the resulting popup, you may either create a new queue or add the runs to an existing one:
 
    ![](./static/annotate_experiment.png)
 
 :::tip
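The patch above only adjusts the UI-focused instructions for adding experiment runs to a queue. For context, the same workflow can be driven programmatically; below is a minimal sketch, assuming the LangSmith Python SDK's `create_annotation_queue`, `list_runs`, and `add_runs_to_annotation_queue` helpers, with a hypothetical project name and queue name rather than anything taken from the docs page itself.

```python
from langsmith import Client

client = Client()

# Create an annotation queue to collect runs for human review
# (the name and description here are hypothetical).
queue = client.create_annotation_queue(
    name="Experiment review",
    description="Runs selected from a recent experiment for manual annotation",
)

# Fetch the root runs from an experiment project (hypothetical project name).
runs = client.list_runs(project_name="my-experiment", is_root=True)

# Queue those runs so annotators can work through them in the UI.
client.add_runs_to_annotation_queue(queue.id, run_ids=[run.id for run in runs])
```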