diff --git a/docs/evaluation/how_to_guides/annotation_queues.mdx b/docs/evaluation/how_to_guides/annotation_queues.mdx
index 07e53327..f42f70ba 100644
--- a/docs/evaluation/how_to_guides/annotation_queues.mdx
+++ b/docs/evaluation/how_to_guides/annotation_queues.mdx
@@ -73,6 +73,9 @@ To assign runs to an annotation queue, either:

 3. [Set up an automation rule](../../../observability/how_to_guides/monitoring/rules) that automatically assigns runs which pass a certain filter and sampling condition to an annotation queue.

+4. Select one or more experiments from the dataset page and click **Annotate**. In the resulting popup, either create a new queue or add the runs to an existing one:
+   ![](./static/annotate_experiment.png)
+
 :::tip

 It is often a very good idea to assign runs that have a certain user feedback score (eg thumbs up, thumbs down) from the application to an annotation queue. This way, you can identify and address issues that are causing user dissatisfaction.
diff --git a/docs/evaluation/how_to_guides/static/annotate_experiment.png b/docs/evaluation/how_to_guides/static/annotate_experiment.png
new file mode 100644
index 00000000..ac936b9d
Binary files /dev/null and b/docs/evaluation/how_to_guides/static/annotate_experiment.png differ
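For readers who want to script the workflow described in the tip (queueing runs with a particular feedback score for human review) rather than use the UI, the sketch below shows one way it might look with the LangSmith Python SDK. It is a minimal sketch, not part of this patch: it assumes the SDK's `Client.create_annotation_queue`, `Client.list_runs`, and `Client.add_runs_to_annotation_queue` methods as found in recent `langsmith` releases, and the `my-chat-app` project name and `user_score` feedback key are hypothetical placeholders; adjust the filter expression to your own feedback schema.

```python
# Illustrative sketch only: assumes a recent `langsmith` SDK; the project name
# "my-chat-app" and the feedback key "user_score" are placeholders.
from langsmith import Client

client = Client()

# Create an annotation queue for runs that received negative user feedback.
queue = client.create_annotation_queue(
    name="Thumbs-down runs",
    description="Runs with a user_score of 0, queued for human review",
)

# Find runs whose user feedback score indicates dissatisfaction.
runs = client.list_runs(
    project_name="my-chat-app",  # placeholder project name
    filter='and(eq(feedback_key, "user_score"), eq(feedback_score, 0))',
)

# Send those runs to the queue for annotation.
client.add_runs_to_annotation_queue(queue.id, run_ids=[run.id for run in runs])
```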