From 4ebd84e0ec0e5a177311d2ed86ced9723f5f6a84 Mon Sep 17 00:00:00 2001
From: Marc Klingen
Date: Fri, 12 Jul 2024 11:58:48 +0200
Subject: [PATCH] docs: add llm as a judge

---
 pages/docs/scores/model-based-evals.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pages/docs/scores/model-based-evals.mdx b/pages/docs/scores/model-based-evals.mdx
index ed172aa1e..3c213b598 100644
--- a/pages/docs/scores/model-based-evals.mdx
+++ b/pages/docs/scores/model-based-evals.mdx
@@ -4,7 +4,7 @@ description: Langfuse (open source) helps run model-based evaluations on product
 
 # Model-based Evaluations in Langfuse
 
-Model-based evaluations are a powerful tool to automate the evaluation of LLM applications integrated with Langfuse. With model-based evalutions, LLMs are used to score a specific session/trace/LLM-call in Langfuse on criteria such as correctness, toxicity, or hallucinations.
+Model-based evaluations (_LLM-as-a-judge_) are a powerful tool to automate the evaluation of LLM applications integrated with Langfuse. With model-based evaluations, LLMs are used to score a specific session/trace/LLM-call in Langfuse on criteria such as correctness, toxicity, or hallucinations.
 
 There are two ways to run model-based evaluations in Langfuse:
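
For illustration, below is a minimal LLM-as-a-judge sketch of the workflow this docs change describes: an LLM rates an output on a criterion, and the verdict is attached to a trace as a Langfuse score. It assumes the Langfuse v2 Python SDK's `score` method and the OpenAI chat completions API; the trace ID, judge prompt, judged output, and model name are illustrative placeholders, not part of the patch.

```python
# Hypothetical sketch of LLM-as-a-judge scoring with Langfuse.
# Assumes LANGFUSE_* and OPENAI_API_KEY environment variables are set;
# trace ID, prompt, output, and model name below are placeholders.
from langfuse import Langfuse
from openai import OpenAI

langfuse = Langfuse()
openai_client = OpenAI()

trace_id = "trace-id-to-evaluate"  # placeholder: an existing trace in Langfuse
output_to_judge = "Paris is the capital of France."  # placeholder output

# Ask the judge model to rate correctness on a 0-1 scale.
judge_response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": (
                "Rate the correctness of the following answer on a scale "
                "from 0 (wrong) to 1 (correct). Reply with the number only.\n\n"
                f"Answer: {output_to_judge}"
            ),
        }
    ],
)
correctness = float(judge_response.choices[0].message.content.strip())

# Attach the judge's verdict to the trace as a Langfuse score.
langfuse.score(
    trace_id=trace_id,
    name="correctness",
    value=correctness,
    comment="LLM-as-a-judge (model-based evaluation)",
)
```

In practice the judge prompt would also include the user's question and any reference answer, and the score name would match the criterion being evaluated (e.g. `toxicity`, `hallucination`).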