Commit

docs: add llm as a judge
marcklingen committed Jul 12, 2024
1 parent 8b63d08 commit 4ebd84e
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion pages/docs/scores/model-based-evals.mdx
@@ -4,7 +4,7 @@ description: Langfuse (open source) helps run model-based evaluations on product

# Model-based Evaluations in Langfuse

-Model-based evaluations are a powerful tool to automate the evaluation of LLM applications integrated with Langfuse. With model-based evaluations, LLMs are used to score a specific session/trace/LLM-call in Langfuse on criteria such as correctness, toxicity, or hallucinations.
+Model-based evaluations (_LLM-as-a-judge_) are a powerful tool to automate the evaluation of LLM applications integrated with Langfuse. With model-based evaluations, LLMs are used to score a specific session/trace/LLM-call in Langfuse on criteria such as correctness, toxicity, or hallucinations.

There are two ways to run model-based evaluations in Langfuse:

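The hunk above frames model-based evaluations as LLM-as-a-judge: an LLM grades a session/trace/LLM-call and the verdict is stored as a score on it. As a minimal sketch of what this can look like via the Langfuse Python SDK — `langfuse.score` is the SDK method for attaching scores to a trace, while the `judge_correctness` helper, the judge prompt, the model choice, and the 0-1 scale are illustrative assumptions, not part of Langfuse's API:

```python
# Minimal LLM-as-a-judge sketch. Assumptions: the Langfuse Python SDK's
# `langfuse.score` method and the OpenAI client as the judge model;
# helper name, prompt, and scoring scale are illustrative.
from langfuse import Langfuse
from openai import OpenAI

langfuse = Langfuse()  # reads LANGFUSE_* credentials from the environment
judge = OpenAI()       # reads OPENAI_API_KEY from the environment

def judge_correctness(trace_id: str, question: str, answer: str) -> None:
    # Ask the judge model for a 0-1 correctness grade (hypothetical prompt).
    completion = judge.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "Rate the correctness of the answer to the question "
                    "from 0 (wrong) to 1 (correct). Reply with the number only."
                ),
            },
            {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
        ],
    )
    value = float(completion.choices[0].message.content.strip())

    # Attach the judge's verdict to the trace as a Langfuse score.
    langfuse.score(
        trace_id=trace_id,
        name="correctness",
        value=value,
        comment="LLM-as-a-judge (gpt-4o)",
    )
```

Note that the sketch assumes the judge replies with a bare number; real setups usually constrain the output format (e.g. via structured outputs) before parsing.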
