From 384e4423023282a121974b5d9f65dcfad0582d4b Mon Sep 17 00:00:00 2001
From: Jeffrey Ip <143328635+penguine-ip@users.noreply.github.com>
Date: Mon, 26 Feb 2024 23:58:44 +0800
Subject: [PATCH] Update deepeval.md (#195)

* Update deepeval.md

* Update deepeval.md
---
 integrations/deepeval.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/integrations/deepeval.md b/integrations/deepeval.md
index 203cdcd6..665de432 100644
--- a/integrations/deepeval.md
+++ b/integrations/deepeval.md
@@ -26,7 +26,7 @@ toc: true
 
 ## Overview
 
-[DeepEval](https://docs.confident-ai.com/) is an open source framework for model-based evaluation to evaluate your LLM applications by quantifying their performance on aspects such as faithfulness, answer relevancy, contextual recall etc. More information can be found on the [documentation page](https://docs.haystack.deepset.ai/v2.0/docs/deepevalevaluator).
+[DeepEval](https://github.com/confident-ai/deepeval) (by [Confident AI](https://www.confident-ai.com/)) is an open-source framework for the model-based evaluation of LLM applications, quantifying their performance on aspects such as faithfulness, answer relevancy, and contextual recall. More information can be found on the [documentation page](https://docs.haystack.deepset.ai/v2.0/docs/deepevalevaluator).
 
 ## Installation
 
@@ -44,6 +44,8 @@ Once installed, you will have access to a [DeepEvalEvaluator](https://docs.hayst
 - Contextual Recall
 - Contextual Relevance
 
+In addition to evaluation scores, DeepEval's evaluators provide a reason for each evaluation.
+
 ### DeepEvalEvaluator
 
 To use this integration for calculating model-based evaluation metrics, initialize a `DeepEvalEvaluator` with the metric name and metric input parameters:
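
For reference, the final context line of the second hunk ends where the documentation's initialization example begins; the example itself is outside the diff. A minimal sketch of the usage that sentence describes might look like the following. The import path, the `DeepEvalMetric` enum, the `metric_params` argument, and the `run` input names are assumptions based on the `deepeval-haystack` integration's public API, not something this patch confirms.

```python
# A minimal sketch, assuming the deepeval-haystack integration's public API.
# The import path, enum values, and parameter names below are assumptions,
# not confirmed by this patch.
from haystack_integrations.components.evaluators.deepeval import (
    DeepEvalEvaluator,
    DeepEvalMetric,
)

# Initialize the evaluator with a metric name and its input parameters.
evaluator = DeepEvalEvaluator(
    metric=DeepEvalMetric.FAITHFULNESS,
    metric_params={"model": "gpt-4"},
)

# Each metric expects specific run inputs; faithfulness takes the questions,
# the retrieved contexts, and the generated responses to score.
results = evaluator.run(
    questions=["When was the Eiffel Tower built?"],
    contexts=[["The Eiffel Tower was completed in 1889."]],
    responses=["The Eiffel Tower was completed in 1889."],
)

# Per the sentence this patch adds, each result should carry a reason
# (an explanation) alongside its score.
print(results["results"])
```

The required `run` inputs vary by metric; the names above mirror the faithfulness metric and should be checked against the linked documentation page.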