diff --git a/integrations/deepeval.md b/integrations/deepeval.md
index 203cdcd6..665de432 100644
--- a/integrations/deepeval.md
+++ b/integrations/deepeval.md
@@ -26,7 +26,7 @@ toc: true
 
 ## Overview
 
-[DeepEval](https://docs.confident-ai.com/) is an open source framework for model-based evaluation to evaluate your LLM applications by quantifying their performance on aspects such as faithfulness, answer relevancy, contextual recall etc. More information can be found on the [documentation page](https://docs.haystack.deepset.ai/v2.0/docs/deepevalevaluator).
+[DeepEval](https://github.com/confident-ai/deepeval) (by [Confident AI](https://www.confident-ai.com/)) is an open source framework for model-based evaluation to evaluate your LLM applications by quantifying their performance on aspects such as faithfulness, answer relevancy, contextual recall etc. More information can be found on the [documentation page](https://docs.haystack.deepset.ai/v2.0/docs/deepevalevaluator).
 
 ## Installation
 
@@ -44,6 +44,8 @@ Once installed, you will have access to a [DeepEvalEvaluator](https://docs.hayst
 - Contextual Recall
 - Contextual Relevance
 
+In addition to evaluation scores, DeepEval's evaluators offer additional reasoning for each evaluation.
+
 ### DeepEvalEvaluator
 
 To use this integration for calculating model-based evaluation metrics, initialize a `DeepEvalEvaluator` with the metric name and metric input parameters:
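
The hunk above ends by describing how a `DeepEvalEvaluator` is initialized with a metric name and metric input parameters. The following is a minimal, illustrative sketch of that usage and is not part of the diff: the import path, the `DeepEvalMetric` enum, the `metric_params` argument, and the `questions`/`contexts`/`responses` run inputs are assumptions based on the Haystack documentation page linked in the overview, so verify them there before relying on them.

```python
# Hypothetical sketch: initialize a DeepEvalEvaluator for a faithfulness check
# and run it inside a Haystack pipeline. The import path, the DeepEvalMetric
# enum, and the metric_params / run() argument names are assumptions taken from
# the linked documentation, not from the diff itself.
from haystack import Pipeline
from haystack_integrations.components.evaluators.deepeval import (
    DeepEvalEvaluator,
    DeepEvalMetric,
)

# Metric name plus its input parameters, as described in the section above.
evaluator = DeepEvalEvaluator(
    metric=DeepEvalMetric.FAITHFULNESS,
    metric_params={"model": "gpt-4"},
)

pipeline = Pipeline()
pipeline.add_component("evaluator", evaluator)

results = pipeline.run(
    {
        "evaluator": {
            "questions": ["Which is the most popular global sport?"],
            "contexts": [["Football is the world's most popular sport."]],
            "responses": ["Football is the most popular sport."],
        }
    }
)
# Each result carries a score and, per the line added in the diff, reasoning.
print(results["evaluator"]["results"])
```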