Update deepeval.md (#195)
* Update deepeval.md

* Update deepeval.md
penguine-ip authored Feb 26, 2024
1 parent 8d1d7a7 commit 384e442
Showing 1 changed file with 3 additions and 1 deletion.
4 changes: 3 additions & 1 deletion integrations/deepeval.md
@@ -26,7 +26,7 @@ toc: true

## Overview

-[DeepEval](https://docs.confident-ai.com/) is an open source framework for model-based evaluation to evaluate your LLM applications by quantifying their performance on aspects such as faithfulness, answer relevancy, contextual recall etc. More information can be found on the [documentation page](https://docs.haystack.deepset.ai/v2.0/docs/deepevalevaluator).
+[DeepEval](https://github.com/confident-ai/deepeval) (by [Confident AI](https://www.confident-ai.com/)) is an open source framework for model-based evaluation to evaluate your LLM applications by quantifying their performance on aspects such as faithfulness, answer relevancy, contextual recall etc. More information can be found on the [documentation page](https://docs.haystack.deepset.ai/v2.0/docs/deepevalevaluator).

## Installation

@@ -44,6 +44,8 @@ Once installed, you will have access to a [DeepEvalEvaluator](https://docs.hayst
- Contextual Recall
- Contextual Relevance

+In addition to evaluation scores, DeepEval's evaluators offer additional reasoning for each evaluation.

### DeepEvalEvaluator

To use this integration for calculating model-based evaluation metrics, initialize a `DeepEvalEvaluator` with the metric name and metric input parameters:
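A minimal sketch of what that initialization might look like is shown below. The import path, the `DeepEvalMetric` enum, and the `metric_params` keyword are assumptions based on the linked documentation page rather than anything shown in this diff, so the integration-specific calls are left commented out:

```python
# Hypothetical sketch; the import path, enum values, and parameter names
# below are assumptions and may differ from the actual integration:
#
# from haystack_integrations.components.evaluators.deepeval import (
#     DeepEvalEvaluator,
#     DeepEvalMetric,
# )
#
# evaluator = DeepEvalEvaluator(
#     metric=DeepEvalMetric.FAITHFULNESS,
#     metric_params={"model": "gpt-4"},
# )

# The evaluator is then run over matched lists of questions, retrieved
# contexts, and generated responses, one entry per evaluated example:
inputs = {
    "questions": ["Which is the tallest mountain on Earth?"],
    "contexts": [["Mount Everest is Earth's highest mountain above sea level."]],
    "responses": ["Mount Everest is the tallest mountain on Earth."],
}

# results = evaluator.run(**inputs)

# Sanity check: every input list covers the same number of examples.
assert len({len(v) for v in inputs.values()}) == 1
```

The important design point is that the metric and its parameters are fixed at initialization, so each `DeepEvalEvaluator` instance evaluates exactly one metric over the batch it receives.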
