docs: make clear that eval via sdks is available in self-hosted
marcklingen committed May 13, 2024
1 parent f2ff844 commit 8ffaef7
Showing 1 changed file with 10 additions and 4 deletions.

pages/docs/scores/model-based-evals.mdx
@@ -11,7 +11,6 @@ There are two ways to run model-based evaluations in Langfuse:
1. [Via the Langfuse UI (beta)](#ui)
2. [Via the Python SDK or API](#sdk)

## Via Langfuse UI (beta) [#ui]

<AvailabilityBanner
@@ -68,7 +67,6 @@ Once the configuration is saved, Langfuse will start running the evals on the traces
![Langfuse](/images/docs/evals-log.png)
</Frame>

### See scores

Upon receiving new traces, navigate to the trace detail view to see the associated scores.
@@ -79,9 +77,17 @@ Upon receiving new traces, navigate to the trace detail view to see the associated scores.

</Steps>

## Via Python SDK [#sdk]

<AvailabilityBanner
  availability={{
    hobby: "full",
    pro: "full",
    team: "full",
    selfHosted: "full",
  }}
/>

You can run your own model-based evals on data in Langfuse via the Python SDK. This gives you full flexibility to run various eval libraries on your production data and discover which work well for your use case.
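
As a minimal sketch of this flow (assuming the v2 Python SDK — method names such as `fetch_traces` and `score` may differ in your SDK version, so check the SDK reference), you would fetch production traces, run your eval on each, and write the result back as a score:

```python
# Minimal sketch of an external eval pipeline (assumes the v2 Python SDK;
# verify method names like fetch_traces/score against the SDK reference).
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY from the environment

# 1. Fetch a batch of recent production traces
traces = langfuse.fetch_traces(limit=50).data

for trace in traces:
    # 2. Run your own eval on the trace output (placeholder logic)
    value = 1.0 if trace.output else 0.0  # replace with a real eval

    # 3. Write the result back as a score attached to the trace
    langfuse.score(
        trace_id=trace.id,
        name="my-custom-eval",
        value=value,
        comment="computed by an external eval pipeline",
    )

langfuse.flush()  # make sure all scores are sent before the script exits
```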

Popular libraries:
@@ -90,4 +96,4 @@ Popular libraries:
- Langchain Evaluators ([Cookbook](/guides/cookbook/evaluation_with_langchain))
- RAGAS for RAG applications ([Cookbook](/guides/cookbook/evaluation_of_rag_with_ragas))
- UpTrain evals ([Cookbook](/guides/cookbook/evaluation_with_uptrain))
- Whylabs Langkit
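
For a concrete illustration, the sketch below wires one of these libraries (a Langchain criteria evaluator) into Langfuse scores. The criterion, trace id, and score mapping are illustrative assumptions; the linked cookbooks contain complete, tested pipelines.

```python
# Illustrative only: evaluate one output with a Langchain criteria evaluator
# and attach the result to an existing Langfuse trace. The criterion and the
# trace id are assumptions; see the cookbooks above for complete pipelines.
from langchain.evaluation import load_evaluator
from langfuse import Langfuse

langfuse = Langfuse()

# By default this loads an OpenAI chat model as the judge (requires OPENAI_API_KEY)
evaluator = load_evaluator("criteria", criteria="conciseness")

result = evaluator.evaluate_strings(
    prediction="Paris is the capital of France.",
    input="What is the capital of France?",
)

langfuse.score(
    trace_id="some-trace-id",  # hypothetical id of the trace being evaluated
    name="conciseness",
    value=result["score"],  # 1 if the criterion is met, 0 otherwise
    comment=result.get("reasoning"),
)
langfuse.flush()
```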
