Commit
docs: many minor fixes (#776)
* new-pages-added

* docs: many minor fixes

* small fix
clemra authored Aug 16, 2024
1 parent 58ea46a commit c2ab901
Showing 22 changed files with 92 additions and 63 deletions.
2 changes: 1 addition & 1 deletion pages/blog/2024-06-monitoring-llm-security.mdx
@@ -1,5 +1,5 @@
---
title: Monitoring LLM Security
title: Monitoring LLM Security & Reducing LLM Risks
date: 2024/06/06
description: LLM security requires effective run-time checks and ex-post monitoring and evaluation. Learn how to use Langfuse together with popular security libraries to prevent common security risks.
tag: showcase
4 changes: 2 additions & 2 deletions pages/docs/data-security-privacy.mdx
@@ -1,10 +1,10 @@
---
description: Langfuse is enterprise-ready with its data privacy and security measures and controls.
description: Langfuse is enterprise-ready with its data privacy and security measures and controls. Langfuse is SOC2 Type 2 and ISO 27001 certified.
---

# Data Privacy and Security

At Langfuse, we prioritize data privacy and security. We understand that the data you entrust to us is a vital asset to your business, and we treat it with the utmost care.
**At Langfuse, we prioritize data privacy and security.** We understand that the data you entrust to us is a vital asset to your business, and we treat it with the utmost care.

We take active steps to demonstrate our commitment to data security and privacy, such as annual SOC2 Type 2 and ISO 27001 audits.

4 changes: 2 additions & 2 deletions pages/docs/demo.mdx
@@ -1,8 +1,8 @@
---
description: Try Langfuse in action with a live demo project. Interact with the chatbot to see new traces and user feedback (👍/👎) in Langfuse.
description: Try Langfuse in action with a live demo project for free. Interact with the chatbot to see new traces and user feedback (👍/👎) in Langfuse. No credit card required.
---

# Interactive Demo
# Interactive Langfuse Demo

import { Button } from "@/components/ui/button";

2 changes: 1 addition & 1 deletion pages/docs/deployment/self-host.mdx
@@ -2,7 +2,7 @@
description: Self-host Langfuse in your infrastructure using Docker.
---

# Self-Hosting Guide
# Self-Hosting Langfuse

[![Docker Image](https://img.shields.io/badge/docker-langfuse-blue?logo=Docker&logoColor=white&style=flat-square)](https://github.com/langfuse/langfuse/pkgs/container/langfuse)

2 changes: 1 addition & 1 deletion pages/docs/experimentation.mdx
@@ -1,5 +1,5 @@
---
description: Langfuse helps to rapidly iterate on your LLM application by providing insights into the effect of experiments on costs, latencies and quality.
description: Langfuse allows for rapid iteration on LLM applications by providing insights into the effect of experiments such as A/B tests on LLM costs, latencies and quality.
---

# Experimentation (releases & versions)
2 changes: 1 addition & 1 deletion pages/docs/fine-tuning.mdx
@@ -1,5 +1,5 @@
---
description: Langfuse traces all your development and production LLM calls. You can export this data to train or fine-tune models at any time.
description: You can export your Langfuse observability data to easily train or fine-tune models.
---

# Fine-tuning
2 changes: 1 addition & 1 deletion pages/docs/integrations/dify.mdx
@@ -2,7 +2,7 @@
description: Open source observability for Dify applications. Automatically capture detailed traces and metrics for every request.
---

# Dify Integration
# Dify - Observability & Metrics for your LLM apps

**Dify** ([GitHub](https://github.com/langgenius/dify)) is an open-source LLM app development platform which is natively integrated with Langfuse. With the native integration, you can use Dify to quickly create complex LLM applications and then use Langfuse to monitor and improve them.

2 changes: 1 addition & 1 deletion pages/docs/integrations/haystack/get-started.mdx
@@ -1,5 +1,5 @@
---
title: OSS Observability for Haystack
title: Open Source Observability for Haystack
description: Langfuse integration to easily observe and monitor pipelines built with Haystack, an open-source Python framework developed by deepset.
---

24 changes: 12 additions & 12 deletions pages/docs/integrations/langchain/tracing.mdx
@@ -1,5 +1,5 @@
---
title: OSS Observability for LangChain
title: Open Source Observability for LangChain
description: Open source tracing and monitoring for your LangChain application. Python and JS/TS. Automatically capture rich traces and metrics and evaluate outputs.
ogImage: /images/docs/langchain_og.png
---
@@ -9,7 +9,7 @@ import GetStartedLangchainPythonEnv from "@/components-mdx/get-started-langchain
import GetStartedLangchainJsArgs from "@/components-mdx/get-started-langchain-js-constructor-args.mdx";
import GetStartedLangchainJsEnv from "@/components-mdx/get-started-langchain-js-env.mdx";

# Tracing for Langchain (Python & JS/TS)
# Observability & Tracing for Langchain (Python & JS/TS)

[Langfuse Tracing](/docs/tracing) integrates with Langchain using Langchain Callbacks ([Python](https://python.langchain.com/docs/modules/callbacks/), [JS](https://js.langchain.com/docs/modules/callbacks/)). The Langfuse SDK then automatically creates a nested trace for every run of your Langchain application. This allows you to log, analyze and debug your LangChain application.

@@ -20,7 +20,7 @@ import GetStartedLangchainJsEnv from "@/components-mdx/get-started-langchain-js-
className="max-w-2xl"
/>

## Add Langfuse to your Langchain application
## Add Langfuse to your Langchain Application

You can configure the integration via (1) constructor arguments or (2) environment variables. Get your Langfuse credentials from the Langfuse dashboard.

@@ -104,15 +104,15 @@ We are interested in your feedback! Raise an issue on [GitHub](/ideas) to reques

When initializing the Langfuse handler, you can pass the following **optional** arguments to use more advanced features.

| Python | JS/TS | Type | Description |
| ------------ | ----------- | ------- | ----------------------------------------------------------------------------------------------- |
| `user_id` | `userId` | string | The current [user](/docs/tracing-features/users). |
| `session_id` | `sessionId` | string | The current [session](/docs/tracing-features/sessions). |
| `release` | `release` | string | The release of your application. See [experimentation docs](/docs/experimentation) for details. |
| `version` | `version` | string | The version of your application. See [experimentation docs](/docs/experimentation) for details. |
| `trace_name` | | string | Customize the name of the created traces. Defaults to name of chain. |
| `enabled` | `enabled` | boolean | Enable or disable the Langfuse integration. Defaults to `true`. |
| `sample_rate` | `-` | float | [Sample rate](/docs/tracing-features/sampling) for tracing. |
| Python | JS/TS | Type | Description |
| ------------- | ----------- | ------- | ----------------------------------------------------------------------------------------------- |
| `user_id` | `userId` | string | The current [user](/docs/tracing-features/users). |
| `session_id` | `sessionId` | string | The current [session](/docs/tracing-features/sessions). |
| `release` | `release` | string | The release of your application. See [experimentation docs](/docs/experimentation) for details. |
| `version` | `version` | string | The version of your application. See [experimentation docs](/docs/experimentation) for details. |
| `trace_name` | | string | Customize the name of the created traces. Defaults to name of chain. |
| `enabled` | `enabled` | boolean | Enable or disable the Langfuse integration. Defaults to `true`. |
| `sample_rate` | `-` | float | [Sample rate](/docs/tracing-features/sampling) for tracing. |
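As a sketch of how these optional arguments come together (the exact import path and constructor signature depend on your Langfuse Python SDK version, so treat the commented calls as assumptions):

```python
# Collect the optional handler arguments from the table above.
handler_kwargs = {
    "user_id": "user-123",        # ties traces to a user
    "session_id": "session-abc",  # groups traces into a session
    "release": "v1.4.0",          # release of your application
    "version": "prompt-v2",       # version of your application
    "trace_name": "qa-chain",     # custom trace name (defaults to chain name)
    "enabled": True,              # toggle the integration
    "sample_rate": 0.5,           # trace roughly 50% of runs
}

# With langfuse installed and credentials configured, the handler would be
# created and passed to the chain via Langchain's callbacks config:
# from langfuse.callback import CallbackHandler  # import path is an assumption
# handler = CallbackHandler(**handler_kwargs)
# chain.invoke({"input": "Hello"}, config={"callbacks": [handler]})
```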

### Interoperability with Langfuse SDKs [#interoperability]

20 changes: 19 additions & 1 deletion pages/docs/integrations/litellm/tracing.mdx
@@ -1,5 +1,5 @@
---
title: OSS Observability for LiteLLM
title: Observability for LiteLLM
description: Open source observability for LiteLLM via the native integration. Automatically capture detailed traces and metrics for every request.
---

@@ -205,3 +205,21 @@ chat(messages)
### Customize Langfuse Python SDK via Environment Variables

To customize Langfuse settings, use the [Langfuse environment variables](/docs/sdk/python/low-level-sdk#initialize-client). These will be picked up by the LiteLLM SDK on initialization as it uses the Langfuse Python SDK under the hood.

### Learn more about LiteLLM

#### What is LiteLLM?

[LiteLLM](https://litellm.ai) is an open source proxy server to manage auth, load balancing, and spend tracking across more than 100 LLMs. LiteLLM has grown into a popular utility for developers working with LLMs and is widely used as an abstraction layer over provider APIs.

#### Is LiteLLM an Open Source project?

Yes, LiteLLM is open source. The majority of its code is permissively MIT-licensed. You can find the open source LiteLLM repository on [GitHub](https://github.com/BerriAI/litellm).

#### Can I use LiteLLM with Ollama and local models?

Yes, you can use LiteLLM with Ollama and other local models. LiteLLM supports all models from Ollama, and it provides a Docker image for an OpenAI API-compatible server for local LLMs like llama2, mistral, and codellama.

#### How does LiteLLM simplify API calls across multiple LLM providers?

LiteLLM provides a unified interface for calling models such as OpenAI, Anthropic, Cohere, Ollama and others. This means you can call any supported model using a consistent method, such as `completion(model, messages)`, and expect a uniform response format. The library does away with the need for if/else statements or provider-specific code, making it easier to manage and debug LLM interactions in your application.
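A minimal sketch of the uniform call shape (the model names are assumptions — swap in models your API keys can access; the commented `litellm.completion` calls require the library and credentials):

```python
# The same request shape works for every provider LiteLLM supports.
def build_request(model: str, prompt: str) -> dict:
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

requests = [
    build_request("gpt-4o", "Say hi"),
    build_request("claude-3-haiku-20240307", "Say hi"),
    build_request("ollama/llama2", "Say hi"),
]

# With litellm installed and credentials configured:
# import litellm
# responses = [litellm.completion(**r) for r in requests]
# Each response exposes the same OpenAI-style shape, e.g.
# responses[0].choices[0].message.content
```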
22 changes: 11 additions & 11 deletions pages/docs/integrations/llama-index/get-started.mdx
@@ -1,5 +1,5 @@
---
title: OSS Observability for LlamaIndex
title: Open Source Observability for LlamaIndex
description: Open source observability for LlamaIndex. Automatically capture detailed traces and metrics for every request of your RAG application.
---

@@ -86,16 +86,16 @@ Learn more about queuing and batching of events [here](/docs/tracing).

You can update trace parameters at any time to add additional context to a trace, such as a user ID, session ID, or tags. See the [Python SDK Trace documentation](/docs/sdk/python#traces) for more information. All _subsequent_ traces will include these set parameters.

| Property | Description |
| ------------ | ------------------------------------------------------------------------- |
| `name` | Identify a specific type of trace, e.g. a use case or functionality. |
| `metadata` | Additional information that you want to see in Langfuse. Can be any JSON. |
| `session_id` | The current [session](/docs/tracing-features/sessions). |
| `user_id` | The current [user_id](/docs/tracing-features/users). |
| `tags` | [Tags](/docs/tracing-features/tags) to categorize and filter traces. |
| `version` | The specified version to trace [experiments](/docs/experimentation). |
| `release` | The specified release to trace [experiments](/docs/experimentation). |
| `sample_rate`| [Sample rate](/docs/tracing-features/sampling) for tracing. |
| Property | Description |
| ------------- | ------------------------------------------------------------------------- |
| `name` | Identify a specific type of trace, e.g. a use case or functionality. |
| `metadata` | Additional information that you want to see in Langfuse. Can be any JSON. |
| `session_id` | The current [session](/docs/tracing-features/sessions). |
| `user_id` | The current [user_id](/docs/tracing-features/users). |
| `tags` | [Tags](/docs/tracing-features/tags) to categorize and filter traces. |
| `version` | The specified version to trace [experiments](/docs/experimentation). |
| `release` | The specified release to trace [experiments](/docs/experimentation). |
| `sample_rate` | [Sample rate](/docs/tracing-features/sampling) for tracing. |

```python {11-15}
from llama_index.core import Settings
2 changes: 1 addition & 1 deletion pages/docs/integrations/vercel-ai-sdk.mdx
@@ -4,7 +4,7 @@ description: Open source observability for Vercel AI SDK using its native OpenTe

import EnvJS from "@/components-mdx/env-js.mdx";

# Vercel AI SDK Integration
# Vercel AI SDK - Observability & Analytics

<Callout type="info">
Telemetry is an experimental feature of the AI SDK and might change in the
6 changes: 3 additions & 3 deletions pages/docs/open-source.mdx
@@ -13,7 +13,7 @@ Langfuse is open source for the following reasons:

Learn more about what's next for Langfuse on our [roadmap](/docs/roadmap).

## License
## Langfuse License

Langfuse is licensed under an MIT license, with the exception of the `/ee` and `/web/src/ee` folders of the repository. These directories contain features that are commercially licensed. They are available on Langfuse Cloud and in the Enterprise Edition of Langfuse Self-Hosted. See [License](https://github.com/langfuse/langfuse/blob/main/LICENSE) for more details.

@@ -25,14 +25,14 @@ The Langfuse core product and all Langfuse-maintained integrations and SDKs are
- Do I risk executing EE code when using Langfuse without a license? No, the EE features are only available when you have a license. You do not risk using EE features if you self-host Langfuse without a license, unless you modify the codebase to circumvent the checks.
- Do I need to care about the difference between MIT and EE when using Langfuse Cloud? No, depending on which tier of Langfuse Cloud you are on, you will have access to features that are both MIT and EE licensed.

## Repositories
## Langfuse Repositories

- Langfuse Server (UI and API): [`langfuse/langfuse`](https://github.com/langfuse/langfuse)
- Langfuse Python SDK and integrations: [`langfuse/langfuse-python`](https://github.com/langfuse/langfuse-python)
- JS SDK and integrations: [`langfuse/langfuse-js`](https://github.com/langfuse/langfuse-js)
- Docs: [`langfuse/langfuse-docs`](https://github.com/langfuse/langfuse-docs)

## Self-host vs Langfuse Cloud
## Self-hosting Langfuse vs. Langfuse Cloud

The Langfuse team provides Langfuse Cloud as a managed solution to simplify the initial setup of Langfuse and to minimize the operational overhead of maintaining high availability in production. Get started for free on: https://cloud.langfuse.com.

4 changes: 2 additions & 2 deletions pages/docs/playground.mdx
@@ -13,7 +13,7 @@ description: Test, iterate, and compare different prompts and models within the
}}
/>

Test and iterate on your prompts directly in the Langfuse LLM Playground. Tweak the prompt and the model parameters to see how different models respond to these changed inputs. This allows you to quickly iterate on your prompts and optimize them for the best results in your LLM app without having to switch between tools or use any code.
Test and iterate on your prompts directly in the Langfuse Prompt Playground. Tweak the prompt and the model parameters to see how different models respond to these changed inputs. This allows you to quickly iterate on your prompts and optimize them for the best results in your LLM app without having to switch between tools or use any code.

<CloudflareVideo
videoId="3cfab665df39518f15fc18813cf82e3f"
@@ -42,7 +42,7 @@ You can either start from scratch or jump into the playground from an existing p
className="max-w-3xl"
/>

## Supported models
## OpenAI Playground & Anthropic Playground

Currently the playground supports the following models:

29 changes: 20 additions & 9 deletions pages/docs/prompts/get-started.mdx
@@ -4,7 +4,11 @@ description: Manage and version your prompts in Langfuse (open source). When ret

# Prompt Management

Use Langfuse to effectively **manage** and **version** your prompts. Langfuse prompt management is basically a **Prompt CMS** (Content Management System).
Use Langfuse to effectively **manage** and **version** your prompts. Langfuse prompt management is a **Prompt CMS** (Content Management System).

## What is prompt management?

**Prompt management is a systematic approach to storing, versioning and retrieving prompts in LLM applications.** Key aspects of prompt management include version control, decoupling prompts from code, monitoring, logging and optimizing prompts as well as integrating prompts with the rest of your application and tool stack.

## Why use prompt management?

@@ -18,7 +22,7 @@ Typical benefits of using a CMS apply here:

Platform benefits:

- Track performance of prompt versions in Langfuse Tracing.
- Track performance of prompt versions in [Langfuse Tracing](/docs/tracing).

## Langfuse prompt object

@@ -249,15 +253,22 @@ const prompt = await langfuse.getPrompt("movie-critic", undefined, {
});

// Number of retries on fetching prompts from the server. Default is 2.
const promptWithMaxRetries = await langfuse.getPrompt("movie-critic", undefined, {
maxRetries: 5,
});
const promptWithMaxRetries = await langfuse.getPrompt(
"movie-critic",
undefined,
{
maxRetries: 5,
}
);

// Timeout per call to the Langfuse API in milliseconds. Default is 10 seconds.
const promptWithFetchTimeout = await langfuse.getPrompt("movie-critic", undefined, {
fetchTimeoutMs: 5000,
});

const promptWithFetchTimeout = await langfuse.getPrompt(
"movie-critic",
undefined,
{
fetchTimeoutMs: 5000,
}
);
```

Attributes
4 changes: 2 additions & 2 deletions pages/docs/scores/annotation.mdx
@@ -2,7 +2,7 @@
description: Annotate traces and observations with scores in the Langfuse UI to record human-in-the-loop evaluations.
---

# Annotation in UI
# Human Annotation for LLM apps

Collaborate with your team and add [`scores`](/docs/scores) via the Langfuse UI. You can add scores to both traces and observations within a trace.

@@ -35,7 +35,7 @@ Your configs are now available for annotation of traces and observations.
![Create config](/images/docs/score-configs.gif)
</Frame>

### Annotate a trace or observation
### Data Labelling on LLM traces or observations

To annotate a trace or observation:

2 changes: 1 addition & 1 deletion pages/docs/scores/model-based-evals.mdx
@@ -1,5 +1,5 @@
---
description: Langfuse (open source) helps run model-based evaluations on production data to monitor and improve LLM applications.
description: Langfuse (open source) helps run model-based evaluations (llm-as-a-judge) on production data to monitor and improve LLM applications.
---

# Model-based Evaluations in Langfuse
4 changes: 2 additions & 2 deletions pages/docs/scores/overview.mdx
@@ -1,6 +1,6 @@
---
title: Evaluation of LLM Applications
description: With Langfuse you can capture all your LLM evaluations in one place. You can combine a variety of different evaluation metrics like model-based evaluations, manual annotations or user feedback. This allows you to measure quality, tonality, factual accuracy, completeness, and other dimensions of your LLM application.
description: With Langfuse you can capture all your LLM evaluations in one place. You can combine a variety of different evaluation metrics like model-based evaluations (LLM-as-a-Judge), manual annotations or user feedback. This allows you to measure quality, tonality, factual accuracy, completeness, and other dimensions of your LLM application.
---

# LLM Evaluation & Scoring
@@ -11,7 +11,7 @@ Evaluation is a critical aspect of developing and deploying LLM applications. Us

Langfuse provides a flexible [scoring system](/docs/scores/getting-started) to capture all your evaluations in one place and make them actionable.

## Why is Evaluation Important?
## Why are LLM Evals Important?

LLM evaluation is crucial for improving the accuracy and robustness of language models, ultimately enhancing the user experience and trust in your AI application. It helps detect hallucinations and measure performance across diverse tasks. A structured evaluation in production is vital for continuously improving your application.

2 changes: 1 addition & 1 deletion pages/docs/scores/user-feedback.mdx
@@ -4,7 +4,7 @@ description: Capture feedback from users of your LLM application to measure over

import FeedbackPreview from "@/components/feedbackPreview";

# User Feedback
# User Feedback in LLM apps

User feedback is a great source to evaluate the quality of an LLM app's output. In Langfuse, feedback is collected as a [`score`](/docs/scores) and attached to an execution trace or an individual LLM generation.
