Commit

docs - faq: add chatbot analytics seo post (#829)
Co-authored-by: Clemo <[email protected]>
jannikmaierhoefer and clemra authored Sep 30, 2024
1 parent 64f3d16 commit 9dc982b
Showing 2 changed files with 111 additions and 0 deletions.
pages/faq/all/chatbot-analytics.mdx (111 additions, 0 deletions)
@@ -0,0 +1,111 @@
---
title: Chatbot Analytics - How to Improve your AI Chatbot
tags: [product]
---

# Chatbot Analytics: How to Improve your AI Chatbot with Langfuse

<Frame border fullWidth>
![Chatbot Analytics](/images/blog/faq/chatbot-analytics.png)
</Frame>

Monitoring and testing AI chatbots is important due to the **unique challenges** faced while building LLM applications. Unlike traditional software engineering, LLM-based applications involve **complex, repeated, and chained calls** to foundation models, making debugging difficult.

[Langfuse](https://langfuse.com) is an open-source tool that simplifies this by capturing the full context of an AI chatbot application, allowing developers to trace and control the flow of interactions.

Langfuse supports various integrations, including [OpenAI](/docs/integrations/openai/python/get-started), [Langchain](/docs/integrations/langchain/tracing), [LlamaIndex](/docs/integrations/llama-index/get-started), and [more](/docs/integrations/overview).

**In this guide, we will cover how to:**

1. Develop robust AI chatbots
2. Monitor chatbots in production
3. Test chatbots for safety and performance

### 1. Develop Robust AI Chatbots

**Instrument your Application:**

When developing an AI chatbot, it is helpful to instrument your application to capture all chatbot interactions. Instrumentation lets you monitor and debug the chatbot in real time: by tracking every LLM call and the surrounding application logic, you can see exactly what happens on each request.

Langfuse integrates with various platforms such as [OpenAI, Langchain, LlamaIndex and LiteLLM](/docs/integrations/overview), providing flexibility for different use cases. Additionally, the [Langfuse API](https://api.reference.langfuse.com) enables you to tailor the monitoring to your specific needs.
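
For example, if your chatbot is built with LangChain, a single callback handler is enough to trace every step of a chain. The snippet below is a minimal sketch, not a full application: the model, prompt, and question are placeholders, and it assumes the `OPENAI_API_KEY` and `LANGFUSE_*` environment variables are set.

```python
from langfuse.callback import CallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

langfuse_handler = CallbackHandler()  # picks up Langfuse credentials from the environment

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful support chatbot."),
    ("user", "{question}"),
])
chain = prompt | ChatOpenAI(model="gpt-4o-mini")  # placeholder model

# Passing the handler traces every step of the chain in Langfuse
answer = chain.invoke(
    {"question": "How do I reset my password?"},
    config={"callbacks": [langfuse_handler]},
)
print(answer.content)
```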

**Analyze Traces:**

The Langfuse UI helps you inspect and debug complex logs and user sessions, so you can follow the flow of interactions and spot issues as they arise. Langfuse [tracing](/docs/tracing) provides detail on individual chatbot generations, while [sessions](/docs/tracing-features/sessions) let you review entire conversations, giving an overview of your bot's performance.
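
To make session review useful, each conversation turn should carry a session and user identifier. A minimal sketch with the Python decorator SDK; the identifiers and the returned reply are placeholders:

```python
from langfuse.decorators import observe, langfuse_context

@observe()
def handle_chat_turn(conversation_id: str, user_id: str, message: str) -> str:
    # Attach session and user metadata to the current trace so all turns of one
    # conversation are grouped into a single session in the Langfuse UI
    langfuse_context.update_current_trace(
        session_id=conversation_id,
        user_id=user_id,
    )
    # ... call your LLM / retrieval logic here ...
    return "This is the chatbot's reply."

handle_chat_turn("conv-123", "user-42", "Hi, I need help with my order.")
```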

**Prompt Management:**

Prompt management helps maintain the quality of your chatbot's responses. By versioning and iterating on your prompts, you can keep the LLM's output accurate and adapt to new use cases through continuous testing.

Langfuse [Prompt Management](/docs/prompts/get-started) allows you to manage and test your prompts via UI and Python SDK. New prompts can be tested in the [LLM Playground](/docs/playground).
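
A minimal sketch of fetching and compiling a managed prompt with the Python SDK; the prompt name `chatbot-system-prompt` and its `product_name` variable are hypothetical and would be defined in the Langfuse UI:

```python
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_* environment variables

# Fetch the current production version of the managed prompt
prompt = langfuse.get_prompt("chatbot-system-prompt")

# Fill in the template variables defined in the prompt
system_message = prompt.compile(product_name="Acme Support")
print(system_message)
```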

### 2. Monitor AI Chatbots in Production

Monitoring your AI chatbot involves the following; a minimal guardrail sketch follows the list:

- **Tracking latency**: Analyze the latency caused by security checks to ensure they are worth the wait and do not significantly impact performance.
- **Blocking harmful prompts**: Prevent potentially harmful or inappropriate prompts from being sent to the model.
- **Redacting sensitive PII**: Redact sensitive personally identifiable information before sending it into the model and then un-redact it in the response.
- **Evaluating prompts and completions**: Assess prompts and completions for toxicity, relevance, or sensitive material at run-time and block the response if necessary.
- **Monitoring security scores**: Track security scores over time to evaluate the effectiveness of security measures.
- **LLM cost**: Track the cost of LLM usage.
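
The sketch below illustrates the blocking, redaction, and scoring steps with the decorator SDK. It is not a production guardrail: the blocklist pattern, the PII regex, and the `security` score name are placeholder examples.

```python
import re

from langfuse.decorators import observe, langfuse_context

BLOCKED_PATTERNS = [r"ignore (all|previous) instructions"]  # hypothetical blocklist
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.\w+")

@observe()
def guarded_chat(user_message: str) -> str:
    # Block obviously harmful prompts before they reach the model
    if any(re.search(p, user_message, re.IGNORECASE) for p in BLOCKED_PATTERNS):
        langfuse_context.score_current_trace(name="security", value=0)
        return "Sorry, I can't help with that request."

    # Redact email addresses (a simple PII example) before the LLM call
    redacted = EMAIL_RE.sub("[REDACTED_EMAIL]", user_message)
    langfuse_context.score_current_trace(name="security", value=1)

    # ... send `redacted` to the model and return its reply here ...
    return f"(model would be called with: {redacted})"

print(guarded_chat("My email is jane@example.com, can you help?"))
```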

Langfuse enables you to collect and calculate scores for your model completions. [Model-based evaluations](/docs/scores/model-based-evals/overview) within Langfuse help assess the quality of responses, while [user feedback](/docs/scores/user-feedback) collection offers insights into user satisfaction.

[Annotating](/docs/scores/annotation) observations adds additional context and insights, helping you understand the nuances of your chatbot's interactions.
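
As an example, user feedback collected in your frontend can be attached to the corresponding trace as a score via the low-level client. A minimal sketch; the trace id is a placeholder and would come from your application, e.g. via `langfuse_context.get_current_trace_id()` inside an `@observe()`-decorated function:

```python
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_* environment variables

langfuse.score(
    trace_id="your-trace-id",  # placeholder: pass the id of the trace to score
    name="user-feedback",
    value=1,                   # e.g. 1 = thumbs up, 0 = thumbs down
    comment="User marked the answer as helpful.",
)
langfuse.flush()  # make sure the score is sent before the process exits
```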

### 3. Test Chatbots for Safety and Performance

Before deploying a new version of your chatbot, it is important to track and test its behavior to ensure it performs as expected.

Langfuse allows you to use [Datasets](/docs/datasets/overview) to test expected input and output pairs, benchmarking performance and identifying potential issues.
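
A hedged sketch of such a dataset run with the Python SDK; the dataset name `chatbot-qa`, the run name, and the exact-match metric are illustrative placeholders:

```python
from langfuse import Langfuse

langfuse = Langfuse()
dataset = langfuse.get_dataset("chatbot-qa")  # hypothetical dataset of Q&A pairs

def run_chatbot(question: str) -> str:
    # ... your chatbot logic (LLM call, retrieval, etc.) ...
    return "chatbot answer"

for item in dataset.items:
    # item.observe() links the execution trace to the dataset item and run
    with item.observe(run_name="prompt-v2-experiment") as trace_id:
        output = run_chatbot(item.input)
        # Naive exact-match check as an example evaluation metric
        langfuse.score(
            trace_id=trace_id,
            name="exact_match",
            value=1 if output == item.expected_output else 0,
        )

langfuse.flush()
```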

[Experimentation](/docs/experimentation) features enable you to track versions and releases in your application, maintaining a history of changes and improvements.
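
A minimal sketch of tagging traces with a release and version so runs can be compared across deployments; the values shown are placeholders:

```python
from langfuse.decorators import observe, langfuse_context

@observe()
def chat(message: str) -> str:
    langfuse_context.update_current_trace(
        release="chatbot-2024-09-30",  # e.g. your git tag or deployment id
        version="prompt-v2",           # version of the logic that produced this trace
    )
    # ... chatbot logic ...
    return "reply"

chat("Hello!")
```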

### Start Tracking your Chatbot

The [`@observe()` decorator](/docs/sdk/python/decorators) makes it easy to trace any Python LLM application. In this quickstart we also use the Langfuse [OpenAI integration](/docs/integrations/openai) to automatically capture all model parameters.

Not using OpenAI? Check out how you can [trace any LLM with Langfuse](/docs/get-started).

1. [Create Langfuse account](https://cloud.langfuse.com/auth/sign-up) or [self-host](/docs/deployment/self-host)
2. Create a new project
3. Create new API credentials in the project settings

```bash
pip install langfuse openai
```

```bash
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_HOST="https://cloud.langfuse.com" # 🇪🇺 EU region
# LANGFUSE_HOST="https://us.cloud.langfuse.com" # 🇺🇸 US region
```

```python
from langfuse.decorators import observe
from langfuse.openai import openai # OpenAI integration

@observe()
def story():
    return openai.chat.completions.create(
        model="gpt-3.5-turbo",
        max_tokens=100,
        messages=[
            {"role": "system", "content": "You are a great storyteller."},
            {"role": "user", "content": "Once upon a time in a galaxy far, far away..."}
        ],
    ).choices[0].message.content

@observe()
def main():
    return story()

main()
```

## Resources

- To see chatbot tracing in action, have a look at our interactive demo [here](https://langfuse.com/demo).
- Check out [this guide](https://langfuse.com/blog/qa-chatbot-for-langfuse-docs) to see how we built and instrumented a Q&A chatbot for the Langfuse docs.

Binary file added public/images/blog/faq/chatbot-analytics.png
