---
title: Chatbot Analytics - How to Improve your AI Chatbot
tags: [product]
---

# Chatbot Analytics: How to Improve your AI Chatbot with Langfuse

<Frame border fullWidth>
![Chatbot Analytics](/images/blog/faq/chatbot-analytics.png)
</Frame>

Monitoring and testing AI chatbots is important because of the **unique challenges** of building LLM applications. Unlike traditional software engineering, LLM-based applications involve **complex, repeated, and chained calls** to foundation models, which makes debugging difficult.

[Langfuse](https://langfuse.com) is an open-source tool that simplifies this by capturing the full context of an AI chatbot application, allowing developers to trace and control the flow of interactions.

Langfuse supports various integrations, including [OpenAI](/docs/integrations/openai/python/get-started), [Langchain](/docs/integrations/langchain/tracing), [LlamaIndex](/docs/integrations/llama-index/get-started), and [more](/docs/integrations/overview).

**In this guide, we will cover how to:**

1. Develop robust AI chatbots
2. Monitor chatbots in production
3. Test chatbots for safety and performance

### 1. Develop Robust AI Chatbots

**Instrument your Application:**

When developing an AI chatbot, it is helpful to instrument your application to capture all chatbot interactions. This allows you to monitor and debug your chatbot's performance in real time. By tracking all LLM calls and other relevant logic in your chatbot application, you gain insight into how it behaves.

Langfuse integrates with various platforms such as [OpenAI, Langchain, LlamaIndex and LiteLLM](/docs/integrations/overview), providing flexibility for different use cases. Additionally, the [Langfuse API](https://api.reference.langfuse.com) enables you to tailor the monitoring to your specific needs.
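
For example, if your chatbot is built with Langchain, passing the Langfuse callback handler into a chain captures every step of the run. A minimal sketch, assuming the Langchain integration and an OpenAI-backed chain; the prompt, model, and question are illustrative:

```python
from langfuse.callback import CallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY and LANGFUSE_HOST from the environment
langfuse_handler = CallbackHandler()

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful support chatbot."),
    ("user", "{question}"),
])
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo")

# All chain and LLM steps of this invocation are traced in Langfuse
response = chain.invoke(
    {"question": "How do I reset my password?"},
    config={"callbacks": [langfuse_handler]},
)
```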

**Analyze Traces:**

The Langfuse UI helps you inspect and debug complex logs and user sessions. It allows you to understand the flow of interactions and identify issues as they arise. Langfuse [tracing](/docs/tracing) offers information on single chatbot generations, while [sessions](/docs/tracing-features/sessions) let you review entire conversations, providing an overview of your bot's performance.
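
To make use of sessions, attach a session and user identifier to each trace so that all turns of a conversation are grouped together. A minimal sketch using the Python decorator integration; `generate_reply` is a hypothetical helper that wraps your LLM call:

```python
from langfuse.decorators import observe, langfuse_context


@observe()
def handle_message(user_id: str, session_id: str, message: str) -> str:
    # Group all turns of this conversation into one Langfuse session
    # and attribute the trace to the user who sent the message.
    langfuse_context.update_current_trace(
        user_id=user_id,
        session_id=session_id,
        tags=["chatbot"],
    )
    return generate_reply(message)  # hypothetical: your LLM call(s) go here
```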

**Prompt Management:**

Prompt management is helpful for maintaining the quality of your chatbot's responses. By managing and optimizing prompts, you can ensure that the LLM generates accurate responses. This involves continuously testing prompts and adapting them to new use cases.

Langfuse [Prompt Management](/docs/prompts/get-started) allows you to manage and test your prompts via the UI and the Python SDK. New prompts can be tested in the [LLM Playground](/docs/playground).
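
In code, prompts managed in Langfuse can be fetched at runtime instead of being hard-coded. A minimal sketch; the prompt name and template variable are illustrative and assume a prompt with that name exists in your project:

```python
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_* credentials from the environment

# Fetch the prompt managed in Langfuse
prompt = langfuse.get_prompt("chatbot-system-prompt")

# Fill in the template variables defined in the prompt
system_message = prompt.compile(product_name="Acme Support")
```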

### 2. Monitor AI Chatbots in Production

Monitoring your AI chatbot involves:

- **Tracking latency**: Analyze the latency added by security checks to ensure they are worth the wait and do not significantly impact performance.
- **Blocking harmful prompts**: Prevent potentially harmful or inappropriate prompts from being sent to the model.
- **Redacting sensitive PII**: Redact sensitive personally identifiable information before sending it to the model and then un-redact it in the response (a minimal redaction sketch follows this list).
- **Evaluating prompts and completions**: Assess prompts and completions for toxicity, relevance, or sensitive material at run-time and block the response if necessary.
- **Monitoring security scores**: Track security scores over time to evaluate the effectiveness of security measures.
- **Tracking LLM cost**: Track the cost of LLM usage.
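
Blocking and redaction happen in your application code before the request reaches the model; Langfuse then records what was actually sent. A minimal sketch of the redaction step, using a simple regex for email addresses as a stand-in for a real PII detector:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")


def redact(text: str) -> tuple[str, dict[str, str]]:
    """Replace email addresses with placeholders before the LLM call."""
    mapping: dict[str, str] = {}

    def _sub(match: re.Match) -> str:
        key = f"<EMAIL_{len(mapping)}>"
        mapping[key] = match.group(0)
        return key

    return EMAIL.sub(_sub, text), mapping


def unredact(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's response."""
    for key, value in mapping.items():
        text = text.replace(key, value)
    return text
```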

Langfuse enables you to collect and calculate scores for your model completions. [Model-based evaluations](/docs/scores/model-based-evals/overview) within Langfuse help assess the quality of responses, while [user feedback](/docs/scores/user-feedback) collection offers insights into user satisfaction.
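
User feedback can be written back to the trace that produced a given answer. A minimal sketch, assuming your chat UI exposes a thumbs-up/down control and that you kept the trace ID of the answer; the function and score names are illustrative:

```python
from langfuse import Langfuse

langfuse = Langfuse()


def record_feedback(trace_id: str, thumbs_up: bool) -> None:
    # Attach the user's rating to the trace of the answer they rated
    langfuse.score(
        trace_id=trace_id,
        name="user-feedback",
        value=1 if thumbs_up else 0,
        comment="in-chat thumbs rating",
    )
```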

[Annotating](/docs/scores/annotation) observations adds additional context and insights, helping you understand the nuances of your chatbot's interactions.

### 3. Test Chatbots for Safety and Performance

Before deploying a new version of your chatbot, it is important to track and test its behavior to ensure it performs as expected.

Langfuse allows you to use [Datasets](/docs/datasets/overview) to test your chatbot against expected input and output pairs, benchmarking performance and identifying potential issues.
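
A dataset can be built up from production traces or created programmatically. A minimal sketch of creating a dataset and running a new chatbot version against it; the dataset name, items, and `generate_reply` helper are illustrative:

```python
from langfuse import Langfuse

langfuse = Langfuse()

# One-time setup: a dataset of expected question/answer pairs
langfuse.create_dataset(name="chatbot-regression")
langfuse.create_dataset_item(
    dataset_name="chatbot-regression",
    input={"question": "How do I reset my password?"},
    expected_output={"answer": "Use the 'Forgot password' link on the login page."},
)

# Before a release: run the current chatbot version against every item
dataset = langfuse.get_dataset("chatbot-regression")
for item in dataset.items:
    reply = generate_reply(item.input["question"])  # hypothetical chatbot entry point
    # ...compare `reply` to item.expected_output and record a score
```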

[Experimentation](/docs/experimentation) features enable you to track versions and releases in your application, maintaining a history of changes and improvements.
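
A release identifier can be set on the Langfuse client so that every trace produced by a deployment is tagged with it and can be compared across versions. A minimal sketch; the release string is illustrative and, as an assumption, could also be supplied via the `LANGFUSE_RELEASE` environment variable:

```python
from langfuse import Langfuse

# Every trace created by this client is associated with the given release
langfuse = Langfuse(release="chatbot-v1.4.0")
```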

### Start Tracking your Chatbot

The [`@observe()` decorator](/docs/sdk/python/decorators) makes it easy to trace any Python LLM application. In this quickstart we also use the Langfuse [OpenAI integration](/docs/integrations/openai) to automatically capture all model parameters.

Not using OpenAI? Check out how you can [trace any LLM with Langfuse](/docs/get-started).

1. [Create a Langfuse account](https://cloud.langfuse.com/auth/sign-up) or [self-host](/docs/deployment/self-host)
2. Create a new project
3. Create new API credentials in the project settings

```bash
pip install langfuse openai
```

Set your Langfuse API keys as environment variables:

```bash
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_HOST="https://cloud.langfuse.com" # 🇪🇺 EU region
# LANGFUSE_HOST="https://us.cloud.langfuse.com" # 🇺🇸 US region
```

```python
from langfuse.decorators import observe
from langfuse.openai import openai  # OpenAI integration


@observe()
def story():
    return openai.chat.completions.create(
        model="gpt-3.5-turbo",
        max_tokens=100,
        messages=[
            {"role": "system", "content": "You are a great storyteller."},
            {"role": "user", "content": "Once upon a time in a galaxy far, far away..."}
        ],
    ).choices[0].message.content


@observe()
def main():
    return story()


main()
```

## Resources

- To see chatbot tracing in action, check out our [interactive demo](https://langfuse.com/demo).
- Have a look at [this guide](https://langfuse.com/blog/qa-chatbot-for-langfuse-docs) to see how we built and instrumented a chatbot for the Langfuse docs.