docs: improve assistants notebook (#653)
marcklingen authored May 29, 2024
1 parent 5eb93c3 commit 43fdf25
Showing 4 changed files with 82 additions and 37 deletions.
45 changes: 32 additions & 13 deletions cookbook/integration_openai_assistants.ipynb
@@ -27,6 +27,10 @@
"\n",
"The Assistants API from OpenAI allows developers to build AI assistants that can utilize multiple tools and data sources in parallel, such as code interpreters, file search, and custom tools created by calling functions. These assistants can access OpenAI's language models like GPT-4 with specific prompts, maintain persistent conversation histories, and process various file formats like text, images, and spreadsheets. Developers can fine-tune the language models on their own data and control aspects like output randomness. The API provides a framework for creating AI applications that combine language understanding with external tools and data.\n",
"\n",
"## Example Trace Output\n",
"\n",
"![OpenAI Assistants Trace in Langfuse](https://langfuse.com/images/docs/openai-assistants-trace.png)\n",
"\n",
"## Setup\n",
"\n",
"Install the required packages:"
@@ -189,7 +193,7 @@
"import json\n",
"from langfuse.decorators import langfuse_context\n",
"\n",
"@observe(as_type=\"generation\")\n",
"@observe()\n",
"def get_response(thread_id, run_id):\n",
" client = OpenAI()\n",
" \n",
@@ -206,8 +210,14 @@
" thread_id=thread_id,\n",
" )\n",
" input_messages = [{\"role\": message.role, \"content\": message.content[0].text.value} for message in message_log.data[::-1][:-1]]\n",
" \n",
" langfuse_context.update_current_observation(\n",
"\n",
" # log internal generation within the openai assistant as a separate child generation to langfuse\n",
" # get langfuse client used by the decorator, uses the low-level Python SDK\n",
" langfuse_client = langfuse_context._get_langfuse()\n",
" # pass trace_id and current observation ids to the newly created child generation\n",
" langfuse_client.generation(\n",
" trace_id=langfuse_context.get_current_trace_id(),\n",
" parent_observation_id=langfuse_context.get_current_observation_id(),\n",
" model=run.model,\n",
" usage=run.usage,\n",
" input=input_messages,\n",
@@ -216,21 +226,15 @@
" \n",
" return assistant_response, run\n",
"\n",
"# wrapper function as we want get_response to be a generation to track tokens\n",
"# -> generations need to have a parent trace\n",
"@observe()\n",
"def get_response_trace(thread_id, run_id):\n",
" return get_response(thread_id, run_id)\n",
"\n",
"response = get_response_trace(thread.id, run.id)\n",
"response = get_response(thread.id, run.id)\n",
"print(f\"Assistant response: {response[0]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/3020450b-e9b7-4c12-b4fe-7288b6324118?observation=a083878e-73dd-4c47-867e-db4e23050fac) of fetching the response**"
"**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/e0933ea5-6806-4eb7-aed8-a42d23c57096?observation=401fb816-22e5-45ac-a4c9-e437b120f2e7) of fetching the response**"
]
},
{
@@ -246,10 +250,15 @@
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"@observe()\n",
"def run_math_tutor(user_input):\n",
" assistant = create_assistant()\n",
" run, thread = run_assistant(assistant.id, user_input)\n",
"\n",
" time.sleep(5) # notebook only, wait for the assistant to finish\n",
"\n",
" response = get_response(thread.id, run.id)\n",
" \n",
" return response[0]\n",
@@ -265,8 +274,18 @@
"source": [
"The Langfuse trace shows the flow of creating the assistant, running it on a thread with user input, and retrieving the response, along with the captured input/output data.\n",
"\n",
"**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/1b2d53ad-f5d2-4f1e-9121-628b5ca1b5b2)**\n",
"\n"
"**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/b3b7b128-5664-4f42-9fab-31999da9e2f1)**\n",
"\n",
"![OpenAI Assistants Trace in Langfuse](https://langfuse.com/images/docs/openai-assistants-trace.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Learn more\n",
"\n",
"If you use non-Assistants API endpoints, you can use the OpenAI SDK wrapper for tracing. Check out the [Langfuse documentation](https://langfuse.com/docs/integrations/openai/python/get-started) for more details."
]
}
],
37 changes: 25 additions & 12 deletions pages/docs/integrations/openai/python/assistants-api.md
@@ -14,6 +14,10 @@ Note: The native [OpenAI SDK wrapper](https://langfuse.com/docs/integrations/ope

The Assistants API from OpenAI allows developers to build AI assistants that can utilize multiple tools and data sources in parallel, such as code interpreters, file search, and custom tools created by calling functions. These assistants can access OpenAI's language models like GPT-4 with specific prompts, maintain persistent conversation histories, and process various file formats like text, images, and spreadsheets. Developers can fine-tune the language models on their own data and control aspects like output randomness. The API provides a framework for creating AI applications that combine language understanding with external tools and data.
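
Before diving into the diff, a minimal sketch of what creating such an assistant looks like with the OpenAI Python SDK (the name, instructions, and model below are illustrative assumptions, not values taken from this commit):

```python
from openai import OpenAI  # assumes openai>=1.x with the beta Assistants API

client = OpenAI()

# illustrative values; the notebook's create_assistant() defines its own
assistant = client.beta.assistants.create(
    name="Math Tutor",
    instructions="You are a personal math tutor. Answer questions step by step.",
    model="gpt-4",
)
print(assistant.id)  # used later to start runs on a thread
```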

## Example Trace Output

![OpenAI Assistants Trace in Langfuse](https://langfuse.com/images/docs/openai-assistants-trace.png)

## Setup

Install the required packages:
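
The collapsed cell below handles installation; a typical setup sketch, assuming the standard Langfuse environment variables (all values are placeholders):

```python
%pip install langfuse openai --upgrade

import os

# placeholder credentials; real keys come from the Langfuse project settings
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"
os.environ["OPENAI_API_KEY"] = "sk-..."
```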
@@ -111,7 +115,7 @@ Retrieve the assistant's response from the thread:
import json
from langfuse.decorators import langfuse_context

@observe(as_type="generation")
@observe()
def get_response(thread_id, run_id):
client = OpenAI()

@@ -128,8 +132,14 @@ def get_response(thread_id, run_id):
thread_id=thread_id,
)
input_messages = [{"role": message.role, "content": message.content[0].text.value} for message in message_log.data[::-1][:-1]]

langfuse_context.update_current_observation(

# log internal generation within the openai assistant as a separate child generation to langfuse
# get langfuse client used by the decorator, uses the low-level Python SDK
langfuse_client = langfuse_context._get_langfuse()
# pass trace_id and current observation ids to the newly created child generation
langfuse_client.generation(
trace_id=langfuse_context.get_current_trace_id(),
parent_observation_id=langfuse_context.get_current_observation_id(),
model=run.model,
usage=run.usage,
input=input_messages,
@@ -138,26 +148,25 @@ def get_response(thread_id, run_id):

return assistant_response, run

# wrapper function as we want get_response to be a generation to track tokens
# -> generations need to have a parent trace
@observe()
def get_response_trace(thread_id, run_id):
return get_response(thread_id, run_id)

response = get_response_trace(thread.id, run.id)
response = get_response(thread.id, run.id)
print(f"Assistant response: {response[0]}")
```

**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/3020450b-e9b7-4c12-b4fe-7288b6324118?observation=a083878e-73dd-4c47-867e-db4e23050fac) of fetching the response**
**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/e0933ea5-6806-4eb7-aed8-a42d23c57096?observation=401fb816-22e5-45ac-a4c9-e437b120f2e7) of fetching the response**
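
One practical note: the decorator SDK sends events to Langfuse in the background, so short-lived scripts (as opposed to notebooks) should flush the queue before exiting. A sketch using the decorator context:

```python
from langfuse.decorators import langfuse_context

response = get_response(thread.id, run.id)

# block until all buffered trace events have been delivered to Langfuse
langfuse_context.flush()
```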

## All in one trace


```python
import time

@observe()
def run_math_tutor(user_input):
assistant = create_assistant()
run, thread = run_assistant(assistant.id, user_input)

time.sleep(5) # notebook only, wait for the assistant to finish

response = get_response(thread.id, run.id)

return response[0]
@@ -169,6 +178,10 @@ print(f"Assistant response: {response}")
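
The fixed `time.sleep(5)` keeps the notebook simple, but in application code you would typically poll the run status instead. A sketch of such a helper, assuming the synchronous `openai` v1 client (`wait_for_run` is a hypothetical name, not part of either SDK):

```python
import time

from openai import OpenAI

def wait_for_run(client: OpenAI, thread_id: str, run_id: str,
                 poll_interval: float = 1.0, timeout: float = 60.0):
    """Poll until the run reaches a terminal status or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
        if run.status in ("completed", "failed", "cancelled", "expired"):
            return run
        time.sleep(poll_interval)
    raise TimeoutError(f"Run {run_id} did not finish within {timeout}s")
```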

The Langfuse trace shows the flow of creating the assistant, running it on a thread with user input, and retrieving the response, along with the captured input/output data.

**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/1b2d53ad-f5d2-4f1e-9121-628b5ca1b5b2)**
**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/b3b7b128-5664-4f42-9fab-31999da9e2f1)**

![OpenAI Assistants Trace in Langfuse](https://langfuse.com/images/docs/openai-assistants-trace.png)

## Learn more

If you use non-Assistants API endpoints, you can use the OpenAI SDK wrapper for tracing. Check out the [Langfuse documentation](https://langfuse.com/docs/integrations/openai/python/get-started) for more details.
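
For orientation, that wrapper is a drop-in import replacement; a minimal sketch (assuming the `langfuse.openai` module from the Langfuse Python SDK, with an illustrative model choice):

```python
# instead of `import openai`, import the traced client from Langfuse
from langfuse.openai import openai

completion = openai.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": "1 + 1 = ?"}],
)
print(completion.choices[0].message.content)  # call is traced in Langfuse
```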
37 changes: 25 additions & 12 deletions pages/guides/cookbook/integration_openai_assistants.md
@@ -14,6 +14,10 @@ Note: The native [OpenAI SDK wrapper](https://langfuse.com/docs/integrations/ope

The Assistants API from OpenAI allows developers to build AI assistants that can utilize multiple tools and data sources in parallel, such as code interpreters, file search, and custom tools created by calling functions. These assistants can access OpenAI's language models like GPT-4 with specific prompts, maintain persistent conversation histories, and process various file formats like text, images, and spreadsheets. Developers can fine-tune the language models on their own data and control aspects like output randomness. The API provides a framework for creating AI applications that combine language understanding with external tools and data.

## Example Trace Output

![OpenAI Assistants Trace in Langfuse](https://langfuse.com/images/docs/openai-assistants-trace.png)

## Setup

Install the required packages:
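
As in the notebook above, the collapsed lines install `langfuse` and `openai`. Once credentials are set, connectivity can be verified with the low-level client; a sketch (assuming `auth_check()` from the Langfuse Python SDK):

```python
from langfuse import Langfuse

langfuse = Langfuse()         # reads the LANGFUSE_* environment variables
assert langfuse.auth_check()  # returns False if host or keys are misconfigured
```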
@@ -111,7 +115,7 @@ Retrieve the assistant's response from the thread:
import json
from langfuse.decorators import langfuse_context

@observe(as_type="generation")
@observe()
def get_response(thread_id, run_id):
client = OpenAI()

@@ -128,8 +132,14 @@ def get_response(thread_id, run_id):
thread_id=thread_id,
)
input_messages = [{"role": message.role, "content": message.content[0].text.value} for message in message_log.data[::-1][:-1]]

langfuse_context.update_current_observation(

# log internal generation within the openai assistant as a separate child generation to langfuse
# get langfuse client used by the decorator, uses the low-level Python SDK
langfuse_client = langfuse_context._get_langfuse()
# pass trace_id and current observation ids to the newly created child generation
langfuse_client.generation(
trace_id=langfuse_context.get_current_trace_id(),
parent_observation_id=langfuse_context.get_current_observation_id(),
model=run.model,
usage=run.usage,
input=input_messages,
@@ -138,26 +148,25 @@ def get_response(thread_id, run_id):

return assistant_response, run

# wrapper function as we want get_response to be a generation to track tokens
# -> generations need to have a parent trace
@observe()
def get_response_trace(thread_id, run_id):
return get_response(thread_id, run_id)

response = get_response_trace(thread.id, run.id)
response = get_response(thread.id, run.id)
print(f"Assistant response: {response[0]}")
```

**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/3020450b-e9b7-4c12-b4fe-7288b6324118?observation=a083878e-73dd-4c47-867e-db4e23050fac) of fetching the response**
**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/e0933ea5-6806-4eb7-aed8-a42d23c57096?observation=401fb816-22e5-45ac-a4c9-e437b120f2e7) of fetching the response**
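
Because `get_response` runs under `@observe()`, the enclosing trace can also be enriched from inside the function; a sketch using the decorator context (`get_response_with_metadata` is hypothetical, and the name, tag, and user id are illustrative):

```python
from langfuse.decorators import langfuse_context, observe

@observe()
def get_response_with_metadata(thread_id, run_id):
    # attach metadata to the enclosing trace for easier filtering in Langfuse
    langfuse_context.update_current_trace(
        name="assistants-api-response",
        tags=["assistants-api"],  # illustrative
        user_id="user-123",       # illustrative
    )
    return get_response(thread_id, run_id)
```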

## All in one trace


```python
import time

@observe()
def run_math_tutor(user_input):
assistant = create_assistant()
run, thread = run_assistant(assistant.id, user_input)

time.sleep(5) # notebook only, wait for the assistant to finish

response = get_response(thread.id, run.id)

return response[0]
@@ -169,6 +178,10 @@ print(f"Assistant response: {response}")

The Langfuse trace shows the flow of creating the assistant, running it on a thread with user input, and retrieving the response, along with the captured input/output data.

**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/1b2d53ad-f5d2-4f1e-9121-628b5ca1b5b2)**
**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/b3b7b128-5664-4f42-9fab-31999da9e2f1)**

![OpenAI Assistants Trace in Langfuse](https://langfuse.com/images/docs/openai-assistants-trace.png)

## Learn more

If you use non-Assistants API endpoints, you can use the OpenAI SDK wrapper for tracing. Check out the [Langfuse documentation](https://langfuse.com/docs/integrations/openai/python/get-started) for more details.
Binary file added public/images/docs/openai-assistants-trace.png
