From 55b4391ff059a22101487bbc593310ee8b97026c Mon Sep 17 00:00:00 2001
From: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
Date: Thu, 25 Apr 2024 15:13:35 -0700
Subject: [PATCH] name

---
 docs/tracing/faq/custom_llm_token_counting.mdx | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/docs/tracing/faq/custom_llm_token_counting.mdx b/docs/tracing/faq/custom_llm_token_counting.mdx
index 51182e3d..bab1e647 100644
--- a/docs/tracing/faq/custom_llm_token_counting.mdx
+++ b/docs/tracing/faq/custom_llm_token_counting.mdx
@@ -1,7 +1,6 @@
----
-sidebar_label: Customize Trace Attributes
+---
+sidebar_label: Token Counting for Custom LLMs
 sidebar_position: 7
-
 ---
 
 # Custom LLM Token Counting
@@ -9,6 +8,10 @@ sidebar_position: 7
 This guide shows how to get your custom functions to have their token counts tracked by LangSmith.
 The key is to coerce your inputs and outputs to conform to a minimal version of OpenAI's API format.
 We will review adding support for both chat models (LLMs that accept a list of chat messages as input and return a chat message) and completion models (models that accept a string as input and return a string).
+:::note
+This guide assumes you are using the `traceable` decorator, though the same principles can be applied to other tracing methods.
+:::
+
 ## Chat Models (messages in, completion message out)
 
 For chat models, your inputs must contain a list of messages. The output must be an object that, when serialized, contains the key `choices` with a list of dicts. Each dict must contain the key `message` with a dict value, and that message dict must contain the key `content` with a string value and the key `role`.
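
The following is a minimal sketch of the chat-model format the patched page describes. It assumes LangSmith's `traceable` decorator (from the `langsmith` Python package) with `run_type="llm"` so the run is recorded as an LLM call; `my_chat_model` and its canned reply are hypothetical stand-ins for a real model call:

```python
from langsmith import traceable


# Hypothetical chat model: takes a list of OpenAI-style message dicts and
# returns an object that serializes with `choices` -> list of dicts, each
# holding a `message` dict with `role` and `content` keys, which is the
# minimal shape LangSmith needs to track token counts for a chat model.
@traceable(run_type="llm")
def my_chat_model(messages: list) -> dict:
    # A real implementation would call your model here; a canned reply
    # stands in for illustration.
    return {
        "choices": [
            {
                "message": {
                    "role": "assistant",
                    "content": "Sure! What would you like to know?",
                }
            }
        ]
    }


# Usage: pass a list of OpenAI-format chat messages as input.
my_chat_model([{"role": "user", "content": "What is LangSmith?"}])
```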