hinthornw committed Apr 25, 2024
1 parent b377e95 commit 55b4391
Showing 1 changed file with 6 additions and 3 deletions.
9 changes: 6 additions & 3 deletions docs/tracing/faq/custom_llm_token_counting.mdx
@@ -1,14 +1,17 @@
 ---
-sidebar_label: Customize Trace Attributes
+sidebar_label: Token Counting for Custom LLMs
+sidebar_position: 7
 ---

# Custom LLM Token Counting

This guide shows how to have LangSmith track token counts for your custom LLM functions. The key is to coerce your inputs and outputs to conform to a minimal version of OpenAI's API format.
We will cover both chat models (LLMs that take a list of chat messages as input and return a chat message) and completion models (models that take a string as input and return a string).

:::note
This guide assumes you are using the `traceable` decorator, though the same principles can be applied to other tracing methods.
:::

## Chat Models (messages in, completion message out)

For chat models, your function must accept a list of messages as input. It must return an object that, when serialized, contains the key `choices` with a list of dicts. Each dict must contain the key `message` whose value is itself a dict with a string-valued `content` key and a `role` key.
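A minimal sketch of that output shape (the model reply and token counts below are invented for illustration; in practice you would decorate the function with `@traceable(run_type="llm")` from the `langsmith` package, omitted here so the payload shape stays the focus):

```python
def my_chat_model(messages: list[dict]) -> dict:
    # Call your custom model here; we fake a reply for illustration.
    reply = "Hello from a custom model"
    return {
        # OpenAI-style "choices" list: each entry holds a "message" dict
        # with "role" and string "content" keys.
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": reply},
            }
        ],
        # Token counts in OpenAI's usage format (values here are invented;
        # substitute your tokenizer's real counts).
        "usage": {
            "prompt_tokens": 7,
            "completion_tokens": 5,
            "total_tokens": 12,
        },
    }

result = my_chat_model([{"role": "user", "content": "Hi"}])
```

When serialized, `result` satisfies the shape described above, so LangSmith can read the message content and the token counts from the trace.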
