From edaff27dc933b05a44fe2639d0f2a5e65d71c7d3 Mon Sep 17 00:00:00 2001
From: DeDe Morton
Date: Tue, 11 Jun 2024 14:58:02 -0700
Subject: [PATCH] Update docs/en/serverless/ai-assistant/ai-assistant.mdx

Co-authored-by: Mike Birnstiehl <114418652+mdbirnstiehl@users.noreply.github.com>
---
 docs/en/serverless/ai-assistant/ai-assistant.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/serverless/ai-assistant/ai-assistant.mdx b/docs/en/serverless/ai-assistant/ai-assistant.mdx
index a2130e6c69..ad5bc96694 100644
--- a/docs/en/serverless/ai-assistant/ai-assistant.mdx
+++ b/docs/en/serverless/ai-assistant/ai-assistant.mdx
@@ -301,6 +301,6 @@
 Most LLMs have a set number of tokens they can manage in single a conversation.
 When you reach the token limit, the LLM will throw an error, and Elastic will display a "Token limit reached" error.
 The exact number of tokens that the LLM can support depends on the LLM provider and model you're using.

 If you are using an OpenAI connector, you can monitor token usage in **OpenAI Token Usage** dashboard.
-For more information, refer to the [OpenAI Connector documentation](((kibana-ref))/openai-action-type.html#openai-connector-token-dashboard)
+For more information, refer to the [OpenAI Connector documentation](((kibana-ref))/openai-action-type.html#openai-connector-token-dashboard).