diff --git a/docs/en/serverless/ai-assistant/ai-assistant.mdx b/docs/en/serverless/ai-assistant/ai-assistant.mdx
index a2130e6c69..ad5bc96694 100644
--- a/docs/en/serverless/ai-assistant/ai-assistant.mdx
+++ b/docs/en/serverless/ai-assistant/ai-assistant.mdx
@@ -301,6 +301,6 @@
 Most LLMs have a set number of tokens they can manage in a single conversation.
 When you reach the token limit, the LLM will throw an error, and Elastic will display a "Token limit reached" error.
 The exact number of tokens that the LLM can support depends on the LLM provider and model you're using.
 If you are using an OpenAI connector, you can monitor token usage in the **OpenAI Token Usage** dashboard.
-For more information, refer to the [OpenAI Connector documentation](((kibana-ref))/openai-action-type.html#openai-connector-token-dashboard)
+For more information, refer to the [OpenAI Connector documentation](((kibana-ref))/openai-action-type.html#openai-connector-token-dashboard).
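
For illustration only (not part of the diff above): a minimal sketch of how a client could estimate a conversation's token usage before hitting a model's limit, assuming OpenAI's `tiktoken` library. The model name, per-message overhead, and `TOKEN_LIMIT` value are assumptions; actual limits vary by provider and model, as the docs text notes.

```python
# Illustrative sketch only: rough token accounting for a chat conversation.
# Uses OpenAI's tiktoken library; limit and overhead values are assumptions.
import tiktoken

TOKEN_LIMIT = 128_000  # assumed context window; varies by provider and model


def conversation_tokens(messages: list[dict], model: str = "gpt-4") -> int:
    """Rough token count for a list of {role, content} chat messages."""
    enc = tiktoken.encoding_for_model(model)
    # Each message adds a few tokens of formatting overhead; 4 is a common
    # rough estimate, not an exact figure for every model.
    return sum(len(enc.encode(m["content"])) + 4 for m in messages)


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize my APM latency spikes."},
]
used = conversation_tokens(messages)
print(f"{used} tokens used; roughly {TOKEN_LIMIT - used} remaining before the limit")
```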