From db3bf396b84a9d3c2e6266a5c85bf0eb78a3d0d4 Mon Sep 17 00:00:00 2001
From: Colleen McGinnis
Date: Fri, 24 May 2024 16:02:11 -0500
Subject: [PATCH 1/2] Update ai-assistant.mdx

---
 docs/en/serverless/ai-assistant/ai-assistant.mdx | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/en/serverless/ai-assistant/ai-assistant.mdx b/docs/en/serverless/ai-assistant/ai-assistant.mdx
index b26544a236..25e6ee6424 100644
--- a/docs/en/serverless/ai-assistant/ai-assistant.mdx
+++ b/docs/en/serverless/ai-assistant/ai-assistant.mdx
@@ -6,6 +6,7 @@ title: AI Assistant
 tags: [ 'serverless', 'observability', 'overview' ]
 ---
+
 
 The AI Assistant uses generative AI to provide:

From 334601df029d16f9fa1e3ec0b92d9a8055a86869 Mon Sep 17 00:00:00 2001
From: Colleen McGinnis
Date: Fri, 24 May 2024 16:23:06 -0500
Subject: [PATCH 2/2] Update ai-assistant.mdx

---
 docs/en/serverless/ai-assistant/ai-assistant.mdx | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/en/serverless/ai-assistant/ai-assistant.mdx b/docs/en/serverless/ai-assistant/ai-assistant.mdx
index 25e6ee6424..32a62d2258 100644
--- a/docs/en/serverless/ai-assistant/ai-assistant.mdx
+++ b/docs/en/serverless/ai-assistant/ai-assistant.mdx
@@ -247,3 +247,5 @@ You can continue a conversation from a contextual prompt by clicking **Start cha
 Most LLMs have a set number of tokens they can manage in single a conversation.
 When you reach the token limit, the LLM will throw an error, and Elastic will display a "Token limit reached" error.
 The exact number of tokens that the LLM can support depends on the LLM provider and model you're using.
+
+