From 6c21dc9849a9a2f28ee202f02f4155bcdced09b3 Mon Sep 17 00:00:00 2001
From: Eugene Yurtsev
Date: Tue, 15 Oct 2024 16:35:35 -0400
Subject: [PATCH] fix some links

---
 docs/docs/concepts/llms.mdx          | 2 +-
 docs/docs/concepts/multimodality.mdx | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/docs/concepts/llms.mdx b/docs/docs/concepts/llms.mdx
index c065283da61b2..3520118f9d5c5 100644
--- a/docs/docs/concepts/llms.mdx
+++ b/docs/docs/concepts/llms.mdx
@@ -26,7 +26,7 @@ However, users must know that there are two distinct interfaces for LLMs in Lang
 
 Modern LLMs (aka Chat Models):
 * [Conceptual Guide about Chat Models](/docs/concepts/chat_models/)
-* [Chat Model Integrations](/docs/integrations/chat_models/)
+* [Chat Model Integrations](/docs/integrations/chat/)
 * How-to Guides: [LLMs](/docs/how_to/#chat_models)
 
 Text-in, text-out LLMs (older or lower-level models):

diff --git a/docs/docs/concepts/multimodality.mdx b/docs/docs/concepts/multimodality.mdx
index 6c38e40399ade..508a72225b00d 100644
--- a/docs/docs/concepts/multimodality.mdx
+++ b/docs/docs/concepts/multimodality.mdx
@@ -5,7 +5,7 @@ LLMs are models that operate on sequences of tokens to predict the next token in
 
 Tokens are abstract representations of input data that can take a variety of forms, such as text, code, images, audio, video, and more.
 
-The core technology powering these models, based on the [transformer architectures](https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture), operates on sequences of [tokens](/docs/concepts/tokenization).
+The core technology powering these models, based on the [transformer architectures](https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture), operates on sequences of [tokens](/docs/concepts/tokens).
 
 LLMs are trained to predict the next token in a sequence of tokens.
 Tokens are abstract representations of input data which can take a variety of forms, such as text, code, images, audio, video, but could represent even more abstract input such as DNA sequences, protein sequences, and more.