
fix some links
eyurtsev committed Oct 15, 2024
1 parent cb27158 commit 6c21dc9
Showing 2 changed files with 2 additions and 2 deletions.
docs/docs/concepts/llms.mdx (1 addition, 1 deletion)

@@ -26,7 +26,7 @@ However, users must know that there are two distinct interfaces for LLMs in Lang
Modern LLMs (aka Chat Models):

* [Conceptual Guide about Chat Models](/docs/concepts/chat_models/)
-* [Chat Model Integrations](/docs/integrations/chat_models/)
+* [Chat Model Integrations](/docs/integrations/chat/)
* How-to Guides: [LLMs](/docs/how_to/#chat_models)

Text-in, text-out LLMs (older or lower-level models):
docs/docs/concepts/multimodality.mdx (1 addition, 1 deletion)

@@ -5,7 +5,7 @@ LLMs are models that operate on sequences of tokens to predict the next token in

Tokens are abstract representations of input data that can take a variety of forms, such as text, code, images, audio, video, and more.

-The core technology powering these models, based on the [transformer architectures](https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture), operates on sequences of [tokens](/docs/concepts/tokenization).
+The core technology powering these models, based on the [transformer architectures](https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture), operates on sequences of [tokens](/docs/concepts/tokens).

LLMs are trained to predict the next token in a sequence of tokens. Tokens are abstract representations of input data which can take a variety of forms, such as text, code, images, audio, video, but could represent even more abstract input such as DNA sequences, protein sequences, and more.

