concept docs: bagatur nits (#27564)
baskaryan authored Oct 23, 2024
1 parent b0be238 commit 3183467
Showing 23 changed files with 145 additions and 157 deletions.
12 changes: 7 additions & 5 deletions docs/docs/concepts/agents.mdx
@@ -1,19 +1,21 @@
# Agents

We recommend that you use [LangGraph](/docs/concepts/architecture#langgraph) for building agents.
By themselves, language models can't take actions - they just output text. Agents are systems that take a high-level task and use an LLM as a reasoning engine to decide what actions to take and execute those actions.

[LangGraph](/docs/concepts/architecture#langgraph) is an extension of LangChain specifically aimed at creating highly controllable and customizable agents. We recommend that you use LangGraph for building agents.
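
As a quick illustration, here is a minimal sketch of a LangGraph agent built with the pre-built `create_react_agent` helper. It assumes the `langgraph` and `langchain-openai` packages are installed, that `OPENAI_API_KEY` is set, and that `get_weather` is a stub tool invented for this example:

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city (stubbed for illustration)."""
    return f"It is always sunny in {city}."

# The model acts as the reasoning engine; the agent loops between the
# model and the tools until it can answer the task.
agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), [get_weather])
result = agent.invoke({"messages": [("user", "What's the weather in Paris?")]})
print(result["messages"][-1].content)
```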

Please see the following resources for more information:

* LangGraph docs for conceptual architecture about [Agents](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/)
* [Pre-built agent in LangGraph](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent)
* LangGraph docs on [common agent architectures](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/)
* [Pre-built agents in LangGraph](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent)

## Legacy Agent Concept: AgentExecutor
## Legacy agent concept: AgentExecutor

LangChain previously introduced the `AgentExecutor` as a runtime for agents.
While it served as an excellent starting point, its limitations became apparent when dealing with more sophisticated and customized agents.
As a result, we're gradually phasing out `AgentExecutor` in favor of more flexible solutions in LangGraph.

### Transitioning from AgentExecutor to LangGraph

If you're currently using `AgentExecutor`, don't worry! We've prepared resources to help you:

16 changes: 7 additions & 9 deletions docs/docs/concepts/async.mdx
@@ -1,20 +1,18 @@
# Async Programming with LangChain
# Async programming with LangChain

:::info Prerequisites
* [Runnable Interface](/docs/concepts/runnables)
* [asyncio documentation](https://docs.python.org/3/library/asyncio.html)
* [Runnable interface](/docs/concepts/runnables)
* [asyncio](https://docs.python.org/3/library/asyncio.html)
:::

## Overview

LLM-based applications often involve a lot of I/O-bound operations, such as making API calls to language models, databases, or other services. Asynchronous programming (or async programming) is a paradigm that allows a program to perform multiple tasks concurrently without blocking the execution of other tasks, improving efficiency and responsiveness, particularly in I/O-bound operations.

:::note
You are expected to be familiar with asynchronous programming in Python before reading this guide. If you are not, please find appropriate resources online to learn how to program asynchronously in Python.
This guide specifically focuses on what you need to know to work with LangChain in an asynchronous context, assuming that you are already familiar with asynchronous programming in Python.
:::

## LangChain Asynchronous APIs
## LangChain asynchronous APIs

Many LangChain APIs are designed to be asynchronous, allowing you to build efficient and responsive applications.
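
As a rough sketch of what this looks like in practice (assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set), the `a`-prefixed async methods can be combined with `asyncio` to run several calls concurrently:

```python
import asyncio

from langchain_openai import ChatOpenAI

async def main() -> None:
    model = ChatOpenAI(model="gpt-4o-mini")
    # ainvoke is the async counterpart of invoke; gather runs both
    # requests concurrently instead of one after the other.
    joke, haiku = await asyncio.gather(
        model.ainvoke("Tell me a joke"),
        model.ainvoke("Write a haiku about the sea"),
    )
    print(joke.content)
    print(haiku.content)

asyncio.run(main())
```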

@@ -41,7 +39,7 @@ the full [Runnable Interface](/docs/concepts/runnables).

For more information, please review the [API reference](https://python.langchain.com/api_reference/) for the specific component you are using.

## Delegation to Sync Methods
## Delegation to sync methods

Most popular LangChain integrations implement asynchronous support of their APIs. For example, the `ainvoke` method of many ChatModel implementations uses the `httpx.AsyncClient` to make asynchronous HTTP requests to the model provider's API.
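
For components that only define a sync implementation, the async method still works by delegating to a thread pool. A minimal sketch using `RunnableLambda` (the `shout` step is a toy stand-in for real work):

```python
import asyncio

from langchain_core.runnables import RunnableLambda

def shout(text: str) -> str:
    """A sync-only step; no async implementation is provided."""
    return text.upper()

runnable = RunnableLambda(shout)

async def main() -> None:
    # ainvoke falls back to running the sync function in an executor,
    # so the event loop is not blocked.
    print(await runnable.ainvoke("hello"))

asyncio.run(main())
```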

@@ -75,9 +73,9 @@ in certain scenarios.

If you are experiencing issues with streaming, callbacks or tracing in async code and are using Python 3.9 or 3.10, this is a likely cause.

Please read [Propagation RunnableConfig](/docs/concepts/runnables#propagation-runnableconfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
Please read [Propagation RunnableConfig](/docs/concepts/runnables#propagation-RunnableConfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).

## How to use in IPython and Jupyter Notebooks
## How to use in IPython and Jupyter notebooks

As of IPython 7.0, IPython supports asynchronous REPLs. This means that you can use the `await` keyword in the IPython REPL and Jupyter Notebooks without any additional setup. For more information, see the [IPython blog post](https://blog.jupyter.org/ipython-7-0-async-repl-a35ce050f7f7).
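
For example, in a Jupyter cell you can await an async LangChain call directly, without wrapping it in `asyncio.run` (sketch assumes `langchain-openai` is installed and `OPENAI_API_KEY` is set):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")
# Top-level await works in IPython >= 7.0 and in Jupyter notebooks.
reply = await model.ainvoke("Hello!")
reply.content
```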

4 changes: 1 addition & 3 deletions docs/docs/concepts/callbacks.mdx
@@ -4,13 +4,11 @@
- [Runnable interface](/docs/concepts/#runnable-interface)
:::

## Overview

LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.

You can subscribe to these events by using the `callbacks` argument available throughout the API. This argument is a list of handler objects, which are expected to implement one or more of the methods described below in more detail.
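
A minimal sketch of a custom handler, assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set:

```python
from langchain_core.callbacks import BaseCallbackHandler
from langchain_openai import ChatOpenAI

class LoggingHandler(BaseCallbackHandler):
    """Logs the start and end of chat model calls."""

    def on_chat_model_start(self, serialized, messages, **kwargs):
        print(f"Chat model started with {len(messages[0])} message(s)")

    def on_llm_end(self, response, **kwargs):
        print("Chat model finished")

model = ChatOpenAI(model="gpt-4o-mini")
model.invoke("Hello!", config={"callbacks": [LoggingHandler()]})
```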

## Callback Events
## Callback events

| Event | Event Trigger | Associated Method |
|------------------|---------------------------------------------|-----------------------|
18 changes: 8 additions & 10 deletions docs/docs/concepts/chat_history.mdx
@@ -1,17 +1,15 @@
# Chat History
# Chat history

:::info Prerequisites

- [Messages](/docs/concepts/messages)
- [Chat Models](/docs/concepts/chat_models)
- [Tool Calling](/docs/concepts/tool_calling)
- [Chat models](/docs/concepts/chat_models)
- [Tool calling](/docs/concepts/tool_calling)
:::

## Overview

Chat history is a record of the conversation between the user and the chat model. It is used to maintain context and state throughout the conversation. The chat history is a sequence of [messages](/docs/concepts/messages), each of which is associated with a specific [role](/docs/concepts/messages#role), such as "user", "assistant", "system", or "tool".
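
In LangChain terms, a chat history is simply a list of message objects, for example:

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

chat_history = [
    SystemMessage("You are a helpful assistant."),   # sets behavior
    HumanMessage("What is the capital of France?"),  # user turn
    AIMessage("The capital of France is Paris."),    # assistant turn
    HumanMessage("And what about Germany?"),         # follow-up relies on context
]
```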

## Conversation Patterns
## Conversation patterns

![Conversation patterns](/img/conversation_patterns.png)

@@ -24,7 +22,7 @@ So a full conversation often involves a combination of two patterns of alternating messages:
1. The **user** and the **assistant** representing a back-and-forth conversation.
2. The **assistant** and **tool messages** representing an ["agentic" workflow](/docs/concepts/agents) where the assistant is invoking tools to perform specific tasks.

## Managing Chat History
## Managing chat history

Since chat models have a maximum limit on input size, it's important to manage chat history and trim it as needed to avoid exceeding the [context window](/docs/concepts/chat_models#context_window).
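
One way to do this is with the `trim_messages` utility; the sketch below counts whole messages rather than real tokens, which keeps the example self-contained:

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)

history = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("Hi!"),
    AIMessage("Hello! How can I help?"),
    HumanMessage("What's the capital of France?"),
]

trimmed = trim_messages(
    history,
    strategy="last",      # keep the most recent messages
    token_counter=len,    # count messages instead of real tokens
    max_tokens=2,         # budget: at most two messages
    include_system=True,  # but always keep the system message
)
```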

@@ -42,7 +40,7 @@ Understanding correct conversation structure is essential for being able to properly implement
[memory](https://langchain-ai.github.io/langgraph/concepts/memory/) in chat models.
:::

## Related Resources
## Related resources

- [How to Trim Messages](https://python.langchain.com/docs/how_to/trim_messages/)
- [Memory Guide](https://langchain-ai.github.io/langgraph/concepts/memory/) for information on implementing short-term and long-term memory in chat models using [LangGraph](https://langchain-ai.github.io/langgraph/).
- [How to trim messages](https://python.langchain.com/docs/how_to/trim_messages/)
- [Memory guide](https://langchain-ai.github.io/langgraph/concepts/memory/) for information on implementing short-term and long-term memory in chat models using [LangGraph](https://langchain-ai.github.io/langgraph/).
31 changes: 16 additions & 15 deletions docs/docs/concepts/chat_models.mdx
@@ -1,36 +1,37 @@
# Chat Models
# Chat models

## Overview

Large Language Models (LLMs) are advanced machine learning models that excel in a wide range of language-related tasks such as text generation, translation, summarization, question answering, and more, without needing task-specific tuning for every scenario.

Modern LLMs are typically accessed through a chat model interface that takes [messages](/docs/concepts/messages) as input and returns [messages](/docs/concepts/messages) as output.
Modern LLMs are typically accessed through a chat model interface that takes a list of [messages](/docs/concepts/messages) as input and returns a [message](/docs/concepts/messages) as output.

The newest generation of chat models offers additional capabilities:

* [Tool Calling](/docs/concepts#tool-calling): Many popular chat models offer a native [tool calling](/docs/concepts#tool-calling) API. This API allows developers to build rich applications that enable AI to interact with external services, APIs, and databases. Tool calling can also be used to extract structured information from unstructured data and perform various other tasks.
* [Tool calling](/docs/concepts#tool-calling): Many popular chat models offer a native [tool calling](/docs/concepts#tool-calling) API. This API allows developers to build rich applications that enable AI to interact with external services, APIs, and databases. Tool calling can also be used to extract structured information from unstructured data and perform various other tasks.
* [Structured output](/docs/concepts/structured_outputs): A technique to make a chat model respond in a structured format, such as JSON that matches a given schema.
* [Multimodality](/docs/concepts/multimodality): The ability to work with data other than text; for example, images, audio, and video.

## Features

LangChain provides a consistent interface for working with chat models from different providers while offering additional features for monitoring, debugging, and optimizing the performance of applications that use LLMs.

* Integrations with many chat model providers (e.g., Anthropic, OpenAI, Ollama, Cohere, Hugging Face, Groq, Microsoft Azure, Google Vertex, Amazon Bedrock). Please see [chat model integrations](/docs/integrations/chat/) for an up-to-date list of supported models.
* Integrations with many chat model providers (e.g., Anthropic, OpenAI, Ollama, Microsoft Azure, Google Vertex, Amazon Bedrock, Hugging Face, Cohere, Groq). Please see [chat model integrations](/docs/integrations/chat/) for an up-to-date list of supported models.
* Use either LangChain's [messages](/docs/concepts/messages) format or OpenAI format.
* Standard [tool calling API](/docs/concepts#tool-calling): standard interface for binding tools to models, accessing tool call requests made by models, and sending tool results back to the model.
* Standard API for [structured outputs](/docs/concepts/structured_outputs) via the `with_structured_output` method.
* Provides support for [async programming](/docs/concepts/async), [efficient batching](/docs/concepts/runnables#batch), and [a rich streaming API](/docs/concepts/streaming).
* Integration with [LangSmith](https://docs.smith.langchain.com) for monitoring and debugging production-grade applications based on LLMs.
* Additional features like standardized [token usage](/docs/concepts/messages#token_usage), [rate limiting](#rate-limiting), [caching](#cache) and more.

## Available Integrations
## Integrations

LangChain has many chat model integrations that allow you to use a wide variety of models from different providers.

These integrations are one of two types:

1. **Official Models**: These are models that are officially supported by LangChain and/or model provider. You can find these models in the `langchain-<provider>` packages.
2. **Community Models**: There are models that are mostly contributed and supported by the community. You can find these models in the `langchain-community` package.
1. **Official models**: These are models that are officially supported by LangChain and/or the model provider. You can find these models in the `langchain-<provider>` packages.
2. **Community models**: These are models that are mostly contributed and supported by the community. You can find these models in the `langchain-community` package.

LangChain chat models are named with a convention that prefixes "Chat" to their class names (e.g., `ChatOllama`, `ChatAnthropic`, `ChatOpenAI`, etc.).

@@ -56,7 +57,7 @@ However, LangChain also has implementations of older LLMs that do not follow the chat model interface.
These models implement the [BaseLLM](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.llms.BaseLLM.html#langchain_core.language_models.llms.BaseLLM) interface and may be named with the "LLM" suffix (e.g., `OllamaLLM`, `AnthropicLLM`, `OpenAILLM`, etc.). Generally, users should not use these models.
:::

### Key Methods
### Key methods

The key methods of a chat model are:

@@ -68,7 +69,7 @@ The key methods of a chat model are:

Other important methods can be found in the [BaseChatModel API Reference](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html).

### Inputs and Outputs
### Inputs and outputs

Modern LLMs are typically accessed through a chat model interface that takes [messages](/docs/concepts/messages) as input and returns [messages](/docs/concepts/messages) as output. Messages are typically associated with a role (e.g., "system", "human", "assistant") and one or more content blocks that contain text or potentially multimodal data (e.g., images, audio, video).
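
A minimal sketch of this interface (assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set):

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")
response = model.invoke(
    [
        SystemMessage("You are a concise assistant."),
        HumanMessage("What is LangChain?"),
    ]
)
print(response.content)  # the reply comes back as an AIMessage
```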

@@ -77,7 +78,7 @@ LangChain supports two message formats to interact with chat models:
1. **LangChain Message Format**: LangChain's own message format, which is used by default and internally by LangChain.
2. **OpenAI's Message Format**: OpenAI's message format.

### Standard Parameters
### Standard parameters

Many chat models have standardized parameters that can be used to configure the model:

@@ -100,12 +101,12 @@ Some important things to note:

ChatModels also accept other parameters that are specific to that integration. To find all the parameters supported by a ChatModel, head to the [API reference](https://python.langchain.com/api_reference/) for that model.
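
As a sketch, here are several of the standard parameters set on a single integration (exact support varies by provider):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="gpt-4o-mini",  # which underlying model to call
    temperature=0,        # lower values give more deterministic output
    max_tokens=256,       # cap on the number of generated tokens
    timeout=30,           # seconds to wait before giving up on a request
    max_retries=2,        # how many times to retry transient failures
)
```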

## Tool Calling
## Tool calling

Chat models can call [tools](/docs/concepts/tools) to perform tasks such as fetching data from a database, making API requests, or running custom code. Please
see the [tool calling](/docs/concepts#tool-calling) guide for more information.
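
A minimal sketch of binding a tool and inspecting the model's tool call (assumes `langchain-openai` is installed and `OPENAI_API_KEY` is set; `multiply` is a toy tool for illustration):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

model_with_tools = ChatOpenAI(model="gpt-4o-mini").bind_tools([multiply])
message = model_with_tools.invoke("What is 6 times 7?")
print(message.tool_calls)  # typically a request to call multiply(6, 7)
```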

## Structured Outputs
## Structured outputs

Chat models can be requested to respond in a particular format (e.g., JSON or matching a particular schema). This feature is extremely
useful for information extraction tasks. Please read more about
@@ -117,15 +118,15 @@ Large Language Models (LLMs) are not limited to processing text. They can also be used to process other types of data, such as images, audio, and video.

Currently, only some LLMs support multimodal inputs, and almost none support multimodal outputs. Please consult the specific model documentation for details.

## Context Window
## Context window

A chat model's context window refers to the maximum size of the input sequence the model can process at one time. While the context windows of modern LLMs are quite large, they still present a limitation that developers must keep in mind when working with chat models.

If the input exceeds the context window, the model may not be able to process the entire input and could raise an error. In conversational applications, this is especially important because the context window determines how much information the model can "remember" throughout a conversation. Developers often need to manage the input within the context window to maintain a coherent dialogue without exceeding the limit. For more details on handling memory in conversations, refer to the [memory guide](https://langchain-ai.github.io/langgraph/concepts/memory/).

The size of the input is measured in [tokens](/docs/concepts/tokens), which are the unit of processing that the model uses.

## Advanced Topics
## Advanced topics

### Rate-limiting

@@ -153,7 +154,7 @@ However, there might be situations where caching chat model responses is beneficial.

Please see the [how to cache chat model responses](/docs/how_to/#chat-model-caching) guide for more details.
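
For instance, a rough sketch of enabling a global in-memory cache (assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set):

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_openai import ChatOpenAI

set_llm_cache(InMemoryCache())

model = ChatOpenAI(model="gpt-4o-mini")
model.invoke("Tell me a joke")  # first call hits the provider API
model.invoke("Tell me a joke")  # identical call is answered from the cache
```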

## Related Resources
## Related resources

* How-to guides on using chat models: [how-to guides](/docs/how_to/#chat-models).
* List of supported chat models: [chat model integrations](/docs/integrations/chat/).
12 changes: 5 additions & 7 deletions docs/docs/concepts/document_loaders.mdx
@@ -3,14 +3,12 @@

:::info[Prerequisites]

* [Document API Reference](https://python.langchain.com/docs/how_to/#document-loaders)
* [Document loaders API reference](https://python.langchain.com/docs/how_to/#document-loaders)
:::

## Overview

Document loaders are designed to load document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc.
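
As a sketch with one concrete loader (assumes `langchain-community` and `pypdf` are installed, and that `example.pdf` is a local file invented for this example):

```python
from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("example.pdf")
documents = loader.load()  # one Document per page

print(documents[0].page_content[:100])  # the text content
print(documents[0].metadata)            # e.g. source file and page number
```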

## Available Integrations
## Integrations

You can find available integrations on the [Document Loaders Integrations page](https://python.langchain.com/docs/integrations/document_loaders/).

@@ -38,10 +36,10 @@ for document in loader.lazy_load():
print(document)
```

## Related Resources
## Related resources

Please see the following resources for more information:

* [How-to guides for document loaders](https://python.langchain.com/docs/how_to/#document-loaders)
* [Document API Reference](https://python.langchain.com/docs/how_to/#document-loaders)
* [Document Loaders Integrations](https://python.langchain.com/docs/integrations/document_loaders/)
* [Document API reference](https://python.langchain.com/docs/how_to/#document-loaders)
* [Document loaders integrations](https://python.langchain.com/docs/integrations/document_loaders/)
8 changes: 3 additions & 5 deletions docs/docs/concepts/embedding_models.mdx
@@ -13,9 +13,7 @@ This conceptual overview focuses on text-based embedding models.
Embedding models can also be [multimodal](/docs/concepts/multimodality) though such models are not currently supported by LangChain.
:::

## Overview

Imagine being able to capture the essence of any text - a tweet, document, or book - in a single, compact representation.
This is the power of embedding models, which lie at the heart of many retrieval systems.
Embedding models transform human language into a format that machines can understand and compare with speed and accuracy.
These models take text as input and produce a fixed-length array of numbers, a numerical fingerprint of the text's semantic meaning.
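
A rough sketch of that idea (assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set), embedding two paraphrases and comparing them with cosine similarity:

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vec_a = embeddings.embed_query("The cat sat on the mat.")
vec_b = embeddings.embed_query("A feline rested on the rug.")
print(len(vec_a))  # a fixed-length vector, regardless of input length

# Cosine similarity: higher values mean more similar meanings.
dot = sum(x * y for x, y in zip(vec_a, vec_b))
norm_a = sum(x * x for x in vec_a) ** 0.5
norm_b = sum(x * x for x in vec_b) ** 0.5
print(dot / (norm_a * norm_b))
```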
@@ -49,7 +47,7 @@ To navigate this variety, researchers and practitioners often turn to benchmarks like the Massive Text Embedding Benchmark (MTEB).

:::

### LangChain Interface
### Interface

LangChain provides a universal interface for working with embedding models, providing standard methods for common operations.
This common interface simplifies interaction with various embedding providers through two central methods:
@@ -89,7 +87,7 @@ query_embedding = embeddings_model.embed_query("What is the meaning of life?")

:::

### Available integrations
### Integrations

LangChain offers many embedding model integrations which you can find [on the embedding models](/docs/integrations/text_embedding/) integrations page.

2 changes: 1 addition & 1 deletion docs/docs/concepts/example_selectors.mdx
@@ -15,6 +15,6 @@ Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them.

**Example Selectors** are classes responsible for selecting and then formatting examples into prompts.
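
A minimal sketch using the length-based selector (the word pairs are toy data; `SemanticSimilarityExampleSelector` is a common alternative):

```python
from langchain_core.example_selectors import LengthBasedExampleSelector
from langchain_core.prompts import PromptTemplate

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
]

example_prompt = PromptTemplate.from_template("Input: {input}\nOutput: {output}")

# Selects as many examples as fit within the length budget.
selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=12,
)
print(selector.select_examples({"input": "big"}))
```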

## Related Resources
## Related resources

* [Example selector how-to guides](/docs/how_to/#example-selectors)