Commit

eyurtsev committed Oct 31, 2024
1 parent c335c23 commit 6f2c2b4
Showing 8 changed files with 9 additions and 9 deletions.
2 changes: 1 addition & 1 deletion docs/docs/concepts/async.mdx
@@ -73,7 +73,7 @@ in certain scenarios.

If you are experiencing issues with streaming, callbacks or tracing in async code and are using Python 3.9 or 3.10, this is a likely cause.

-Please read [Propagation RunnableConfig](/docs/concepts/runnables/#propagation-runnableconfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
+Please read [Propagation RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).

## How to use in ipython and jupyter notebooks

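For context on the fixed link's subject: on Python 3.9/3.10, `RunnableConfig` is not propagated automatically across async boundaries, so it must be passed to sub-calls by hand. A minimal sketch (the sub-chain here is a stand-in, not from the docs page):

```python
from langchain_core.runnables import RunnableConfig, RunnableLambda

# A trivial sub-chain standing in for a model call.
sub_chain = RunnableLambda(lambda text: text.upper())

async def process(text: str, config: RunnableConfig) -> str:
    # On Python 3.9/3.10, pass `config` explicitly so callbacks and
    # tracing reach the sub-call; on 3.11+ contextvars handle this.
    return await sub_chain.ainvoke(text, config=config)

chain = RunnableLambda(process)
# await chain.ainvoke("hello")  # callbacks attached here now see the sub-call
```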
2 changes: 1 addition & 1 deletion docs/docs/concepts/chat_history.mdx
@@ -24,7 +24,7 @@ So a full conversation often involves a combination of two patterns of alternati

## Managing chat history

-Since chat models have a maximum limit on input size, it's important to manage chat history and trim it as needed to avoid exceeding the [context window](/docs/concepts/chat_models/#context_window).
+Since chat models have a maximum limit on input size, it's important to manage chat history and trim it as needed to avoid exceeding the [context window](/docs/concepts/chat_models/#context-window).

While processing chat history, it's essential to preserve a correct conversation structure.

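The trimming the changed line refers to is typically done with `trim_messages`; a minimal sketch, with an illustrative budget and a deliberately crude token counter:

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)

history = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("Hi, I'm Bob."),
    AIMessage("Hello Bob!"),
    HumanMessage("What's my name?"),
]

# Keep the most recent messages that fit the budget while preserving
# the system message and a human-first conversation structure.
trimmed = trim_messages(
    history,
    max_tokens=2,        # illustrative budget
    strategy="last",
    token_counter=len,   # crude: counts each message as one "token"
    include_system=True,
    start_on="human",
)
```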
2 changes: 1 addition & 1 deletion docs/docs/concepts/chat_models.mdx
@@ -22,7 +22,7 @@ LangChain provides a consistent interface for working with chat models from diff
* Standard API for structuring outputs (/docs/concepts/structured_outputs) via the `with_structured_output` method.
* Provides support for [async programming](/docs/concepts/async), [efficient batching](/docs/concepts/runnables/#optimized-parallel-execution-batch), [a rich streaming API](/docs/concepts/streaming).
* Integration with [LangSmith](https://docs.smith.langchain.com) for monitoring and debugging production-grade applications based on LLMs.
-* Additional features like standardized [token usage](/docs/concepts/messages/#aimessage), [rate limiting](#rate-limiting), [caching](#cache) and more.
+* Additional features like standardized [token usage](/docs/concepts/messages/#aimessage), [rate limiting](#rate-limiting), [caching](#caching) and more.

## Integrations

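As background for the feature list this hunk touches, a sketch of the standardized chat-model surface; the model name and schema are illustrative and assume `langchain-openai` is installed:

```python
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class Joke(BaseModel):
    setup: str
    punchline: str

model = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice

# Structured outputs via with_structured_output.
joke = model.with_structured_output(Joke).invoke("Tell me a joke about cats")

# Batching and streaming come from the same Runnable interface.
answers = model.batch(["What is 2+2?", "Name a prime number."])
for chunk in model.stream("Write one sentence about the sea."):
    print(chunk.content, end="")
```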
4 changes: 2 additions & 2 deletions docs/docs/concepts/index.mdx
@@ -54,7 +54,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[Context window](/docs/concepts/chat_models#context-window)**: The maximum size of input a chat model can process.
- **[Conversation patterns](/docs/concepts/chat_history#conversation-patterns)**: Common patterns in chat interactions.
- **[Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html)**: LangChain's representation of a document.
-- **[Embedding models](/docs/concepts/multimodality/##multimodality-in-embedding-models)**: Models that generate vector embeddings for various data types.
+- **[Embedding models](/docs/concepts/multimodality/#multimodality-in-embedding-models)**: Models that generate vector embeddings for various data types.
- **[HumanMessage](/docs/concepts/messages#humanmessage)**: Represents a message from a human user.
- **[InjectedState](/docs/concepts/tools#injectedstate)**: A state injected into a tool function.
- **[InjectedStore](/docs/concepts/tools#injectedstore)**: A store that can be injected into a tool for data persistence.
@@ -70,7 +70,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[langserve](/docs/concepts/architecture#langserve)**: Use to deploy LangChain Runnables as REST endpoints. Uses FastAPI. Works primarily for LangChain Runnables, does not currently integrate with LangGraph.
- **[Managing chat history](/docs/concepts/chat_history#managing-chat-history)**: Techniques to maintain and manage the chat history.
- **[OpenAI format](/docs/concepts/messages#openai-format)**: OpenAI's message format for chat models.
-- **[Propagation of RunnableConfig](/docs/concepts/runnables/#propagation-runnableconfig)**: Propagating configuration through Runnables. Read if working with python 3.9, 3.10 and async.
+- **[Propagation of RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig)**: Propagating configuration through Runnables. Read if working with python 3.9, 3.10 and async.
- **[rate-limiting](/docs/concepts/chat_models#rate-limiting)**: Client side rate limiting for chat models.
- **[RemoveMessage](/docs/concepts/messages/#removemessage)**: An abstraction used to remove a message from chat history, used primarily in LangGraph.
- **[role](/docs/concepts/messages#role)**: Represents the role (e.g., user, assistant) of a chat message.
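Several of the glossary entries above (HumanMessage, role, OpenAI format) describe message representations; a small sketch of how they relate (the model call is illustrative):

```python
from langchain_core.messages import HumanMessage

# A LangChain message object and its OpenAI-format equivalent; chat
# models accept either form as input.
as_object = HumanMessage("What is the capital of France?")
as_openai_dict = {"role": "user", "content": "What is the capital of France?"}
# model.invoke([as_object]) and model.invoke([as_openai_dict]) behave the same.
```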
2 changes: 1 addition & 1 deletion docs/docs/concepts/lcel.mdx
@@ -20,7 +20,7 @@ We often refer to a `Runnable` created using LCEL as a "chain". It's important t

LangChain optimizes the run-time execution of chains built with LCEL in a number of ways:

-- **Optimize parallel execution**: Run Runnables in parallel using [RunnableParallel](#RunnableParallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables/#optimized-parallel-execution-batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially.
+- **Optimize parallel execution**: Run Runnables in parallel using [RunnableParallel](#runnableparallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables/#optimized-parallel-execution-batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially.
- **Guarantee Async support**: Any chain built with LCEL can be run asynchronously using the [Runnable Async API](/docs/concepts/runnables/#asynchronous-support). This can be useful when running chains in a server environment where you want to handle large number of requests concurrently.
- **Simplify streaming**: LCEL chains can be streamed, allowing for incremental output as the chain is executed. LangChain can optimize the streaming of the output to minimize the time-to-first-token(time elapsed until the first chunk of output from a [chat model](/docs/concepts/chat_models) or [llm](/docs/concepts/text_llms) comes out).

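A minimal sketch of the two parallel-execution paths named in the changed bullet; the component runnables are illustrative:

```python
from langchain_core.runnables import RunnableLambda, RunnableParallel

shout = RunnableLambda(lambda s: s.upper())
count = RunnableLambda(lambda s: len(s))

# RunnableParallel runs both branches concurrently on a single input...
fanout = RunnableParallel(shouted=shout, length=count)
print(fanout.invoke("hello"))        # {'shouted': 'HELLO', 'length': 5}

# ...while the batch API runs one chain over many inputs in parallel.
print(shout.batch(["a", "b", "c"]))  # ['A', 'B', 'C']
```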
2 changes: 1 addition & 1 deletion docs/docs/concepts/runnables.mdx
@@ -312,7 +312,7 @@ Please read the [Callbacks Conceptual Guide](/docs/concepts/callbacks) for more
:::important
If you're using Python 3.9 or 3.10 in an async environment, you must propagate
the `RunnableConfig` manually to sub-calls in some cases. Please see the
-[Propagating RunnableConfig](#propagation-of-RunnableConfig) section for more information.
+[Propagating RunnableConfig](#propagation-of-runnableconfig) section for more information.
:::

## Creating a runnable from a function
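For the section heading visible at the end of this hunk, the usual pattern is the `@chain` decorator (or `RunnableLambda` directly); a minimal sketch:

```python
from langchain_core.runnables import chain

@chain
def to_upper(text: str) -> str:
    # Plain functions become Runnables; RunnableLambda(to_upper)
    # is the explicit equivalent of this decorator.
    return text.upper()

print(to_upper.invoke("hello"))  # 'HELLO'
```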
2 changes: 1 addition & 1 deletion docs/docs/concepts/tools.mdx
@@ -160,7 +160,7 @@ The `config` will not be part of the tool's schema and will be injected at runti
:::note
You may need to access the `config` object to manually propagate it to subclass. This happens if you're working with python 3.9 / 3.10 in an [async](/docs/concepts/async) environment and need to manually propagate the `config` object to sub-calls.

-Please read [Propagation RunnableConfig](/docs/concepts/runnables/#propagation-runnableconfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
+Please read [Propagation RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
:::

### InjectedState
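The note this hunk edits concerns tools that accept a `RunnableConfig`; a sketch with an illustrative tool body:

```python
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool

@tool
async def summarize(text: str, config: RunnableConfig) -> str:
    """Summarize the given text."""
    # `config` is injected at runtime and excluded from the tool's schema;
    # on Python 3.9/3.10, pass it along to any sub-calls explicitly.
    return text[:50]
```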
2 changes: 1 addition & 1 deletion docs/docs/concepts/why_langchain.mdx
@@ -102,7 +102,7 @@ See our video playlist on [LangSmith tracing and evaluations](https://youtube.co

LangChain offers standard interfaces for components that are central to many AI applications, which offers a few specific advantages:
- **Ease of swapping providers:** It allows you to swap out different component providers without having to change the underlying code.
-- **Advanced features:** It provides common methods for more advanced features, such as [streaming](/docs/concepts/runnables/#streaming) and [tool calling](/docs/concepts/tool_calling/).
+- **Advanced features:** It provides common methods for more advanced features, such as [streaming](/docs/concepts/streaming) and [tool calling](/docs/concepts/tool_calling/).

[LangGraph](https://langchain-ai.github.io/langgraph/concepts/high_level/) makes it possible to orchestrate complex applications (e.g., [agents](/docs/concepts/agents/)) and provide features like including [persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/), [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/), or [memory](https://langchain-ai.github.io/langgraph/concepts/memory/).

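A sketch of the provider-swapping advantage mentioned above; the model identifiers are illustrative and assume the matching integration packages are installed:

```python
from langchain.chat_models import init_chat_model

# The downstream code is identical regardless of provider; only the
# model identifier changes.
model = init_chat_model("gpt-4o-mini", model_provider="openai")
# model = init_chat_model("claude-3-5-sonnet-latest", model_provider="anthropic")

for chunk in model.stream("Say hello in French."):
    print(chunk.content, end="")
```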
