From 88366f44fd7e7166d3246a0a709f858de8a2b744 Mon Sep 17 00:00:00 2001
From: KW
Date: Wed, 30 Oct 2024 12:55:36 +0800
Subject: [PATCH 01/12] Updated the language used

---
 docs/docs/concepts/why_langchain.mdx | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/docs/docs/concepts/why_langchain.mdx b/docs/docs/concepts/why_langchain.mdx
index 1eae06eea3705..da7a7e9717840 100644
--- a/docs/docs/concepts/why_langchain.mdx
+++ b/docs/docs/concepts/why_langchain.mdx
@@ -1,9 +1,9 @@
-# Why langchain?
+# Why LangChain?
 
-The goal of `langchain` the Python package and LangChain the company is to make it as easy possible for developers to build applications that reason.
+The goal of `langchain` the Python package and LangChain the company is to make it as easy as possible for developers to build applications that reason.
 While LangChain originally started as a single open source package, it has evolved into a company and a whole ecosystem.
 This page will talk about the LangChain ecosystem as a whole.
-Most of the components within in the LangChain ecosystem can be used by themselves - so if you feel particularly drawn to certain components but not others, that is totally fine! Pick and choose whichever components you like best.
+Most of the components within the LangChain ecosystem can be used by themselves - so if you feel particularly drawn to certain components but not others, that is totally fine! Pick and choose whichever components you like best for your own use case!
 
 ## Features
 
@@ -17,8 +17,8 @@ LangChain exposes a standard interface for key components, making it easy to swi
 [Orchestration](https://en.wikipedia.org/wiki/Orchestration_(computing)) is crucial for building such applications.
 
 3. **Observability and evaluation:** As applications become more complex, it becomes increasingly difficult to understand what is happening within them.
-Furthermore, the pace of development can become rate-limited by the [paradox of choice](https://en.wikipedia.org/wiki/Paradox_of_choice):
-for example, developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost.
+Furthermore, the pace of development can become rate-limited by the [paradox of choice](https://en.wikipedia.org/wiki/Paradox_of_choice).
+For example, developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost.
 [Observability](https://en.wikipedia.org/wiki/Observability) and evaluations can help developers monitor their applications and rapidly answer these types of questions with confidence.
 
@@ -72,11 +72,11 @@ There are several common characteristics of LLM applications that this orchestra
 * **[Persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/):** The application needs to maintain [short-term and / or long-term memory](https://langchain-ai.github.io/langgraph/concepts/memory/).
 * **[Human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/):** The application needs human interaction, e.g., pausing, reviewing, editing, approving certain steps.
 
-The recommended way to do orchestration for these complex applications is [LangGraph](https://langchain-ai.github.io/langgraph/concepts/high_level/).
+The recommended way to orchestrate components for complex applications is [LangGraph](https://langchain-ai.github.io/langgraph/concepts/high_level/).
 LangGraph is a library that gives developers a high degree of control by expressing the flow of the application as a set of nodes and edges.
 LangGraph comes with built-in support for [persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/), [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/), [memory](https://langchain-ai.github.io/langgraph/concepts/memory/), and other features.
-It's particularly well suited for building [agents](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/) or [multi-agent](https://langchain-ai.github.io/langgraph/concepts/multi_agent/) applications.
-Importantly, individual LangChain components can be used within LangGraph nodes, but you can also use LangGraph **without** using LangChain components.
+It's particularly well suited for building [agents](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/) or [multi-agent](https://langchain-ai.github.io/langgraph/concepts/multi_agent/) applications.
+Importantly, individual LangChain components can be used as LangGraph nodes, but you can also use LangGraph **without** using LangChain components.
 
 :::info[Further reading]
 
@@ -87,7 +87,7 @@ Have a look at our free course, [Introduction to LangGraph](https://academy.lang
 ## Observability and evaluation
 
 The pace of AI application development is often rate-limited by high-quality evaluations because there is a paradox of choice.
-Developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost.
+Developers often wonder how to engineer their prompt or which choose LLMs which best balances accuracy, latency, and cost.
 High quality tracing and evaluations can help you rapidly answer these types of questions with confidence.
 [LangSmith](https://docs.smith.langchain.com/) is our platform that supports observability and evaluation for AI applications. See our conceptual guides on [evaluations](https://docs.smith.langchain.com/concepts/evaluation) and [tracing](https://docs.smith.langchain.com/concepts/tracing) for more details.

From 0836843c92a639897e12f96eecd2b69adc635546 Mon Sep 17 00:00:00 2001
From: KW
Date: Wed, 30 Oct 2024 13:09:06 +0800
Subject: [PATCH 02/12] Updated language and link a relevant page for users to learn how to define schemas

---
 docs/docs/concepts/chat_models.mdx   | 12 ++++++------
 docs/docs/concepts/why_langchain.mdx |  3 ++-
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/docs/docs/concepts/chat_models.mdx b/docs/docs/concepts/chat_models.mdx
index 1c264fcac1654..1c7f6d132da57 100644
--- a/docs/docs/concepts/chat_models.mdx
+++ b/docs/docs/concepts/chat_models.mdx
@@ -2,7 +2,7 @@
 
 ## Overview
 
-Large Language Models (LLMs) are advanced machine learning models that excel in a wide range of language-related tasks such as text generation, translation, summarization, question answering, and more, without needing task-specific tuning for every scenario.
+Large Language Models (LLMs) are advanced machine learning models that excel in a wide range of language-related tasks such as text generation, translation, summarization, question answering, and more, without needing task-specific fine tuning for every scenario.
 
 Modern LLMs are typically accessed through a chat model interface that takes a list of [messages](/docs/concepts/messages) as input and returns a [message](/docs/concepts/messages) as output.
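For readers skimming the hunk above, a minimal sketch may help make the message-in, message-out interface concrete. This is an illustration, not part of the patch: it assumes `langchain-openai` is installed and `OPENAI_API_KEY` is set, and the model name is an arbitrary example.

```python
# A minimal sketch of the chat model interface described above:
# a list of messages goes in, a single AIMessage comes back.
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # illustrative model name

messages = [
    SystemMessage(content="You are a concise assistant."),
    HumanMessage(content="Summarize what a chat model does in one sentence."),
]

response = model.invoke(messages)  # returns an AIMessage
print(response.content)
```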
 Models that do **not** include the prefix "Chat" in their name or include "LLM" as a suffix in their name typically refer to older models that do not follow the chat model interface and instead use an interface that takes a string as input and returns a string as output.
 
 ## Interface
 
-LangChain chat models implement the [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface. Because [BaseChatModel] also implements the [Runnable Interface](/docs/concepts/runnables), chat models support a [standard streaming interface](/docs/concepts/streaming), [async programming](/docs/concepts/async), optimized [batching](/docs/concepts/runnables#batch), and more. Please see the [Runnable Interface](/docs/concepts/runnables) for more details.
+LangChain chat models implement the [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface. Because [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) also implements the [Runnable Interface](/docs/concepts/runnables), chat models support a [standard streaming interface](/docs/concepts/streaming), [async programming](/docs/concepts/async), optimized [batching](/docs/concepts/runnables#batch), and more. Please see the [Runnable Interface](/docs/concepts/runnables) for more details.
 
 Many of the key methods of chat models operate on [messages](/docs/concepts/messages) as input and return messages as output.
 
@@ -85,7 +85,7 @@ Many chat models have standardized parameters that can be used to configure the
 | Parameter      | Description |
 |----------------|-------------|
 | `model`        | The name or identifier of the specific AI model you want to use (e.g., `"gpt-3.5-turbo"` or `"gpt-4"`). |
-| `temperature`  | Controls the randomness of the model's output. A higher value (e.g., 1.0) makes responses more creative, while a lower value (e.g., 0.1) makes them more deterministic and focused. |
+| `temperature`  | Controls the randomness of the model's output. A higher value (e.g., 1.0) makes responses more creative, while a lower value (e.g., 0.0) makes them more deterministic and focused. |
 | `timeout`      | The maximum time (in seconds) to wait for a response from the model before canceling the request. Ensures the request doesn’t hang indefinitely. |
 | `max_tokens`   | Limits the total number of tokens (words and punctuation) in the response. This controls how long the output can be. |
 | `stop`         | Specifies stop sequences that indicate when the model should stop generating tokens. For example, you might use specific strings to signal the end of a response. |
 
@@ -97,9 +97,9 @@
 Some important things to note:
 - Standard parameters only apply to model providers that expose parameters with the intended functionality. For example, some providers do not expose a configuration for maximum output tokens, so max_tokens can't be supported on these.
-- Standard params are currently only enforced on integrations that have their own integration packages (e.g. `langchain-openai`, `langchain-anthropic`, etc.), they're not enforced on models in ``langchain-community``.
+- Standard parameters are currently only enforced on integrations that have their own integration packages (e.g. `langchain-openai`, `langchain-anthropic`, etc.), they're not enforced on models in `langchain-community`.
 
-ChatModels also accept other parameters that are specific to that integration. To find all the parameters supported by a ChatModel head to the [API reference](https://python.langchain.com/api_reference/) for that model.
+Chat models also accept other parameters that are specific to that integration. To find all the parameters supported by a chat model, head to the respective [API reference](https://python.langchain.com/api_reference/) for that model.
 
 ## Tool calling
 
@@ -150,7 +150,7 @@ An alternative approach is to use semantic caching, where you cache responses ba
 A semantic cache introduces a dependency on another model on the critical path of your application (e.g., the semantic cache may rely on an [embedding model](/docs/concepts/embedding_models) to convert text to a vector representation), and it's not guaranteed to capture the meaning of the input accurately.
 
-However, there might be situations where caching chat model responses is beneficial. For example, if you have a chat model that is used to answer frequently asked questions, caching responses can help reduce the load on the model provider and improve response times.
+However, there might be situations where caching chat model responses is beneficial. For example, if you have a chat model that is used to answer frequently asked questions, caching responses can help reduce the load on the model provider, reduce costs, and improve response times.
 
 Please see the [how to cache chat model responses](/docs/how_to/#chat-model-caching) guide for more details.

diff --git a/docs/docs/concepts/why_langchain.mdx b/docs/docs/concepts/why_langchain.mdx
index da7a7e9717840..3adcc3529f1b5 100644
--- a/docs/docs/concepts/why_langchain.mdx
+++ b/docs/docs/concepts/why_langchain.mdx
@@ -29,7 +29,7 @@ As an example, all [chat models](/docs/concepts/chat_models/) implement the [Bas
 This provides a standard way to interact with chat models, supporting important but often provider-specific features like [tool calling](/docs/concepts/tool_calling/) and [structured outputs](/docs/concepts/structured_outputs/).
 
-### Example: chat models
+### Example: Chat models
 
 Many [model providers](/docs/concepts/chat_models/) support [tool calling](/docs/concepts/tool_calling/), a critical feature for many applications (e.g., [agents](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/)), which allows a developer to request model responses that match a particular schema.
 The APIs for each provider differ.
 
@@ -45,6 +45,7 @@ model_with_tools = model.bind_tools(tools)
 ```
 
 Similarly, getting models to produce [structured outputs](/docs/concepts/structured_outputs/) is an extremely common use case. Providers support different approaches for this, including [JSON mode or tool calling](https://platform.openai.com/docs/guides/structured-outputs), with different APIs.
 LangChain's [chat model](/docs/concepts/chat_models/) interface provides a common way to produce structured outputs using the `with_structured_output()` method:
+Learn how to define schemas [here](docs/how_to/structured_output/)
 
 ```python
 # Define schema

From 2097bdcf2a86d917f226147ee48f8a285004461e Mon Sep 17 00:00:00 2001
From: KW
Date: Wed, 30 Oct 2024 13:13:37 +0800
Subject: [PATCH 03/12] added "systemmessage" link at the top and made language smoother

---
 docs/docs/concepts/messages.mdx | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/docs/concepts/messages.mdx b/docs/docs/concepts/messages.mdx
index a12b428719591..1eef1fb72a38e 100644
--- a/docs/docs/concepts/messages.mdx
+++ b/docs/docs/concepts/messages.mdx
@@ -8,7 +8,7 @@
 
 Messages are the unit of communication in [chat models](/docs/concepts/chat_models). They are used to represent the input and output of a chat model, as well as any additional context or metadata that may be associated with a conversation.
 
-Each message has a **role** (e.g., "user", "assistant"), **content** (e.g., text, multimodal data), and additional metadata that can vary depending on the chat model provider.
+Each message has a **role** (e.g., "user", "assistant") and **content** (e.g., text, multimodal data) with additional metadata that varies depending on the chat model provider.
 
 LangChain provides a unified message format that can be used across chat models, allowing users to work with different chat models without worrying about the specific details of the message format used by each model provider.
 
@@ -39,6 +39,7 @@ The content of a message text or a list of dictionaries representing [multimodal
 Currently, most chat models support text as the primary content type, with some models also supporting multimodal data. However, support for multimodal data is still limited across most chat model providers.
 
 For more information see:
+* [SystemMessage](#systemmessage) -- for content which should be passed to direct the conversation
 * [HumanMessage](#humanmessage) -- for content in the input from the user.
 * [AIMessage](#aimessage) -- for content in the response from the model.
 * [Multimodality](/docs/concepts/multimodality) -- for more information on multimodal content.

From 5c853c11c3c15bad9233d7e6f14ed3117bbb7e53 Mon Sep 17 00:00:00 2001
From: KW
Date: Wed, 30 Oct 2024 13:15:05 +0800
Subject: [PATCH 04/12] Minor Language Update

---
 docs/docs/concepts/chat_history.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/concepts/chat_history.mdx b/docs/docs/concepts/chat_history.mdx
index b22862512a873..0d393bda8687a 100644
--- a/docs/docs/concepts/chat_history.mdx
+++ b/docs/docs/concepts/chat_history.mdx
@@ -17,7 +17,7 @@ Most conversations start with a **system message** that sets the context for the
 The **assistant** may respond directly to the user or if configured with tools request that a [tool](/docs/concepts/tool_calling) be invoked to perform a specific task.
 
-So a full conversation often involves a combination of two patterns of alternating messages:
+A full conversation often involves a combination of two patterns of alternating messages:
 1. The **user** and the **assistant** representing a back-and-forth conversation.
 2. The **assistant** and **tool messages** representing an ["agentic" workflow](/docs/concepts/agents) where the assistant is invoking tools to perform specific tasks.
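To make the two alternating patterns in the hunk above concrete, here is a hypothetical transcript built from LangChain's message classes. The tool name, arguments, and call id are invented for illustration and do not come from the patch.

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    ToolMessage,
)

chat_history = [
    SystemMessage(content="You are a helpful weather assistant."),
    # Pattern 1: the user and the assistant in a back-and-forth conversation.
    HumanMessage(content="What's the weather in Paris right now?"),
    # Pattern 2: the assistant requests a tool invocation ...
    AIMessage(
        content="",
        tool_calls=[{"name": "get_weather", "args": {"city": "Paris"}, "id": "call_1"}],
    ),
    # ... and a tool message carries the result back to the assistant.
    ToolMessage(content="18°C and sunny", tool_call_id="call_1"),
    AIMessage(content="It's currently 18°C and sunny in Paris."),
]
```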
From 8a2f93cf0b447dc66de4ba3de568f70bf14cbddd Mon Sep 17 00:00:00 2001
From: KW
Date: Wed, 30 Oct 2024 13:24:56 +0800
Subject: [PATCH 05/12] Updated Language

---
 docs/docs/concepts/structured_outputs.mdx | 4 ++--
 docs/docs/concepts/tools.mdx              | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/docs/concepts/structured_outputs.mdx b/docs/docs/concepts/structured_outputs.mdx
index a334ecc1276f4..dad1c1a49cd89 100644
--- a/docs/docs/concepts/structured_outputs.mdx
+++ b/docs/docs/concepts/structured_outputs.mdx
@@ -119,11 +119,11 @@ json_object = json.loads(ai_msg.content)
 
 There are a few challenges when producing structured output with the above methods:
 
-(1) If using tool calling, tool call arguments needs to be parsed from a dictionary back to the original schema.
+(1) When tool calling is used, tool call arguments need to be parsed from a dictionary back to the original schema.
 
 (2) In addition, the model needs to be instructed to *always* use the tool when we want to enforce structured output, which is a provider specific setting.
 
-(3) If using JSON mode, the output needs to be parsed into a JSON object.
+(3) When JSON mode is used, the output needs to be parsed into a JSON object.
 
 With these challenges in mind, LangChain provides a helper function (`with_structured_output()`) to streamline the process.

diff --git a/docs/docs/concepts/tools.mdx b/docs/docs/concepts/tools.mdx
index 5c079808bd65f..86a5c5a33383b 100644
--- a/docs/docs/concepts/tools.mdx
+++ b/docs/docs/concepts/tools.mdx
@@ -6,7 +6,7 @@
 
 ## Overview
 
-The **tool** abstraction in LangChain associates a python **function** with a **schema** that defines the function's **name**, **description** and **input**.
+The **tool** abstraction in LangChain associates a Python **function** with a **schema** that defines the function's **name**, **description** and **expected arguments**.
 
 **Tools** can be passed to [chat models](/docs/concepts/chat_models) that support [tool calling](/docs/concepts/tool_calling) allowing the model to request the execution of a specific function with specific inputs.
 
@@ -14,7 +14,7 @@
 
 - Tools are a way to encapsulate a function and its schema in a way that can be passed to a chat model.
 - Create tools using the [@tool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html) decorator, which simplifies the process of tool creation, supporting the following:
-  - Automatically infer the tool's **name**, **description** and **inputs**, while also supporting customization.
+  - Automatically infer the tool's **name**, **description** and **expected arguments**, while also supporting customization.
   - Defining tools that return **artifacts** (e.g. images, dataframes, etc.)
   - Hiding input arguments from the schema (and hence from the model) using **injected tool arguments**.
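A short sketch of the `@tool` decorator this hunk describes: the function name, docstring, and type hints are inferred as the tool's name, description, and expected arguments. The `multiply` example is illustrative and not taken from the patch.

```python
from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# The schema is inferred from the function definition.
print(multiply.name)         # multiply
print(multiply.description)  # Multiply two integers.
print(multiply.args)         # {'a': {...}, 'b': {...}}

# Tools are Runnables, so they can also be invoked directly.
print(multiply.invoke({"a": 3, "b": 4}))  # 12
```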
From ce1cc2b7b8d4ebaa73e9fd3807ea293449e2c3c9 Mon Sep 17 00:00:00 2001 From: KW Date: Wed, 30 Oct 2024 13:38:58 +0800 Subject: [PATCH 06/12] Updated Language and Shorten sentences --- docs/docs/concepts/lcel.mdx | 10 +++++----- docs/docs/concepts/runnables.mdx | 8 ++++---- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/docs/docs/concepts/lcel.mdx b/docs/docs/concepts/lcel.mdx index 32da281ec1c2e..4c7063a0c291e 100644 --- a/docs/docs/concepts/lcel.mdx +++ b/docs/docs/concepts/lcel.mdx @@ -6,7 +6,7 @@ The **L**ang**C**hain **E**xpression **L**anguage (LCEL) takes a [declarative](https://en.wikipedia.org/wiki/Declarative_programming) approach to building new [Runnables](/docs/concepts/runnables) from existing Runnables. -This means that you describe what you want to happen, rather than how you want it to happen, allowing LangChain to optimize the run-time execution of the chains. +This means that you describe what *should* happen, rather than *how* it should happen, allowing LangChain to optimize the run-time execution of the chains. We often refer to a `Runnable` created using LCEL as a "chain". It's important to remember that a "chain" is `Runnable` and it implements the full [Runnable Interface](/docs/concepts/runnables). @@ -20,9 +20,9 @@ We often refer to a `Runnable` created using LCEL as a "chain". It's important t LangChain optimizes the run-time execution of chains built with LCEL in a number of ways: -- **Optimize parallel execution**: Run Runnables in parallel using [RunnableParallel](#RunnableParallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables#batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially. -- **Guarantee Async support**: Any chain built with LCEL can be run asynchronously using the [Runnable Async API](/docs/concepts/runnables#async-api). This can be useful when running chains in a server environment where you want to handle large number of requests concurrently. -- **Simplify streaming**: LCEL chains can be streamed, allowing for incremental output as the chain is executed. LangChain can optimize the streaming of the output to minimize the time-to-first-token(time elapsed until the first chunk of output from a [chat model](/docs/concepts/chat_models) or [llm](/docs/concepts/text_llms) comes out). +- **Optimized parallel execution**: Run Runnables in parallel using [RunnableParallel](#RunnableParallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables#batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially. +- **Guaranteed Async support**: Any chain built with LCEL can be run asynchronously using the [Runnable Async API](/docs/concepts/runnables#async-api). This can be useful when running chains in a server environment where you want to handle large number of requests concurrently. +- **Simplified streaming**: LCEL chains can be streamed, allowing for incremental output as the chain is executed. LangChain can optimize the streaming of the output to minimize the time-to-first-token(time elapsed until the first chunk of output from a [chat model](/docs/concepts/chat_models) or [llm](/docs/concepts/text_llms) comes out). 
 Other benefits include:
 
@@ -38,7 +38,7 @@ LCEL is an [orchestration solution](https://en.wikipedia.org/wiki/Orchestration_
 While we have seen users run chains with hundreds of steps in production, we generally recommend using LCEL for simpler orchestration tasks. When the application requires complex state management, branching, cycles or multiple agents, we recommend that users take advantage of [LangGraph](/docs/concepts/architecture#langgraph).
 
-In LangGraph, users define graphs that specify the flow of the application. This allows users to keep using LCEL within individual nodes when LCEL is needed, while making it easy to define complex orchestration logic that is more readable and maintainable.
+In LangGraph, users define graphs that specify the application's flow. This allows users to keep using LCEL within individual nodes when LCEL is needed, while making it easy to define complex orchestration logic that is more readable and maintainable.
 
 Here are some guidelines:

diff --git a/docs/docs/concepts/runnables.mdx b/docs/docs/concepts/runnables.mdx
index 4a383e623a3e5..96ed74119cfe2 100644
--- a/docs/docs/concepts/runnables.mdx
+++ b/docs/docs/concepts/runnables.mdx
@@ -1,6 +1,6 @@
 # Runnable interface
 
-The Runnable interface is foundational for working with LangChain components, and it's implemented across many of them, such as [language models](/docs/concepts/chat_models), [output parsers](/docs/concepts/output_parsers), [retrievers](/docs/concepts/retrievers), [compiled LangGraph graphs](
+The Runnable interface is the foundation for working with LangChain components, and it's implemented across many of them, such as [language models](/docs/concepts/chat_models), [output parsers](/docs/concepts/output_parsers), [retrievers](/docs/concepts/retrievers), [compiled LangGraph graphs](
 https://langchain-ai.github.io/langgraph/concepts/low_level/#compiling-your-graph) and more.
 
 This guide covers the main concepts and methods of the Runnable interface, which allows developers to interact with various LangChain components in a consistent and predictable manner.
 
@@ -42,7 +42,7 @@ Some Runnables may provide their own implementations of `batch` and `batch_as_co
 rely on a `batch` API provided by a model provider).
 
 :::note
-The async versions of `abatch` and `abatch_as_completed` these rely on asyncio's [gather](https://docs.python.org/3/library/asyncio-task.html#asyncio.gather) and [as_completed](https://docs.python.org/3/library/asyncio-task.html#asyncio.as_completed) functions to run the `ainvoke` method in parallel.
+The async versions of `abatch` and `abatch_as_completed` rely on asyncio's [gather](https://docs.python.org/3/library/asyncio-task.html#asyncio.gather) and [as_completed](https://docs.python.org/3/library/asyncio-task.html#asyncio.as_completed) functions to run the `ainvoke` method in parallel.
 :::
 
 :::tip
 
@@ -58,7 +58,7 @@ Runnables expose an asynchronous API, allowing them to be called using the `awai
 Please refer to the [Async Programming with LangChain](/docs/concepts/async) guide for more details.
 
-## Streaming apis
+## Streaming APIs
 
 Streaming is critical in making applications based on LLMs feel responsive to end-users.
 
@@ -101,7 +101,7 @@ This is an advanced feature that is unnecessary for most users. You should proba
 skip this section unless you have a specific need to inspect the schema of a Runnable.
 :::
 
-In some advanced uses, you may want to programmatically **inspect** the Runnable and determine what input and output types the Runnable expects and produces.
+In more advanced use cases, you may want to programmatically **inspect** the Runnable and determine what input and output types the Runnable expects and produces.
 
 The Runnable interface provides methods to get the [JSON Schema](https://json-schema.org/) of the input and output types of a Runnable, as well as [Pydantic schemas](https://docs.pydantic.dev/latest/) for the input and output types.

From d56ad43bba808e363566f821b547c7b690db84a9 Mon Sep 17 00:00:00 2001
From: KW
Date: Wed, 30 Oct 2024 13:40:15 +0800
Subject: [PATCH 07/12] Language Updates

---
 docs/docs/concepts/document_loaders.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/concepts/document_loaders.mdx b/docs/docs/concepts/document_loaders.mdx
index d9b1f13babd00..c38e81610e35d 100644
--- a/docs/docs/concepts/document_loaders.mdx
+++ b/docs/docs/concepts/document_loaders.mdx
@@ -29,7 +29,7 @@ loader = CSVLoader(
 data = loader.load()
 ```
 
-or if working with large datasets, you can use the `.lazy_load` method:
+When working with large datasets, you can use the `.lazy_load` method:
 
 ```python
 for document in loader.lazy_load():

From 35eac358ba6b707d9dfb10720ed8ed94596921ba Mon Sep 17 00:00:00 2001
From: KW
Date: Wed, 30 Oct 2024 13:46:57 +0800
Subject: [PATCH 08/12] Updated Language

---
 docs/docs/concepts/retrieval.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/docs/concepts/retrieval.mdx b/docs/docs/concepts/retrieval.mdx
index a69fb8d4f9d54..0ded476b80016 100644
--- a/docs/docs/concepts/retrieval.mdx
+++ b/docs/docs/concepts/retrieval.mdx
@@ -27,7 +27,7 @@ These systems accommodate various data formats:
 - Unstructured text (e.g., documents) is often stored in vector stores or lexical search indexes.
 - Structured data is typically housed in relational or graph databases with defined schemas.
 
-Despite this diversity in data formats, modern AI applications increasingly aim to make all types of data accessible through natural language interfaces.
+Despite the growing diversity in data formats, modern AI applications increasingly aim to make all types of data accessible through natural language interfaces.
 Models play a crucial role in this process by translating natural language queries into formats compatible with the underlying search index or database.
 This translation enables more intuitive and flexible interactions with complex data structures.
 
@@ -41,7 +41,7 @@
 ## Query analysis
 
-While users typically prefer to interact with retrieval systems using natural language, retrieval systems can specific query syntax or benefit from particular keywords.
+While users typically prefer to interact with retrieval systems using natural language, these systems may require specific query syntax or benefit from certain keywords.
 Query analysis serves as a bridge between raw user input and optimized search queries. Some common applications of query analysis include:
 
 1. **Query Re-writing**: Queries can be re-written or expanded to improve semantic or lexical searches.
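One hedged way to implement the query re-writing step mentioned just above is to pipe a prompt into a chat model with LCEL. The prompt wording and model name here are assumptions for illustration, not part of the patch.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Ask the model to rewrite a raw user question as a cleaner search query.
rewrite_prompt = ChatPromptTemplate.from_template(
    "Rewrite the user's question as a concise search query.\n\nQuestion: {question}"
)

rewriter = rewrite_prompt | ChatOpenAI(model="gpt-4o-mini", temperature=0) | StrOutputParser()

print(rewriter.invoke({"question": "umm, how do i make my vector store searches less slow??"}))
```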
From 4c1440eda15ae8c32a214230bcdff521cd66d39f Mon Sep 17 00:00:00 2001
From: Eugene Yurtsev
Date: Thu, 31 Oct 2024 16:01:19 -0400
Subject: [PATCH 09/12] Update docs/docs/concepts/why_langchain.mdx

---
 docs/docs/concepts/why_langchain.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/concepts/why_langchain.mdx b/docs/docs/concepts/why_langchain.mdx
index 3adcc3529f1b5..a095b6fa08007 100644
--- a/docs/docs/concepts/why_langchain.mdx
+++ b/docs/docs/concepts/why_langchain.mdx
@@ -29,7 +29,7 @@ As an example, all [chat models](/docs/concepts/chat_models/) implement the [Bas
 This provides a standard way to interact with chat models, supporting important but often provider-specific features like [tool calling](/docs/concepts/tool_calling/) and [structured outputs](/docs/concepts/structured_outputs/).
 
-### Example: Chat models
+### Example: chat models
 
 Many [model providers](/docs/concepts/chat_models/) support [tool calling](/docs/concepts/tool_calling/), a critical feature for many applications (e.g., [agents](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/)), which allows a developer to request model responses that match a particular schema.
 The APIs for each provider differ.

From 690fbde2780338edae83c57ccee590051af318f8 Mon Sep 17 00:00:00 2001
From: Eugene Yurtsev
Date: Thu, 31 Oct 2024 16:02:19 -0400
Subject: [PATCH 10/12] Update docs/docs/concepts/why_langchain.mdx

---
 docs/docs/concepts/why_langchain.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/concepts/why_langchain.mdx b/docs/docs/concepts/why_langchain.mdx
index a095b6fa08007..936ef8e477aa1 100644
--- a/docs/docs/concepts/why_langchain.mdx
+++ b/docs/docs/concepts/why_langchain.mdx
@@ -88,7 +88,7 @@ Have a look at our free course, [Introduction to LangGraph](https://academy.lang
 ## Observability and evaluation
 
 The pace of AI application development is often rate-limited by high-quality evaluations because there is a paradox of choice.
-Developers often wonder how to engineer their prompt or which choose LLMs which best balances accuracy, latency, and cost.
+Developers often wonder how to engineer their prompt or which LLMs best balances accuracy, latency, and cost.
 High quality tracing and evaluations can help you rapidly answer these types of questions with confidence.
 [LangSmith](https://docs.smith.langchain.com/) is our platform that supports observability and evaluation for AI applications. See our conceptual guides on [evaluations](https://docs.smith.langchain.com/concepts/evaluation) and [tracing](https://docs.smith.langchain.com/concepts/tracing) for more details.

From 3735d056fdc158b675b588c50a1f1b7125096721 Mon Sep 17 00:00:00 2001
From: Eugene Yurtsev
Date: Thu, 31 Oct 2024 16:02:35 -0400
Subject: [PATCH 11/12] Update docs/docs/concepts/why_langchain.mdx

---
 docs/docs/concepts/why_langchain.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/concepts/why_langchain.mdx b/docs/docs/concepts/why_langchain.mdx
index 936ef8e477aa1..6d1d5d784f4ec 100644
--- a/docs/docs/concepts/why_langchain.mdx
+++ b/docs/docs/concepts/why_langchain.mdx
@@ -88,7 +88,7 @@ Have a look at our free course, [Introduction to LangGraph](https://academy.lang
 ## Observability and evaluation
 
 The pace of AI application development is often rate-limited by high-quality evaluations because there is a paradox of choice.
-Developers often wonder how to engineer their prompt or which LLMs best balances accuracy, latency, and cost.
+Developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost.
 High quality tracing and evaluations can help you rapidly answer these types of questions with confidence.
 [LangSmith](https://docs.smith.langchain.com/) is our platform that supports observability and evaluation for AI applications. See our conceptual guides on [evaluations](https://docs.smith.langchain.com/concepts/evaluation) and [tracing](https://docs.smith.langchain.com/concepts/tracing) for more details.

From 0e4593a3ff62ec6dc13d430461752bcb78a139cc Mon Sep 17 00:00:00 2001
From: Eugene Yurtsev
Date: Wed, 13 Nov 2024 10:42:05 -0500
Subject: [PATCH 12/12] remove bad link

---
 docs/docs/concepts/why_langchain.mdx | 1 -
 1 file changed, 1 deletion(-)

diff --git a/docs/docs/concepts/why_langchain.mdx b/docs/docs/concepts/why_langchain.mdx
index b7aa150847c51..584a080c9566b 100644
--- a/docs/docs/concepts/why_langchain.mdx
+++ b/docs/docs/concepts/why_langchain.mdx
@@ -45,7 +45,6 @@ model_with_tools = model.bind_tools(tools)
 ```
 
 Similarly, getting models to produce [structured outputs](/docs/concepts/structured_outputs/) is an extremely common use case. Providers support different approaches for this, including [JSON mode or tool calling](https://platform.openai.com/docs/guides/structured-outputs), with different APIs.
 LangChain's [chat model](/docs/concepts/chat_models/) interface provides a common way to produce structured outputs using the `with_structured_output()` method:
-Learn how to define schemas [here](docs/how_to/structured_output/)
 
 ```python
 # Define schema
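The hunk above ends just as the docs' `with_structured_output()` snippet begins ("# Define schema"). For context, a hedged sketch of what such a snippet typically looks like follows; the Pydantic schema and model name are illustrative assumptions, not the exact code from the file.

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

# Define schema: the model's reply must conform to this Pydantic class.
class ResponseFormatter(BaseModel):
    """Always use this schema to answer the user."""
    answer: str = Field(description="The answer to the user's question")
    followup_question: str = Field(description="A follow-up question the user could ask")

model = ChatOpenAI(model="gpt-4o-mini")  # illustrative model name
structured_model = model.with_structured_output(ResponseFormatter)

result = structured_model.invoke("What is the powerhouse of the cell?")
print(result.answer)
```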