diff --git a/docs/core_docs/docs/integrations/text_embedding/baidu_qianfan.mdx b/docs/core_docs/docs/integrations/text_embedding/baidu_qianfan.mdx new file mode 100644 index 000000000000..341310c6d9f5 --- /dev/null +++ b/docs/core_docs/docs/integrations/text_embedding/baidu_qianfan.mdx @@ -0,0 +1,32 @@ +--- +sidebar_class_name: node-only +--- + +# Baidu Qianfan + +The `BaiduQianfanEmbeddings` class uses the Baidu Qianfan API to generate embeddings for a given text. + +## Setup + +Official Website: https://cloud.baidu.com/doc/WENXINWORKSHOP/s/alj562vvu + +An API key is required to use this embedding model. You can get one by registering at https://cloud.baidu.com/doc/WENXINWORKSHOP/s/alj562vvu. + +Please set the acquired API key as an environment variable named BAIDU_API_KEY, and set your secret key as an environment variable named BAIDU_SECRET_KEY. + +Then, you'll need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package: + +import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx"; + +<IntegrationInstallTooltip></IntegrationInstallTooltip> + +```bash npm2yarn +npm install @langchain/community +``` + +## Usage + +import CodeBlock from "@theme/CodeBlock"; +import BaiduQianFanExample from "@examples/embeddings/baidu_qianfan.ts"; + +<CodeBlock language="typescript">{BaiduQianFanExample}</CodeBlock> diff --git a/docs/core_docs/docs/modules/data_connection/retrievers/index.mdx b/docs/core_docs/docs/modules/data_connection/retrievers/index.mdx index e55da71a3303..a9734392f90b 100644 --- a/docs/core_docs/docs/modules/data_connection/retrievers/index.mdx +++ b/docs/core_docs/docs/modules/data_connection/retrievers/index.mdx @@ -29,7 +29,7 @@ LangChain provides several advanced retrieval types. A full list is below, along | [Vectorstore](/docs/modules/data_connection/retrievers/vectorstore) | Vectorstore | No | If you are just getting started and looking for something quick and easy. | This is the simplest method and the one that is easiest to get started with. 
It involves creating embeddings for each piece of text. | | [ParentDocument](/docs/modules/data_connection/retrievers/parent-document-retriever) | Vectorstore + Document Store | No | If your pages have lots of smaller pieces of distinct information that are best indexed by themselves, but best retrieved all together. | This involves indexing multiple chunks for each document. Then you find the chunks that are most similar in embedding space, but you retrieve the whole parent document and return that (rather than individual chunks). | | [Multi Vector](/docs/modules/data_connection/retrievers/multi-vector-retriever) | Vectorstore + Document Store | Sometimes during indexing | If you are able to extract information from documents that you think is more relevant to index than the text itself. | This involves creating multiple vectors for each document. Each vector could be created in a myriad of ways - examples include summaries of the text and hypothetical questions. | -| [Self Query](/docs/modules/data_connection/retrievers/self_query/) | Vectorstore | Yes | If users are asking questions that are better answered by fetching documents based on metadata rather than similarity with the text. | This uses an LLM to transform user input into two things: (1) a string to look up semantically, (2) a metadata filer to go along with it. This is useful because oftentimes questions are about the METADATA of documents (not the content itself). | +| [Self Query](/docs/modules/data_connection/retrievers/self_query/) | Vectorstore | Yes | If users are asking questions that are better answered by fetching documents based on metadata rather than similarity with the text. | This uses an LLM to transform user input into two things: (1) a string to look up semantically, (2) a metadata filter to go along with it. This is useful because oftentimes questions are about the METADATA of documents (not the content itself). 
| | [Contextual Compression](/docs/modules/data_connection/retrievers/contextual_compression) | Any | Sometimes | If you are finding that your retrieved documents contain too much irrelevant information and are distracting the LLM. | This puts a post-processing step on top of another retriever and extracts only the most relevant information from retrieved documents. This can be done with embeddings or an LLM. | | [Time-Weighted Vectorstore](/docs/modules/data_connection/retrievers/time_weighted_vectorstore) | Vectorstore | No | If you have timestamps associated with your documents, and you want to retrieve the most recent ones | This fetches documents based on a combination of semantic similarity (as in normal vector retrieval) and recency (looking at timestamps of indexed documents) | | [Multi-Query Retriever](/docs/modules/data_connection/retrievers/multi-query-retriever) | Any | Yes | If users are asking questions that are complex and require multiple pieces of distinct information to respond | This uses an LLM to generate multiple queries from the original one. This is useful when the original query needs pieces of information about multiple topics to be properly answered. By generating multiple queries, we can then fetch documents for each of them. | diff --git a/docs/core_docs/docs/use_cases/graph/construction.ipynb b/docs/core_docs/docs/use_cases/graph/construction.ipynb new file mode 100644 index 000000000000..c7c5808ac99f --- /dev/null +++ b/docs/core_docs/docs/use_cases/graph/construction.ipynb @@ -0,0 +1,253 @@ +{ + "cells": [ + { + "cell_type": "raw", + "id": "5e61b0f2-15b9-4241-9ab5-ff0f3f732232", + "metadata": {}, + "source": [ + "---\n", + "sidebar_position: 1\n", + "---" + ] + }, + { + "cell_type": "markdown", + "id": "846ef4f4-ee38-4a42-a7d3-1a23826e4830", + "metadata": {}, + "source": [ + "# Constructing knowledge graphs\n", + "In this guide we'll go over the basic ways of constructing a knowledge graph based on unstructured text. 
The constructed graph can then be used as a knowledge base in a RAG application. At a high level, the steps of constructing a knowledge graph from text are:\n", + "\n", + "1. Extracting structured information from text: A model is used to extract structured graph information from text.\n", + "2. Storing into graph database: Storing the extracted structured graph information into a graph database enables downstream RAG applications." + ] + }, + { + "cell_type": "markdown", + "id": "26677b08", + "metadata": {}, + "source": [ + "## Setup\n", + "#### Install dependencies\n", + "\n", + "```{=mdx}\n", + "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n", + "import Npm2Yarn from \"@theme/Npm2Yarn\";\n", + "\n", + "<IntegrationInstallTooltip></IntegrationInstallTooltip>\n", + "\n", + "<Npm2Yarn>\n", + "  langchain @langchain/community @langchain/openai neo4j-driver zod\n", + "</Npm2Yarn>\n", + "```\n", + "\n", + "#### Set environment variables\n", + "\n", + "We'll use OpenAI in this example:\n", + "\n", + "```env\n", + "OPENAI_API_KEY=your-api-key\n", + "\n", + "# Optional, use LangSmith for best-in-class observability\n", + "LANGSMITH_API_KEY=your-api-key\n", + "LANGCHAIN_TRACING_V2=true\n", + "```\n", + "\n", + "Next, we need to define Neo4j credentials.\n", + "Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.\n", + "\n", + "```env\n", + "NEO4J_URI=\"bolt://localhost:7687\"\n", + "NEO4J_USERNAME=\"neo4j\"\n", + "NEO4J_PASSWORD=\"password\"\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "50fa4510-29b7-49b6-8496-5e86f694e81f", + "metadata": {}, + "source": [ + "The example below will create a connection with a Neo4j database."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4ee9ef7a-eef9-4289-b9fd-8fbc31041688", + "metadata": {}, + "outputs": [], + "source": [ + "import \"neo4j-driver\";\n", + "import { Neo4jGraph } from \"@langchain/community/graphs/neo4j_graph\";\n", + "\n", + "const url = Deno.env.get(\"NEO4J_URI\");\n", + "const username = Deno.env.get(\"NEO4J_USERNAME\");\n", + "const password = Deno.env.get(\"NEO4J_PASSWORD\");\n", + "const graph = await Neo4jGraph.initialize({ url, username, password });" + ] + }, + { + "cell_type": "markdown", + "id": "0cb0ea30-ca55-4f35-aad6-beb57453de66", + "metadata": {}, + "source": [ + "## LLM Graph Transformer\n", + "Extracting graph data from text enables the transformation of unstructured information into structured formats, facilitating deeper insights and more efficient navigation through complex relationships and patterns. The LLMGraphTransformer converts text documents into structured graph documents by leveraging an LLM to parse and categorize entities and their relationships. The choice of LLM significantly influences the output by determining the accuracy and nuance of the extracted graph data." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "e1a19424-6046-40c2-81d1-f3b88193a293", + "metadata": {}, + "outputs": [], + "source": [ + "import { ChatOpenAI } from \"@langchain/openai\";\n", + "import { LLMGraphTransformer } from \"@langchain/community/experimental/graph_transformers/llm\";\n", + "\n", + "const model = new ChatOpenAI({\n", + "  temperature: 0,\n", + "  modelName: \"gpt-4-turbo-preview\",\n", + "});\n", + "\n", + "const llmGraphTransformer = new LLMGraphTransformer({\n", + "  llm: model\n", + "});\n" + ] + }, + { + "cell_type": "markdown", + "id": "9c14084c-37a7-4a9c-a026-74e12961c781", + "metadata": {}, + "source": [ + "Now we can pass in example text and examine the results."
+ ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "bbfe0d8f-982e-46e6-88fb-8a4f0d850b07", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Nodes: 8\n", + "Relationships: 7\n" + ] + } + ], + "source": [ + "import { Document } from \"@langchain/core/documents\";\n", + "\n", + "let text = `\n", + "Marie Curie was a Polish and naturalised-French physicist and chemist who conducted pioneering research on radioactivity.\n", + "She was the first woman to win a Nobel Prize, the first person to win a Nobel Prize twice, and the only person to win a Nobel Prize in two scientific fields.\n", + "Her husband, Pierre Curie, was a co-winner of her first Nobel Prize, making them the first-ever married couple to win the Nobel Prize and launching the Curie family legacy of five Nobel Prizes.\n", + "She was, in 1906, the first woman to become a professor at the University of Paris.\n", + "`\n", + "\n", + "const result = await llmGraphTransformer.convertToGraphDocuments([\n", + "  new Document({ pageContent: text }),\n", + "]);\n", + "\n", + "console.log(`Nodes: ${result[0].nodes.length}`);\n", + "console.log(`Relationships: ${result[0].relationships.length}`);" + ] + }, + { + "cell_type": "markdown", + "id": "a8afbf13-05d0-4383-8050-f88b8c2f6fab", + "metadata": {}, + "source": [ + "Note that the graph construction process is non-deterministic, since we are using an LLM. Therefore, you might get slightly different results on each execution.\n", + "Examine the following image to better grasp the structure of the generated knowledge graph.\n", + "\n", + "![graph_construction1.png](../../../static/img/graph_construction1.png)\n", + "\n", + "Additionally, you have the flexibility to define specific types of nodes and relationships for extraction according to your requirements."
+ ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "6f92929f-74fb-4db2-b7e1-eb1e9d386a67", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Nodes: 6\n", + "Relationships: 4\n" + ] + } + ], + "source": [ + "const llmGraphTransformerFiltered = new LLMGraphTransformer({\n", + "  llm: model,\n", + "  allowedNodes: [\"PERSON\", \"COUNTRY\", \"ORGANIZATION\"],\n", + "  allowedRelationships: [\"NATIONALITY\", \"LOCATED_IN\", \"WORKED_AT\", \"SPOUSE\"],\n", + "  strictMode: false\n", + "});\n", + "\n", + "const result_filtered = await llmGraphTransformerFiltered.convertToGraphDocuments([\n", + "  new Document({ pageContent: text }),\n", + "]);\n", + "\n", + "console.log(`Nodes: ${result_filtered[0].nodes.length}`);\n", + "console.log(`Relationships: ${result_filtered[0].relationships.length}`);" + ] + }, + { + "cell_type": "markdown", + "id": "f66c6756-6efb-4b1e-9b5d-87ed914a5212", + "metadata": {}, + "source": [ + "For a better understanding of the generated graph, we can again visualize it.\n", + "\n", + "![graph_construction2.png](../../../static/img/graph_construction2.png)\n", + "\n", + "## Storing to graph database\n", + "The generated graph documents can be stored in a graph database using the `addGraphDocuments` method."
+ ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "8ef3e21d-f1c2-45e2-9511-4920d1cf6e7e", + "metadata": {}, + "outputs": [], + "source": [ + "await graph.addGraphDocuments(result_filtered)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e67382aa-7324-4983-b834-1fdd841cc92c", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Deno", + "language": "typescript", + "name": "deno" + }, + "language_info": { + "file_extension": ".ts", + "mimetype": "text/x.typescript", + "name": "typescript", + "nb_converter": "script", + "pygments_lexer": "typescript", + "version": "5.4.3" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/docs/core_docs/docs/use_cases/graph/index.ipynb b/docs/core_docs/docs/use_cases/graph/index.ipynb index 43d28138957f..1c5b57af94e8 100644 --- a/docs/core_docs/docs/use_cases/graph/index.ipynb +++ b/docs/core_docs/docs/use_cases/graph/index.ipynb @@ -32,8 +32,16 @@ "\n", "* [Prompting strategies](/docs/use_cases/graph/prompting): Advanced prompt engineering techniques.\n", "* [Mapping values](/docs/use_cases/graph/mapping): Techniques for mapping values from questions to database.\n", - "* [Semantic layer](/docs/use_cases/graph/semantic): Techniques for working implementing semantic layers." 
+ "* [Semantic layer](/docs/use_cases/graph/semantic): Techniques for implementing semantic layers.\n", + "* [Constructing graphs](/docs/use_cases/graph/construction): Techniques for constructing knowledge graphs.\n" ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } ], "metadata": { @@ -43,16 +51,12 @@ "name": "deno" }, "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", + "file_extension": ".ts", + "mimetype": "text/x.typescript", "name": "typescript", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.5" + "nb_converter": "script", + "pygments_lexer": "typescript", + "version": "5.4.3" } }, "nbformat": 4, diff --git a/docs/core_docs/docs/use_cases/graph/quickstart.ipynb b/docs/core_docs/docs/use_cases/graph/quickstart.ipynb index b0582fdc79bb..bea82cf95dcc 100644 --- a/docs/core_docs/docs/use_cases/graph/quickstart.ipynb +++ b/docs/core_docs/docs/use_cases/graph/quickstart.ipynb @@ -225,8 +225,16 @@ "\n", "* [Prompting strategies](/docs/use_cases/graph/prompting): Advanced prompt engineering techniques.\n", "* [Mapping values](/docs/use_cases/graph/mapping): Techniques for mapping values from questions to database.\n", - "* [Semantic layer](/docs/use_cases/graph/semantic): Techniques for working implementing semantic layers." 
+ "* [Semantic layer](/docs/use_cases/graph/semantic): Techniques for implementing semantic layers.\n", + "* [Constructing graphs](/docs/use_cases/graph/construction): Techniques for constructing knowledge graphs.\n" ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } ], "metadata": { @@ -241,7 +249,7 @@ "name": "typescript", "nb_converter": "script", "pygments_lexer": "typescript", - "version": "5.3.3" + "version": "5.4.3" } }, "nbformat": 4, diff --git a/docs/core_docs/static/img/graph_construction1.png b/docs/core_docs/static/img/graph_construction1.png new file mode 100644 index 000000000000..9639f751e6ca Binary files /dev/null and b/docs/core_docs/static/img/graph_construction1.png differ diff --git a/docs/core_docs/static/img/graph_construction2.png b/docs/core_docs/static/img/graph_construction2.png new file mode 100644 index 000000000000..4e34adc95bb6 Binary files /dev/null and b/docs/core_docs/static/img/graph_construction2.png differ diff --git a/examples/src/embeddings/baidu_qianfan.ts b/examples/src/embeddings/baidu_qianfan.ts new file mode 100644 index 000000000000..9a85ddc6d7b4 --- /dev/null +++ b/examples/src/embeddings/baidu_qianfan.ts @@ -0,0 +1,7 @@ +import { BaiduQianfanEmbeddings } from "@langchain/community/embeddings/baidu_qianfan"; + +const embeddings = new BaiduQianfanEmbeddings(); +const res = await embeddings.embedQuery( + "What would be a good company name for a company that makes colorful socks?" 
+); +console.log({ res }); diff --git a/langchain-core/package.json b/langchain-core/package.json index a6bfac17898f..7714ddc9f8f7 100644 --- a/langchain-core/package.json +++ b/langchain-core/package.json @@ -1,6 +1,6 @@ { "name": "@langchain/core", - "version": "0.1.54", + "version": "0.1.55", "description": "Core LangChain.js abstractions and schemas", "type": "module", "engines": { diff --git a/langchain/package.json b/langchain/package.json index 24d8306de772..b041adc02087 100644 --- a/langchain/package.json +++ b/langchain/package.json @@ -1,6 +1,6 @@ { "name": "langchain", - "version": "0.1.31", + "version": "0.1.32", "description": "Typescript bindings for langchain", "type": "module", "engines": { diff --git a/libs/langchain-anthropic/package.json b/libs/langchain-anthropic/package.json index 23d7a18a82d2..e0c5384fb27b 100644 --- a/libs/langchain-anthropic/package.json +++ b/libs/langchain-anthropic/package.json @@ -1,6 +1,6 @@ { "name": "@langchain/anthropic", - "version": "0.1.10", + "version": "0.1.12", "description": "Anthropic integrations for LangChain.js", "type": "module", "engines": { @@ -39,7 +39,7 @@ "author": "LangChain", "license": "MIT", "dependencies": { - "@anthropic-ai/sdk": "^0.17.2", + "@anthropic-ai/sdk": "^0.20.1", "@langchain/core": "~0.1.54", "fast-xml-parser": "^4.3.5", "zod": "^3.22.4", diff --git a/libs/langchain-anthropic/src/chat_models.ts b/libs/langchain-anthropic/src/chat_models.ts index d20a4ff7331e..6fb46b39d0b6 100644 --- a/libs/langchain-anthropic/src/chat_models.ts +++ b/libs/langchain-anthropic/src/chat_models.ts @@ -527,7 +527,7 @@ export class ChatAnthropicMessages< role, content: message.content, }; - } else if ("type" in message.content) { + } else { const contentBlocks = message.content.map((contentPart) => { if (contentPart.type === "image_url") { let source; @@ -546,6 +546,15 @@ export class ChatAnthropicMessages< type: "text" as const, // Explicitly setting the type as "text" text: contentPart.text, }; + } else 
if ( + contentPart.type === "tool_use" || + contentPart.type === "tool_result" + ) { + // TODO: Fix when SDK types are fixed + return { + ...contentPart, + // eslint-disable-next-line @typescript-eslint/no-explicit-any + } as any; } else { throw new Error("Unsupported message content format"); } @@ -554,8 +563,6 @@ export class ChatAnthropicMessages< role, content: contentBlocks, }; - } else { - throw new Error("Unsupported message content format"); } }); return { diff --git a/libs/langchain-anthropic/src/experimental/tests/tool_calling.int.test.ts b/libs/langchain-anthropic/src/experimental/tests/tool_calling.int.test.ts index d8acd086ae9a..f5c863c60a88 100644 --- a/libs/langchain-anthropic/src/experimental/tests/tool_calling.int.test.ts +++ b/libs/langchain-anthropic/src/experimental/tests/tool_calling.int.test.ts @@ -7,7 +7,7 @@ import { BaseMessageChunk, HumanMessage } from "@langchain/core/messages"; import { ChatPromptTemplate } from "@langchain/core/prompts"; import { ChatAnthropicTools } from "../tool_calling.js"; -test("Test ChatAnthropicTools", async () => { +test.skip("Test ChatAnthropicTools", async () => { const chat = new ChatAnthropicTools({ modelName: "claude-3-sonnet-20240229", maxRetries: 0, @@ -17,7 +17,7 @@ test("Test ChatAnthropicTools", async () => { console.log(JSON.stringify(res)); }); -test("Test ChatAnthropicTools streaming", async () => { +test.skip("Test ChatAnthropicTools streaming", async () => { const chat = new ChatAnthropicTools({ modelName: "claude-3-sonnet-20240229", maxRetries: 0, @@ -32,7 +32,7 @@ test("Test ChatAnthropicTools streaming", async () => { expect(chunks.length).toBeGreaterThan(1); }); -test("Test ChatAnthropicTools with tools", async () => { +test.skip("Test ChatAnthropicTools with tools", async () => { const chat = new ChatAnthropicTools({ modelName: "claude-3-sonnet-20240229", temperature: 0.1, @@ -71,7 +71,7 @@ test("Test ChatAnthropicTools with tools", async () => { ); }); -test("Test ChatAnthropicTools with a 
forced function call", async () => { +test.skip("Test ChatAnthropicTools with a forced function call", async () => { const chat = new ChatAnthropicTools({ modelName: "claude-3-sonnet-20240229", temperature: 0.1, @@ -117,7 +117,7 @@ test("Test ChatAnthropicTools with a forced function call", async () => { ); }); -test("ChatAnthropicTools with Zod schema", async () => { +test.skip("ChatAnthropicTools with Zod schema", async () => { const schema = z.object({ people: z.array( z.object({ @@ -168,7 +168,7 @@ test("ChatAnthropicTools with Zod schema", async () => { }); }); -test("ChatAnthropicTools with parallel tool calling", async () => { +test.skip("ChatAnthropicTools with parallel tool calling", async () => { const schema = z.object({ name: z.string().describe("The name of a person"), height: z.number().describe("The person's height"), @@ -215,7 +215,7 @@ test("ChatAnthropicTools with parallel tool calling", async () => { ); }); -test("Test ChatAnthropic withStructuredOutput", async () => { +test.skip("Test ChatAnthropic withStructuredOutput", async () => { const runnable = new ChatAnthropicTools({ modelName: "claude-3-sonnet-20240229", maxRetries: 0, @@ -235,7 +235,7 @@ test("Test ChatAnthropic withStructuredOutput", async () => { expect(res).toEqual({ name: "Alex", height: 5, hairColor: "blonde" }); }); -test("Test ChatAnthropic withStructuredOutput on a single array item", async () => { +test.skip("Test ChatAnthropic withStructuredOutput on a single array item", async () => { const runnable = new ChatAnthropicTools({ modelName: "claude-3-sonnet-20240229", maxRetries: 0, @@ -258,7 +258,7 @@ test("Test ChatAnthropic withStructuredOutput on a single array item", async () }); }); -test("Test ChatAnthropic withStructuredOutput on a single array item", async () => { +test.skip("Test ChatAnthropic withStructuredOutput on a single array item", async () => { const runnable = new ChatAnthropicTools({ modelName: "claude-3-sonnet-20240229", maxRetries: 0, @@ -305,7 +305,7 @@ 
test("Test ChatAnthropic withStructuredOutput on a single array item", async () }); }); -test("Test ChatAnthropicTools", async () => { +test.skip("Test ChatAnthropicTools", async () => { const chat = new ChatAnthropicTools({ modelName: "claude-3-sonnet-20240229", maxRetries: 0, diff --git a/libs/langchain-anthropic/src/output_parsers.ts b/libs/langchain-anthropic/src/output_parsers.ts index 928a128c184b..8e0e72818d6a 100644 --- a/libs/langchain-anthropic/src/output_parsers.ts +++ b/libs/langchain-anthropic/src/output_parsers.ts @@ -1,10 +1,15 @@ -import { BaseLLMOutputParser } from "@langchain/core/output_parsers"; +import { z } from "zod"; +import { + BaseLLMOutputParser, + OutputParserException, +} from "@langchain/core/output_parsers"; import { JsonOutputKeyToolsParserParams } from "@langchain/core/output_parsers/openai_tools"; import { ChatGeneration } from "@langchain/core/outputs"; import { AnthropicToolResponse } from "./types.js"; -interface AnthropicToolsOutputParserParams - extends JsonOutputKeyToolsParserParams {} +// eslint-disable-next-line @typescript-eslint/no-explicit-any +interface AnthropicToolsOutputParserParams<T extends Record<string, any>> + extends JsonOutputKeyToolsParserParams<T> {} export class AnthropicToolsOutputParser< // eslint-disable-next-line @typescript-eslint/no-explicit-any @@ -24,10 +29,32 @@ export class AnthropicToolsOutputParser< /** Whether to return only the first tool call. */ returnSingle = false; - constructor(params: AnthropicToolsOutputParserParams) { + zodSchema?: z.ZodType<T>; + + constructor(params: AnthropicToolsOutputParserParams<T>) { super(params); this.keyName = params.keyName; this.returnSingle = params.returnSingle ?? 
this.returnSingle; + this.zodSchema = params.zodSchema; + } + + protected async _validateResult(result: unknown): Promise<T> { + if (this.zodSchema === undefined) { + return result as T; + } + const zodParsedResult = await this.zodSchema.safeParseAsync(result); + if (zodParsedResult.success) { + return zodParsedResult.data; + } else { + throw new OutputParserException( + `Failed to parse. Text: "${JSON.stringify( + result, + null, + 2 + )}". Error: ${JSON.stringify(zodParsedResult.error.errors)}`, + JSON.stringify(result, null, 2) + ); + } } // eslint-disable-next-line @typescript-eslint/no-explicit-any @@ -45,10 +72,13 @@ export class AnthropicToolsOutputParser< | undefined; return tool; }); - if (tools.length === 0 || !tools[0]) { - throw new Error("No tools provided to AnthropicToolsOutputParser."); + if (tools[0] === undefined) { + throw new Error( + "No parseable tool calls provided to AnthropicToolsOutputParser." + ); } const [tool] = tools; - return tool.input as T; + const validatedResult = await this._validateResult(tool.input); + return validatedResult; } } diff --git a/libs/langchain-anthropic/src/tests/chat_models.test.ts b/libs/langchain-anthropic/src/tests/chat_models.test.ts new file mode 100644 index 000000000000..537c0c8e4fef --- /dev/null +++ b/libs/langchain-anthropic/src/tests/chat_models.test.ts @@ -0,0 +1,107 @@ +import { jest, test } from "@jest/globals"; +import { AIMessage } from "@langchain/core/messages"; +import { z } from "zod"; +import { OutputParserException } from "@langchain/core/output_parsers"; +import { ChatAnthropic } from "../chat_models.js"; + +test("withStructuredOutput with output validation", async () => { + const model = new ChatAnthropic({ + modelName: "claude-3-haiku-20240307", + temperature: 0, + anthropicApiKey: "testing", + }); + jest + // eslint-disable-next-line @typescript-eslint/no-explicit-any + .spyOn(model as any, "invoke") + .mockResolvedValue( + new AIMessage({ + content: [ + { + type: "tool_use", + id: 
"notreal", + name: "Extractor", + input: "Incorrect string tool call input", + }, + ], + }) + ); + const schema = z.object({ + alerts: z + .array( + z.object({ + description: z.string().describe("A description of the alert."), + severity: z + .enum(["HIGH", "MEDIUM", "LOW"]) + .describe("How severe the alert is."), + }) + ) + .describe( + "Important security or infrastructure alerts present in the given text." + ), + }); + + const modelWithStructuredOutput = model.withStructuredOutput(schema, { + name: "Extractor", + }); + + await expect(async () => { + await modelWithStructuredOutput.invoke(` + Enumeration of Kernel Modules via Proc + Prompt for Credentials with OSASCRIPT + User Login + Modification of Standard Authentication Module + Suspicious Automator Workflows Execution + `); + }).rejects.toThrowError(OutputParserException); +}); + +test("withStructuredOutput with proper output", async () => { + const model = new ChatAnthropic({ + modelName: "claude-3-haiku-20240307", + temperature: 0, + anthropicApiKey: "testing", + }); + jest + // eslint-disable-next-line @typescript-eslint/no-explicit-any + .spyOn(model as any, "invoke") + .mockResolvedValue( + new AIMessage({ + content: [ + { + type: "tool_use", + id: "notreal", + name: "Extractor", + input: { alerts: [{ description: "test", severity: "LOW" }] }, + }, + ], + }) + ); + const schema = z.object({ + alerts: z + .array( + z.object({ + description: z.string().describe("A description of the alert."), + severity: z + .enum(["HIGH", "MEDIUM", "LOW"]) + .describe("How severe the alert is."), + }) + ) + .describe( + "Important security or infrastructure alerts present in the given text." 
+ ), + }); + + const modelWithStructuredOutput = model.withStructuredOutput(schema, { + name: "Extractor", + }); + + const result = await modelWithStructuredOutput.invoke(` + Enumeration of Kernel Modules via Proc + Prompt for Credentials with OSASCRIPT + User Login + Modification of Standard Authentication Module + Suspicious Automator Workflows Execution + `); + + console.log(result); +}); diff --git a/libs/langchain-community/.gitignore b/libs/langchain-community/.gitignore index 0b2ed0633985..ea93ab28de1c 100644 --- a/libs/langchain-community/.gitignore +++ b/libs/langchain-community/.gitignore @@ -118,6 +118,10 @@ embeddings/alibaba_tongyi.cjs embeddings/alibaba_tongyi.js embeddings/alibaba_tongyi.d.ts embeddings/alibaba_tongyi.d.cts +embeddings/baidu_qianfan.cjs +embeddings/baidu_qianfan.js +embeddings/baidu_qianfan.d.ts +embeddings/baidu_qianfan.d.cts embeddings/bedrock.cjs embeddings/bedrock.js embeddings/bedrock.d.ts diff --git a/libs/langchain-community/langchain.config.js b/libs/langchain-community/langchain.config.js index 65b01ff2385d..afe5975fd5d1 100644 --- a/libs/langchain-community/langchain.config.js +++ b/libs/langchain-community/langchain.config.js @@ -59,6 +59,7 @@ export const config = { "agents/toolkits/connery": "agents/toolkits/connery/index", // embeddings "embeddings/alibaba_tongyi": "embeddings/alibaba_tongyi", + "embeddings/baidu_qianfan": "embeddings/baidu_qianfan", "embeddings/bedrock": "embeddings/bedrock", "embeddings/cloudflare_workersai": "embeddings/cloudflare_workersai", "embeddings/cohere": "embeddings/cohere", diff --git a/libs/langchain-community/package.json b/libs/langchain-community/package.json index 535a27fba108..63ddd649ca06 100644 --- a/libs/langchain-community/package.json +++ b/libs/langchain-community/package.json @@ -1,6 +1,6 @@ { "name": "@langchain/community", - "version": "0.0.44", + "version": "0.0.45", "description": "Third-party integrations for LangChain.js", "type": "module", "engines": { @@ -822,6 +822,15 
@@ "import": "./embeddings/alibaba_tongyi.js", "require": "./embeddings/alibaba_tongyi.cjs" }, + "./embeddings/baidu_qianfan": { + "types": { + "import": "./embeddings/baidu_qianfan.d.ts", + "require": "./embeddings/baidu_qianfan.d.cts", + "default": "./embeddings/baidu_qianfan.d.ts" + }, + "import": "./embeddings/baidu_qianfan.js", + "require": "./embeddings/baidu_qianfan.cjs" + }, "./embeddings/bedrock": { "types": { "import": "./embeddings/bedrock.d.ts", @@ -2359,6 +2368,10 @@ "embeddings/alibaba_tongyi.js", "embeddings/alibaba_tongyi.d.ts", "embeddings/alibaba_tongyi.d.cts", + "embeddings/baidu_qianfan.cjs", + "embeddings/baidu_qianfan.js", + "embeddings/baidu_qianfan.d.ts", + "embeddings/baidu_qianfan.d.cts", "embeddings/bedrock.cjs", "embeddings/bedrock.js", "embeddings/bedrock.d.ts", diff --git a/libs/langchain-community/src/embeddings/baidu_qianfan.ts b/libs/langchain-community/src/embeddings/baidu_qianfan.ts new file mode 100644 index 000000000000..1c5d74bd83a1 --- /dev/null +++ b/libs/langchain-community/src/embeddings/baidu_qianfan.ts @@ -0,0 +1,238 @@ +import { Embeddings, type EmbeddingsParams } from "@langchain/core/embeddings"; +import { chunkArray } from "@langchain/core/utils/chunk_array"; +import { getEnvironmentVariable } from "@langchain/core/utils/env"; + +export interface BaiduQianfanEmbeddingsParams extends EmbeddingsParams { + /** Model name to use */ + modelName: "embedding-v1" | "bge_large_zh" | "bge-large-en" | "tao-8k"; + + /** + * Timeout to use when making requests to BaiduQianfan. + */ + timeout?: number; + + /** + * The maximum number of characters allowed for embedding in a single request varies by model: + * - Embedding-V1 model: up to 1000 characters + * - bge-large-zh model: up to 2000 characters + * - bge-large-en model: up to 2000 characters + * - tao-8k model: up to 28000 characters + * + * Note: These limits are model-specific and should be adhered to for optimal performance. 
+ */ + batchSize?: number; + + /** + * Whether to strip new lines from the input text. + */ + stripNewLines?: boolean; +} + +interface EmbeddingCreateParams { + input: string[]; +} + +interface EmbeddingResponse { + data: { object: "embedding"; index: number; embedding: number[] }[]; + + usage: { + prompt_tokens: number; + total_tokens: number; + }; + + id: string; +} + +interface EmbeddingErrorResponse { + error_code: number | string; + error_msg: string; +} + +export class BaiduQianfanEmbeddings + extends Embeddings + implements BaiduQianfanEmbeddingsParams +{ + modelName: BaiduQianfanEmbeddingsParams["modelName"] = "embedding-v1"; + + batchSize = 16; + + stripNewLines = true; + + baiduApiKey: string; + + baiduSecretKey: string; + + accessToken: string; + + constructor( + fields?: Partial<BaiduQianfanEmbeddingsParams> & { + verbose?: boolean; + baiduApiKey?: string; + baiduSecretKey?: string; + } + ) { + const fieldsWithDefaults = { maxConcurrency: 2, ...fields }; + super(fieldsWithDefaults); + + const baiduApiKey = + fieldsWithDefaults?.baiduApiKey ?? + getEnvironmentVariable("BAIDU_API_KEY"); + + const baiduSecretKey = + fieldsWithDefaults?.baiduSecretKey ?? + getEnvironmentVariable("BAIDU_SECRET_KEY"); + + if (!baiduApiKey) { + throw new Error("Baidu API key not found"); + } + + if (!baiduSecretKey) { + throw new Error("Baidu Secret key not found"); + } + + this.baiduApiKey = baiduApiKey; + this.baiduSecretKey = baiduSecretKey; + + this.modelName = fieldsWithDefaults?.modelName ?? this.modelName; + + if (this.modelName === "tao-8k") { + if (fieldsWithDefaults?.batchSize && fieldsWithDefaults.batchSize !== 1) { + throw new Error( + "tao-8k model supports only a batchSize of 1. Please adjust your batchSize accordingly" + ); + } + this.batchSize = 1; + } else { + this.batchSize = fieldsWithDefaults?.batchSize ?? this.batchSize; + } + + this.stripNewLines = + fieldsWithDefaults?.stripNewLines ?? this.stripNewLines; + } + + /** + * Method to generate embeddings for an array of documents.
Splits the + * documents into batches and makes requests to the BaiduQianFan API to generate + * embeddings. + * @param texts Array of documents to generate embeddings for. + * @returns Promise that resolves to a 2D array of embeddings for each document. + */ + async embedDocuments(texts: string[]): Promise<number[][]> { + const batches = chunkArray( + this.stripNewLines ? texts.map((t) => t.replace(/\n/g, " ")) : texts, + this.batchSize + ); + + const batchRequests = batches.map((batch) => { + const params = this.getParams(batch); + + return this.embeddingWithRetry(params); + }); + + const batchResponses = await Promise.all(batchRequests); + + const embeddings: number[][] = []; + + for (let i = 0; i < batchResponses.length; i += 1) { + const batch = batches[i]; + const batchResponse = batchResponses[i] || []; + for (let j = 0; j < batch.length; j += 1) { + embeddings.push(batchResponse[j]); + } + } + + return embeddings; + } + + /** + * Method to generate an embedding for a single document. Calls the + * embeddingWithRetry method with the document as the input. + * @param text Document to generate an embedding for. + * @returns Promise that resolves to an embedding for the document. + */ + async embedQuery(text: string): Promise<number[]> { + const params = this.getParams([ + this.stripNewLines ? text.replace(/\n/g, " ") : text, + ]); + + const embeddings = (await this.embeddingWithRetry(params)) || [[]]; + return embeddings[0]; + } + + /** + * Method to generate the embedding request params. + * @param texts Array of documents to generate embeddings for. + * @returns The embedding request params. + */ + private getParams( + texts: EmbeddingCreateParams["input"] + ): EmbeddingCreateParams { + return { + input: texts, + }; + } + + /** + * Private method to make a request to the BaiduAI API to generate + * embeddings. Handles the retry logic and returns the response from the + * API. + * @param body Request body to send to the BaiduAI API. + * @returns Promise that resolves to the response from the API.
+ */ + private async embeddingWithRetry(body: EmbeddingCreateParams) { + if (!this.accessToken) { + this.accessToken = await this.getAccessToken(); + } + + return fetch( + `https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/embeddings/${this.modelName}?access_token=${this.accessToken}`, + { + method: "POST", + headers: { + "Content-Type": "application/json", + }, + body: JSON.stringify(body), + } + ).then(async (response) => { + const embeddingData: EmbeddingResponse | EmbeddingErrorResponse = + await response.json(); + + if ("error_code" in embeddingData && embeddingData.error_code) { + throw new Error( + `${embeddingData.error_code}: ${embeddingData.error_msg}` + ); + } + + return (embeddingData as EmbeddingResponse).data.map( + ({ embedding }) => embedding + ); + }); + } + + /** + * Method that retrieves the access token for making requests to the Baidu + * API. + * @returns The access token for making requests to the Baidu API. + */ + private async getAccessToken() { + const url = `https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=${this.baiduApiKey}&client_secret=${this.baiduSecretKey}`; + const response = await fetch(url, { + method: "POST", + headers: { + "Content-Type": "application/json", + Accept: "application/json", + }, + }); + if (!response.ok) { + const text = await response.text(); + const error = new Error( + `Baidu get access token failed with status code ${response.status}, response: ${text}` + ); + // eslint-disable-next-line @typescript-eslint/no-explicit-any + (error as any).response = response; + throw error; + } + const json = await response.json(); + return json.access_token; + } +} diff --git a/libs/langchain-community/src/embeddings/tests/baidu_qianfan.int.test.ts b/libs/langchain-community/src/embeddings/tests/baidu_qianfan.int.test.ts new file mode 100644 index 000000000000..fd4db0252e0e --- /dev/null +++ b/libs/langchain-community/src/embeddings/tests/baidu_qianfan.int.test.ts @@ -0,0 +1,34 @@ 
+import { test, expect } from "@jest/globals"; +import { BaiduQianfanEmbeddings } from "../baidu_qianfan.js"; + +test.skip("Test BaiduQianfanEmbeddings.embedQuery", async () => { + const embeddings = new BaiduQianfanEmbeddings(); + const res = await embeddings.embedQuery("Hello world"); + expect(typeof res[0]).toBe("number"); +}); + +test.skip("Test BaiduQianfanEmbeddings.embedDocuments", async () => { + const embeddings = new BaiduQianfanEmbeddings(); + const res = await embeddings.embedDocuments(["Hello world", "Bye bye"]); + expect(res).toHaveLength(2); + expect(typeof res[0][0]).toBe("number"); + expect(typeof res[1][0]).toBe("number"); +}); + +test.skip("Test BaiduQianfanEmbeddings concurrency", async () => { + const embeddings = new BaiduQianfanEmbeddings({ + batchSize: 1, + }); + const res = await embeddings.embedDocuments([ + "Hello world", + "Bye bye", + "Hello world", + "Bye bye", + "Hello world", + "Bye bye", + ]); + expect(res).toHaveLength(6); + expect(res.find((embedding) => typeof embedding[0] !== "number")).toBe( + undefined + ); +}); diff --git a/libs/langchain-community/src/experimental/graph_transformers/llm.int.test.ts b/libs/langchain-community/src/experimental/graph_transformers/llm.int.test.ts index 39948f7fef14..8702c1cff782 100644 --- a/libs/langchain-community/src/experimental/graph_transformers/llm.int.test.ts +++ b/libs/langchain-community/src/experimental/graph_transformers/llm.int.test.ts @@ -45,13 +45,52 @@ test("convertToGraphDocuments with allowed", async () => { expect(result).toEqual([ new GraphDocument({ nodes: [ - new Node({ id: "Elon Musk", type: "PERSON" }), - new Node({ id: "OpenAI", type: "ORGANIZATION" }), + new Node({ id: "Elon Musk", type: "Person" }), + new Node({ id: "OpenAI", type: "Organization" }), ], relationships: [ new Relationship({ - source: new Node({ id: "Elon Musk", type: "PERSON" }), - target: new Node({ id: "OpenAI", type: "ORGANIZATION" }), + source: new Node({ id: "Elon Musk", type: "Person" }), + target: 
new Node({ id: "OpenAI", type: "Organization" }), + type: "SUES", + }), + ], + source: new Document({ + pageContent: "Elon Musk is suing OpenAI", + metadata: {}, + }), + }), + ]); +}); + +test("convertToGraphDocuments with allowed lowercased", async () => { + const model = new ChatOpenAI({ + temperature: 0, + modelName: "gpt-4-turbo-preview", + }); + + const llmGraphTransformer = new LLMGraphTransformer({ + llm: model, + allowedNodes: ["Person", "Organization"], + allowedRelationships: ["SUES"], + }); + + const result = await llmGraphTransformer.convertToGraphDocuments([ + new Document({ pageContent: "Elon Musk is suing OpenAI" }), + ]); + + console.log(JSON.stringify(result)); + + expect(result).toEqual([ + new GraphDocument({ + nodes: [ + new Node({ id: "Elon Musk", type: "Person" }), + new Node({ id: "OpenAI", type: "Organization" }), + ], + relationships: [ + new Relationship({ + source: new Node({ id: "Elon Musk", type: "Person" }), + target: new Node({ id: "OpenAI", type: "Organization" }), type: "SUES", }), ], diff --git a/libs/langchain-community/src/experimental/graph_transformers/llm.ts b/libs/langchain-community/src/experimental/graph_transformers/llm.ts index b808e93abd6a..858e2b88b8f9 100644 --- a/libs/langchain-community/src/experimental/graph_transformers/llm.ts +++ b/libs/langchain-community/src/experimental/graph_transformers/llm.ts @@ -47,6 +47,13 @@ interface OptionalEnumFieldProps { fieldKwargs?: object; } +function toTitleCase(str: string): string { + return str + .split(" ") + .map((w) => w[0].toUpperCase() + w.substring(1).toLowerCase()) + .join(""); +} + function createOptionalEnumType({ enumValues = undefined, description = "", @@ -122,7 +129,7 @@ function createSchema(allowedNodes: string[], allowedRelationships: string[]) { function mapToBaseNode(node: any): Node { return new Node({ id: node.id, - type: node.type.replace(" ", "_").toUpperCase(), + type: toTitleCase(node.type), }); } @@ -131,11 +138,11 @@ function 
mapToBaseRelationship(relationship: any): Relationship { return new Relationship({ source: new Node({ id: relationship.sourceNodeId, - type: relationship.sourceNodeType.replace(" ", "_").toUpperCase(), + type: toTitleCase(relationship.sourceNodeType), }), target: new Node({ id: relationship.targetNodeId, - type: relationship.targetNodeType.replace(" ", "_").toUpperCase(), + type: toTitleCase(relationship.targetNodeType), }), type: relationship.relationshipType.replace(" ", "_").toUpperCase(), }); @@ -208,16 +215,29 @@ export class LLMGraphTransformer { (this.allowedNodes.length > 0 || this.allowedRelationships.length > 0) ) { if (this.allowedNodes.length > 0) { - nodes = nodes.filter((node) => this.allowedNodes.includes(node.type)); + const allowedNodesLowerCase = this.allowedNodes.map((node) => + node.toLowerCase() + ); + + // For nodes, compare lowercased types + nodes = nodes.filter((node) => + allowedNodesLowerCase.includes(node.type.toLowerCase()) + ); + + // For relationships, compare lowercased types for both source and target nodes relationships = relationships.filter( (rel) => - this.allowedNodes.includes(rel.source.type) && - this.allowedNodes.includes(rel.target.type) + allowedNodesLowerCase.includes(rel.source.type.toLowerCase()) && + allowedNodesLowerCase.includes(rel.target.type.toLowerCase()) ); } + if (this.allowedRelationships.length > 0) { + // For relationships, compare lowercased types relationships = relationships.filter((rel) => - this.allowedRelationships.includes(rel.type) + this.allowedRelationships + .map((rel) => rel.toLowerCase()) + .includes(rel.type.toLowerCase()) ); } } diff --git a/libs/langchain-community/src/graphs/graph_document.ts b/libs/langchain-community/src/graphs/graph_document.ts index 939817a5623d..36a17c053e4d 100644 --- a/libs/langchain-community/src/graphs/graph_document.ts +++ b/libs/langchain-community/src/graphs/graph_document.ts @@ -60,14 +60,14 @@ export class Relationship extends Serializable { } } -export class 
GraphDocument extends Document { +export class GraphDocument extends Serializable { nodes: Node[]; relationships: Relationship[]; source: Document; - lc_namespace = ["langchain", "graph", "document_node"]; + lc_namespace = ["langchain", "graph", "graph_document"]; constructor({ nodes, @@ -78,7 +78,11 @@ export class GraphDocument extends Document { relationships: Relationship[]; source: Document; }) { - super(source); + super({ + nodes, + relationships, + source, + }); this.nodes = nodes; this.relationships = relationships; this.source = source; diff --git a/libs/langchain-community/src/graphs/neo4j_graph.ts b/libs/langchain-community/src/graphs/neo4j_graph.ts index d51876616a10..ab62d4d91c3f 100644 --- a/libs/langchain-community/src/graphs/neo4j_graph.ts +++ b/libs/langchain-community/src/graphs/neo4j_graph.ts @@ -43,7 +43,7 @@ export const BASE_ENTITY_LABEL = "__Entity__"; const INCLUDE_DOCS_QUERY = ` MERGE (d:Document {id:$document.metadata.id}) - SET d.text = $document.page_content + SET d.text = $document.pageContent SET d += $document.metadata WITH d `; diff --git a/libs/langchain-google-common/package.json b/libs/langchain-google-common/package.json index 1645e38f6f76..185d732f0487 100644 --- a/libs/langchain-google-common/package.json +++ b/libs/langchain-google-common/package.json @@ -1,6 +1,6 @@ { "name": "@langchain/google-common", - "version": "0.0.3", + "version": "0.0.4", "description": "Core types and classes for Google services.", "type": "module", "engines": { diff --git a/libs/langchain-google-common/src/utils/gemini.ts b/libs/langchain-google-common/src/utils/gemini.ts index 8882af6166b1..b79474ab3904 100644 --- a/libs/langchain-google-common/src/utils/gemini.ts +++ b/libs/langchain-google-common/src/utils/gemini.ts @@ -501,7 +501,21 @@ export function responseToChatGenerations( response: GoogleLLMResponse ): ChatGeneration[] { const parts = responseToParts(response); - const ret = parts.map((part) => partToChatGeneration(part)); + let ret = 
parts.map((part) => partToChatGeneration(part)); + if (ret.every((item) => typeof item.message.content === "string")) { + const combinedContent = ret.map((item) => item.message.content).join(""); + const combinedText = ret.map((item) => item.text).join(""); + ret = [ + new ChatGenerationChunk({ + message: new AIMessageChunk({ + content: combinedContent, + additional_kwargs: ret[ret.length - 1].message.additional_kwargs, + }), + text: combinedText, + generationInfo: ret[ret.length - 1].generationInfo, + }), + ]; + } return ret; } diff --git a/libs/langchain-google-gauth/package.json b/libs/langchain-google-gauth/package.json index 8d0dc322b864..2b45ded0c703 100644 --- a/libs/langchain-google-gauth/package.json +++ b/libs/langchain-google-gauth/package.json @@ -1,6 +1,6 @@ { "name": "@langchain/google-gauth", - "version": "0.0.2", + "version": "0.0.3", "description": "Google auth based authentication support for Google services", "type": "module", "engines": { @@ -40,7 +40,7 @@ "license": "MIT", "dependencies": { "@langchain/core": "~0.1.1", - "@langchain/google-common": "~0.0.3", + "@langchain/google-common": "~0.0.4", "google-auth-library": "^8.9.0" }, "devDependencies": { diff --git a/libs/langchain-google-vertexai-web/package.json b/libs/langchain-google-vertexai-web/package.json index 91f711743850..7ff67031130f 100644 --- a/libs/langchain-google-vertexai-web/package.json +++ b/libs/langchain-google-vertexai-web/package.json @@ -1,6 +1,6 @@ { "name": "@langchain/google-vertexai-web", - "version": "0.0.2", + "version": "0.0.3", "description": "LangChain.js support for Google Vertex AI Web", "type": "module", "engines": { @@ -40,7 +40,7 @@ "license": "MIT", "dependencies": { "@langchain/core": "~0.1.1", - "@langchain/google-webauth": "~0.0.2" + "@langchain/google-webauth": "~0.0.3" }, "devDependencies": { "@jest/globals": "^29.5.0", diff --git a/libs/langchain-google-vertexai/package.json b/libs/langchain-google-vertexai/package.json index 
f8e354892dcf..944b63bd47fc 100644 --- a/libs/langchain-google-vertexai/package.json +++ b/libs/langchain-google-vertexai/package.json @@ -1,6 +1,6 @@ { "name": "@langchain/google-vertexai", - "version": "0.0.2", + "version": "0.0.3", "description": "LangChain.js support for Google Vertex AI", "type": "module", "engines": { @@ -40,7 +40,7 @@ "license": "MIT", "dependencies": { "@langchain/core": "~0.1.1", - "@langchain/google-gauth": "~0.0.2" + "@langchain/google-gauth": "~0.0.3" }, "devDependencies": { "@jest/globals": "^29.5.0", diff --git a/libs/langchain-google-webauth/package.json b/libs/langchain-google-webauth/package.json index 6878f8ff1574..c8935ca92d11 100644 --- a/libs/langchain-google-webauth/package.json +++ b/libs/langchain-google-webauth/package.json @@ -1,6 +1,6 @@ { "name": "@langchain/google-webauth", - "version": "0.0.2", + "version": "0.0.3", "description": "Web-based authentication support for Google services", "type": "module", "engines": { @@ -40,7 +40,7 @@ "license": "MIT", "dependencies": { "@langchain/core": "~0.1.1", - "@langchain/google-common": "~0.0.3", + "@langchain/google-common": "~0.0.4", "web-auth-library": "^1.0.3" }, "devDependencies": { diff --git a/yarn.lock b/yarn.lock index ea43b4d0ded5..deb9ca59faef 100644 --- a/yarn.lock +++ b/yarn.lock @@ -211,20 +211,19 @@ __metadata: languageName: node linkType: hard -"@anthropic-ai/sdk@npm:^0.17.2": - version: 0.17.2 - resolution: "@anthropic-ai/sdk@npm:0.17.2" +"@anthropic-ai/sdk@npm:^0.20.1": + version: 0.20.1 + resolution: "@anthropic-ai/sdk@npm:0.20.1" dependencies: "@types/node": ^18.11.18 "@types/node-fetch": ^2.6.4 abort-controller: ^3.0.0 agentkeepalive: ^4.2.1 - digest-fetch: ^1.3.0 form-data-encoder: 1.7.2 formdata-node: ^4.3.2 node-fetch: ^2.6.7 web-streams-polyfill: ^3.2.1 - checksum: 33480adddbbf905aadcb310bdbca2baa9e3ca25ae554c94c52b189bd8e8afe89577775dbcf902ab6fa252d3b3c6e4fe5e4973a7caaa2d3e5a813d3727cf98499 + checksum: 
a880088ffeb993ea835f3ec250d53bf6ba23e97c3dfc54c915843aa8cb4778849fb7b85de0a359155c36595a5a5cc1db64139d407d2e36a2423284ebfe763cce languageName: node linkType: hard @@ -8810,7 +8809,7 @@ __metadata: version: 0.0.0-use.local resolution: "@langchain/anthropic@workspace:libs/langchain-anthropic" dependencies: - "@anthropic-ai/sdk": ^0.17.2 + "@anthropic-ai/sdk": ^0.20.1 "@jest/globals": ^29.5.0 "@langchain/community": "workspace:^" "@langchain/core": ~0.1.54 @@ -9425,7 +9424,7 @@ __metadata: languageName: unknown linkType: soft -"@langchain/google-common@workspace:*, @langchain/google-common@workspace:libs/langchain-google-common, @langchain/google-common@~0.0.3": +"@langchain/google-common@workspace:*, @langchain/google-common@workspace:libs/langchain-google-common, @langchain/google-common@~0.0.4": version: 0.0.0-use.local resolution: "@langchain/google-common@workspace:libs/langchain-google-common" dependencies: @@ -9457,13 +9456,13 @@ __metadata: languageName: unknown linkType: soft -"@langchain/google-gauth@workspace:libs/langchain-google-gauth, @langchain/google-gauth@~0.0.2": +"@langchain/google-gauth@workspace:libs/langchain-google-gauth, @langchain/google-gauth@~0.0.3": version: 0.0.0-use.local resolution: "@langchain/google-gauth@workspace:libs/langchain-google-gauth" dependencies: "@jest/globals": ^29.5.0 "@langchain/core": ~0.1.1 - "@langchain/google-common": ~0.0.3 + "@langchain/google-common": ~0.0.4 "@langchain/scripts": ~0.0 "@swc/core": ^1.3.90 "@swc/jest": ^0.2.29 @@ -9527,7 +9526,7 @@ __metadata: dependencies: "@jest/globals": ^29.5.0 "@langchain/core": ~0.1.1 - "@langchain/google-webauth": ~0.0.2 + "@langchain/google-webauth": ~0.0.3 "@langchain/scripts": ~0.0 "@swc/core": ^1.3.90 "@swc/jest": ^0.2.29 @@ -9558,7 +9557,7 @@ __metadata: dependencies: "@jest/globals": ^29.5.0 "@langchain/core": ~0.1.1 - "@langchain/google-gauth": ~0.0.2 + "@langchain/google-gauth": ~0.0.3 "@langchain/scripts": ~0.0 "@swc/core": ^1.3.90 "@swc/jest": ^0.2.29 @@ -9583,13 
+9582,13 @@ __metadata: languageName: unknown linkType: soft -"@langchain/google-webauth@workspace:libs/langchain-google-webauth, @langchain/google-webauth@~0.0.2": +"@langchain/google-webauth@workspace:libs/langchain-google-webauth, @langchain/google-webauth@~0.0.3": version: 0.0.0-use.local resolution: "@langchain/google-webauth@workspace:libs/langchain-google-webauth" dependencies: "@jest/globals": ^29.5.0 "@langchain/core": ~0.1.1 - "@langchain/google-common": ~0.0.3 + "@langchain/google-common": ~0.0.4 "@langchain/scripts": ~0.0 "@swc/core": ^1.3.90 "@swc/jest": ^0.2.29
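
The `embedDocuments` method added in `baidu_qianfan.ts` above strips newlines, chunks the inputs into batches of `batchSize`, embeds each batch, and flattens the responses back into one vector per input document. A minimal standalone sketch of that batching strategy (not the library's API; `fakeEmbedBatch` is a hypothetical stand-in for the real HTTP call):

```typescript
// Chunk an array into consecutive slices of at most `size` elements,
// mirroring @langchain/core's chunkArray helper.
function chunkArray<T>(arr: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < arr.length; i += size) {
    chunks.push(arr.slice(i, i + size));
  }
  return chunks;
}

// Hypothetical stand-in for the real embeddings request:
// returns one fake vector per input string.
async function fakeEmbedBatch(batch: string[]): Promise<number[][]> {
  return batch.map((text) => [text.length, 0]);
}

async function embedDocuments(
  texts: string[],
  batchSize = 16
): Promise<number[][]> {
  // Newlines are replaced with spaces before embedding (stripNewLines).
  const cleaned = texts.map((t) => t.replace(/\n/g, " "));
  const batches = chunkArray(cleaned, batchSize);
  // Batches are requested concurrently; Promise.all preserves their order,
  // so flattening yields one embedding per input, in input order.
  const responses = await Promise.all(batches.map(fakeEmbedBatch));
  return responses.flat();
}

embedDocuments(["Hello world", "Bye\nbye"], 1).then((vectors) => {
  console.log(vectors.length); // one embedding per input document
});
```

Note the `tao-8k` special case in the constructor: that model accepts only one input per request, which is why the patch forces `batchSize = 1` there — with this batching scheme, that simply produces one request per document.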