From 223cdbe7d05179006422ecc869c71688c2edaa0b Mon Sep 17 00:00:00 2001 From: Jacob Lee Date: Tue, 16 Jul 2024 15:22:02 -0700 Subject: [PATCH] Adds tool artifacts guide (#6094) --- docs/core_docs/docs/how_to/custom_tools.ipynb | 4 + docs/core_docs/docs/how_to/index.mdx | 2 +- .../docs/how_to/tool_artifacts.ipynb | 351 ++++++++++++++++++ docs/core_docs/src/theme/ChatModelTabs.js | 2 +- 4 files changed, 357 insertions(+), 2 deletions(-) create mode 100644 docs/core_docs/docs/how_to/tool_artifacts.ipynb diff --git a/docs/core_docs/docs/how_to/custom_tools.ipynb b/docs/core_docs/docs/how_to/custom_tools.ipynb index 1c02c153ffc0..450e9663961c 100644 --- a/docs/core_docs/docs/how_to/custom_tools.ipynb +++ b/docs/core_docs/docs/how_to/custom_tools.ipynb @@ -42,9 +42,11 @@ "source": [ "## `tool` function\n", "\n", + "```{=mdx}\n", ":::caution Compatibility\n", "Only available in `@langchain/core` version 0.2.7 and above.\n", ":::\n", + "```\n", "\n", "\n", "The [`tool`](https://api.js.langchain.com/classes/langchain_core_tools.tool.html) wrapper function is a convenience method for turning a JavaScript function into a tool. It requires the function itself along with some additional arguments that define your tool. The most important are:\n", @@ -191,9 +193,11 @@ "\n", "The Tool and `ToolMessage` interfaces make it possible to distinguish between the parts of the tool output meant for the model (`ToolMessage.content`) and those parts which are meant for use outside the model (`ToolMessage.artifact`).\n", "\n", + "```{=mdx}\n", ":::caution Compatibility\n", "This functionality was added in `@langchain/core>=0.2.16`. Please make sure your package is up to date.\n", ":::\n", + "```\n", "\n", "If you want your tool to distinguish between message content and other artifacts, we need to do three things:\n", "\n", diff --git a/docs/core_docs/docs/how_to/index.mdx b/docs/core_docs/docs/how_to/index.mdx index 29c97268e721..d5686da7387f 100644 --- a/docs/core_docs/docs/how_to/index.mdx +++ b/docs/core_docs/docs/how_to/index.mdx @@ -162,7 +162,7 @@ LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to p - [How to: use built-in tools and built-in toolkits](/docs/how_to/tools_builtin) - [How to: use a chat model to call tools](/docs/how_to/tool_calling/) - [How to: add ad-hoc tool calling capability to LLMs and Chat Models](/docs/how_to/tools_prompting) -- [How to: return extra artifacts from a custom tool](/docs/how_to/custom_tools/#returning-artifacts-of-tool-execution) +- [How to: return extra artifacts from a custom tool](/docs/how_to/tool_artifacts) ### Agents diff --git a/docs/core_docs/docs/how_to/tool_artifacts.ipynb b/docs/core_docs/docs/how_to/tool_artifacts.ipynb new file mode 100644 index 000000000000..f95f757d992f --- /dev/null +++ b/docs/core_docs/docs/how_to/tool_artifacts.ipynb @@ -0,0 +1,351 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "503e36ae-ca62-4f8a-880c-4fe78ff5df93", + "metadata": {}, + "source": [ + "# How to return extra artifacts from a tool\n", + "\n", + "```{=mdx}\n", + ":::info Prerequisites\n", + "This guide assumes familiarity with the following concepts:\n", + "\n", + "- [Tools](/docs/concepts/#tools)\n", + "- [Tool calling](/docs/concepts/#tool-calling)\n", + "\n", + ":::\n", + "```\n", + "\n", + "Tools are utilities that can be called by a model, and whose outputs are designed to be fed back to a model. 
Sometimes, however, there are artifacts of a tool's execution that we want to make accessible to downstream components in our chain or agent, but that we don't want to expose to the model itself.\n",
+    "\n",
+    "For example, if a tool returns something like a custom object or an image, we may want to pass some metadata about this output to the model without passing the actual output itself. At the same time, we may want to be able to access the full output elsewhere, for example in downstream tools.\n",
+    "\n",
+    "The Tool and [ToolMessage](https://api.js.langchain.com/classes/langchain_core_messages_tool.ToolMessage.html) interfaces make it possible to distinguish between the parts of the tool output meant for the model (`ToolMessage.content`) and those parts which are meant for use outside the model (`ToolMessage.artifact`).\n",
+    "\n",
+    "```{=mdx}\n",
+    ":::caution Compatibility\n",
+    "\n",
+    "This functionality requires `@langchain/core>=0.2.16`. Please see this [guide on upgrading](/docs/how_to/installation/#installing-integration-packages) if you need to update your version.\n",
+    "\n",
+    ":::\n",
+    "```\n",
+    "\n",
+    "## Defining the tool\n",
+    "\n",
+    "If we want our tool to distinguish between message content and other artifacts, we need to specify `responseFormat: \"content_and_artifact\"` when defining our tool and make sure that we return a tuple of [`content`, `artifact`]:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "b9eb179d-1f41-4748-9866-b3d3e8c73cd0",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import { z } from \"zod\";\n",
+    "import { tool } from \"@langchain/core/tools\";\n",
+    "\n",
+    "const randomIntToolSchema = z.object({\n",
+    "  min: z.number(),\n",
+    "  max: z.number(),\n",
+    "  size: z.number(),\n",
+    "});\n",
+    "\n",
+    "const generateRandomInts = tool(async ({ min, max, size }) => {\n",
+    "  const array: number[] = [];\n",
+    "  for (let i = 0; i < size; i++) {\n",
+    "    array.push(Math.floor(Math.random() * (max - min + 1)) + min);\n",
+    "  }\n",
+    "  return [\n",
+    "    `Successfully generated array of ${size} random ints in [${min}, ${max}].`,\n",
+    "    array,\n",
+    "  ];\n",
+    "}, {\n",
+    "  name: \"generateRandomInts\",\n",
+    "  description: \"Generate size random ints in the range [min, max].\",\n",
+    "  schema: randomIntToolSchema,\n",
+    "  responseFormat: \"content_and_artifact\",\n",
+    "});"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "0ab05d25-af4a-4e5a-afe2-f090416d7ee7",
+   "metadata": {},
+   "source": [
+    "## Invoking the tool with ToolCall\n",
+    "\n",
+    "If we directly invoke our tool with just the tool arguments, we'll only get back the content part of the tool's output:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "id": "5e7d5e77-3102-4a59-8ade-e4e699dd1817",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Successfully generated array of 10 random ints in [0, 9].\n"
+     ]
+    }
+   ],
+   "source": [
+    "await generateRandomInts.invoke({min: 0, max: 9, size: 10});"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "30db7228-f04c-489e-afda-9a572eaa90a1",
+   "metadata": {},
+   "source": [
+    "In order to get back both the content and the artifact, we need to invoke our tool with a `ToolCall` (which is just an object with `\"name\"`, `\"args\"`, `\"id\"` and `\"type\"` keys), since it contains the additional info needed to generate a ToolMessage, such as the tool call ID:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "id": "da1d939d-a900-4b01-92aa-d19011a6b034",
+   
"metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "ToolMessage {\n", + " lc_serializable: true,\n", + " lc_kwargs: {\n", + " content: 'Successfully generated array of 10 random ints in [0, 9].',\n", + " artifact: [\n", + " 0, 6, 5, 5, 7,\n", + " 0, 6, 3, 7, 5\n", + " ],\n", + " tool_call_id: '123',\n", + " name: 'generateRandomInts',\n", + " additional_kwargs: {},\n", + " response_metadata: {}\n", + " },\n", + " lc_namespace: [ 'langchain_core', 'messages' ],\n", + " content: 'Successfully generated array of 10 random ints in [0, 9].',\n", + " name: 'generateRandomInts',\n", + " additional_kwargs: {},\n", + " response_metadata: {},\n", + " id: undefined,\n", + " tool_call_id: '123',\n", + " artifact: [\n", + " 0, 6, 5, 5, 7,\n", + " 0, 6, 3, 7, 5\n", + " ]\n", + "}\n" + ] + } + ], + "source": [ + "await generateRandomInts.invoke(\n", + " {\n", + " name: \"generate_random_ints\",\n", + " args: {min: 0, max: 9, size: 10},\n", + " id: \"123\", // Required\n", + " type: \"tool_call\", // Required\n", + " }\n", + ");" + ] + }, + { + "cell_type": "markdown", + "id": "a3cfc03d-020b-42c7-b0f8-c824af19e45e", + "metadata": {}, + "source": [ + "## Using with a model\n", + "\n", + "With a [tool-calling model](/docs/how_to/tool_calling/), we can easily use a model to call our Tool and generate ToolMessages:\n", + "\n", + "```{=mdx}\n", + "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", + "\n", + "\n", + "```" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "8a67424b-d19c-43df-ac7b-690bca42146c", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[\n", + " {\n", + " name: 'generateRandomInts',\n", + " args: { min: 1, max: 24, size: 6 },\n", + " id: 'toolu_019ygj3YuoU6qFzR66juXALp',\n", + " type: 'tool_call'\n", + " }\n", + "]\n" + ] + } + ], + "source": [ + "const llmWithTools = llm.bindTools([generateRandomInts])\n", + "\n", + "const aiMessage = await llmWithTools.invoke(\"generate 6 positive ints less than 25\")\n", + "aiMessage.tool_calls" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "00c4e906-3ca8-41e8-a0be-65cb0db7d574", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "ToolMessage {\n", + " lc_serializable: true,\n", + " lc_kwargs: {\n", + " content: 'Successfully generated array of 6 random ints in [1, 24].',\n", + " artifact: [ 18, 20, 16, 15, 17, 19 ],\n", + " tool_call_id: 'toolu_019ygj3YuoU6qFzR66juXALp',\n", + " name: 'generateRandomInts',\n", + " additional_kwargs: {},\n", + " response_metadata: {}\n", + " },\n", + " lc_namespace: [ 'langchain_core', 'messages' ],\n", + " content: 'Successfully generated array of 6 random ints in [1, 24].',\n", + " name: 'generateRandomInts',\n", + " additional_kwargs: {},\n", + " response_metadata: {},\n", + " id: undefined,\n", + " tool_call_id: 'toolu_019ygj3YuoU6qFzR66juXALp',\n", + " artifact: [ 18, 20, 16, 15, 17, 19 ]\n", + "}\n" + ] + } + ], + "source": [ + "await generateRandomInts.invoke(aiMessage.tool_calls[0])" + ] + }, + { + "cell_type": "markdown", + "id": "ddef2690-70de-4542-ab20-2337f77f3e46", + "metadata": {}, + "source": [ + "If we just pass in the tool call args, we'll only get back the content:" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "f4a6c9a6-0ffc-4b0e-a59f-f3c3d69d824d", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Successfully generated array of 6 random ints in 
[1, 24].\n" + ] + } + ], + "source": [ + "await generateRandomInts.invoke(aiMessage.tool_calls[0][\"args\"])" + ] + }, + { + "cell_type": "markdown", + "id": "98d6443b-ff41-4d91-8523-b6274fc74ee5", + "metadata": {}, + "source": [ + "If we wanted to declaratively create a chain, we could do this:" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "eb55ec23-95a4-464e-b886-d9679bf3aaa2", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[\n", + " ToolMessage {\n", + " lc_serializable: true,\n", + " lc_kwargs: {\n", + " content: 'Successfully generated array of 1 random ints in [1, 5].',\n", + " artifact: [Array],\n", + " tool_call_id: 'toolu_01CskofJCQW8chkUzmVR1APU',\n", + " name: 'generateRandomInts',\n", + " additional_kwargs: {},\n", + " response_metadata: {}\n", + " },\n", + " lc_namespace: [ 'langchain_core', 'messages' ],\n", + " content: 'Successfully generated array of 1 random ints in [1, 5].',\n", + " name: 'generateRandomInts',\n", + " additional_kwargs: {},\n", + " response_metadata: {},\n", + " id: undefined,\n", + " tool_call_id: 'toolu_01CskofJCQW8chkUzmVR1APU',\n", + " artifact: [ 1 ]\n", + " }\n", + "]\n" + ] + } + ], + "source": [ + "const extractToolCalls = (aiMessage) => aiMessage.tool_calls;\n", + "\n", + "const chain = llmWithTools.pipe(extractToolCalls).pipe(generateRandomInts.map());\n", + "\n", + "await chain.invoke(\"give me a random number between 1 and 5\");" + ] + }, + { + "cell_type": "markdown", + "id": "54f74020", + "metadata": {}, + "source": [ + "## Related\n", + "\n", + "You've now seen how to return additional artifacts from a tool call.\n", + "\n", + "These guides may interest you next:\n", + "\n", + "- [Creating custom tools](/docs/how_to/custom_tools)\n", + "- [Building agents with LangGraph](https://langchain-ai.github.io/langgraphjs/)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "TypeScript", + "language": "typescript", + "name": "tslab" + }, + "language_info": { + "codemirror_mode": { + "mode": "typescript", + "name": "javascript", + "typescript": true + }, + "file_extension": ".ts", + "mimetype": "text/typescript", + "name": "typescript", + "version": "3.7.2" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/docs/core_docs/src/theme/ChatModelTabs.js b/docs/core_docs/src/theme/ChatModelTabs.js index 12135b131003..d09a6de082a9 100644 --- a/docs/core_docs/src/theme/ChatModelTabs.js +++ b/docs/core_docs/src/theme/ChatModelTabs.js @@ -26,7 +26,7 @@ function InstallationInfo({ children }) { const DEFAULTS = { openaiParams: `{\n model: "gpt-3.5-turbo",\n temperature: 0\n}`, - anthropicParams: `{\n model: "claude-3-sonnet-20240229",\n temperature: 0\n}`, + anthropicParams: `{\n model: "claude-3-5-sonnet-20240620",\n temperature: 0\n}`, fireworksParams: `{\n model: "accounts/fireworks/models/firefunction-v1",\n temperature: 0\n}`, mistralParams: `{\n model: "mistral-large-latest",\n temperature: 0\n}`, groqParams: `{\n model: "mixtral-8x7b-32768",\n temperature: 0\n}`,