diff --git a/docs/core_docs/docs/integrations/chat/google_vertex_ai.ipynb b/docs/core_docs/docs/integrations/chat/google_vertex_ai.ipynb
new file mode 100644
index 000000000000..68f96df67e16
--- /dev/null
+++ b/docs/core_docs/docs/integrations/chat/google_vertex_ai.ipynb
@@ -0,0 +1,526 @@
+{
+ "cells": [
+ {
+ "cell_type": "raw",
+ "id": "afaf8039",
+ "metadata": {
+ "vscode": {
+ "languageId": "raw"
+ }
+ },
+ "source": [
+ "---\n",
+ "sidebar_label: Google VertexAI\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e49f1e0d",
+ "metadata": {},
+ "source": [
+ "# ChatVertexAI\n",
+ "\n",
+ "This will help you getting started with `ChatVertexAI` [chat models](/docs/concepts/#chat-models). For detailed documentation of all `ChatVertexAI` features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain_google_vertexai.ChatVertexAI.html).\n",
+ "\n",
+ "## Overview\n",
+ "### Integration details\n",
+ "\n",
+ "LangChain.js supports Google Vertex AI chat models as an integration.\n",
+ "It supports two different methods of authentication based on whether you're running\n",
+ "in a Node environment or a web environment.\n",
+ "\n",
+ "| Class | Package | Local | Serializable | [PY support](https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm) | Package downloads | Package latest |\n",
+ "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
+ "| [ChatVertexAI](https://api.js.langchain.com/classes/langchain_google_vertexai.ChatVertexAI.html) | [@langchain/google-vertexai](https://api.js.langchain.com/modules/langchain_google_vertexai.html) | ❌ | ✅ | ✅ | ![NPM - Downloads](https://img.shields.io/npm/dm/@langchain/google-vertexai?style=flat-square&label=%20&) | ![NPM - Version](https://img.shields.io/npm/v/@langchain/google-vertexai?style=flat-square&label=%20&) |\n",
+ "\n",
+ "### Model features\n",
+ "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
+ "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
+ "| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | \n",
+ "\n",
+ "## Setup\n",
+ "\n",
+ "To access `ChatVertexAI` models you'll need to setup Google VertexAI in your Google Cloud Platform (GCP) account, save the credentials file, and install the `@langchain/google-vertexai` integration package.\n",
+ "\n",
+ "### Credentials\n",
+ "\n",
+ "Head to GCP and generate a credentials file. Once you've done this set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable:\n",
+ "\n",
+ "```bash\n",
+ "export GOOGLE_APPLICATION_CREDENTIALS=\"path/to/your/credentials.json\"\n",
+ "```\n",
+ "\n",
+ "If running in a web environment, you should set the `GOOGLE_VERTEX_AI_WEB_CREDENTIALS` environment variable as a JSON stringified object, and install the `@langchain/google-vertexai-web` package:\n",
+ "\n",
+ "```bash\n",
+ "GOOGLE_VERTEX_AI_WEB_CREDENTIALS={\"type\":\"service_account\",\"project_id\":\"YOUR_PROJECT-12345\",...}\n",
+ "```\n",
+ "\n",
+ "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n",
+ "\n",
+ "```bash\n",
+ "# export LANGCHAIN_TRACING_V2=\"true\"\n",
+ "# export LANGCHAIN_API_KEY=\"your-api-key\"\n",
+ "```\n",
+ "\n",
+ "### Installation\n",
+ "\n",
+ "The LangChain ChatVertexAI integration lives in the `@langchain/google-vertexai` package:\n",
+ "\n",
+ "```{=mdx}\n",
+ "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n",
+ "import Npm2Yarn from \"@theme/Npm2Yarn\";\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ " @langchain/google-vertexai\n",
+ "\n",
+ "\n",
+ "Or if using in a web environment:\n",
+ "\n",
+ "\n",
+ " @langchain/google-vertexai-web\n",
+ "\n",
+ "\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a38cde65-254d-4219-a441-068766c0d4b5",
+ "metadata": {},
+ "source": [
+ "## Instantiation\n",
+ "\n",
+ "Now we can instantiate our model object and generate chat completions:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import { ChatVertexAI } from \"@langchain/google-vertexai\"\n",
+ "// Uncomment the following line if you're running in a web environment:\n",
+ "// import { ChatVertexAI } from \"@langchain/google-vertexai-web\"\n",
+ "\n",
+ "const llm = new ChatVertexAI({\n",
+ " model: \"gemini-1.5-pro\",\n",
+ " temperature: 0,\n",
+ " maxRetries: 2,\n",
+ " authOptions: {\n",
+ " // ... auth options\n",
+ " }\n",
+ " // other params...\n",
+ "})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2b4f3e15",
+ "metadata": {},
+ "source": [
+ "## Invocation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "62e0dbc3",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "AIMessageChunk {\n",
+ " \"content\": \"J'adore programmer. \\n\",\n",
+ " \"additional_kwargs\": {},\n",
+ " \"response_metadata\": {},\n",
+ " \"tool_calls\": [],\n",
+ " \"tool_call_chunks\": [],\n",
+ " \"invalid_tool_calls\": [],\n",
+ " \"usage_metadata\": {\n",
+ " \"input_tokens\": 20,\n",
+ " \"output_tokens\": 7,\n",
+ " \"total_tokens\": 27\n",
+ " }\n",
+ "}\n"
+ ]
+ }
+ ],
+ "source": [
+ "const aiMsg = await llm.invoke([\n",
+ " [\n",
+ " \"system\",\n",
+ " \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
+ " ],\n",
+ " [\"human\", \"I love programming.\"],\n",
+ "])\n",
+ "aiMsg"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "J'adore programmer. \n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "console.log(aiMsg.content)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
+ "metadata": {},
+ "source": [
+ "## Chaining\n",
+ "\n",
+ "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "AIMessageChunk {\n",
+ " \"content\": \"Ich liebe das Programmieren. \\n\",\n",
+ " \"additional_kwargs\": {},\n",
+ " \"response_metadata\": {},\n",
+ " \"tool_calls\": [],\n",
+ " \"tool_call_chunks\": [],\n",
+ " \"invalid_tool_calls\": [],\n",
+ " \"usage_metadata\": {\n",
+ " \"input_tokens\": 15,\n",
+ " \"output_tokens\": 9,\n",
+ " \"total_tokens\": 24\n",
+ " }\n",
+ "}\n"
+ ]
+ }
+ ],
+ "source": [
+ "import { ChatPromptTemplate } from \"@langchain/core/prompts\"\n",
+ "\n",
+ "const prompt = ChatPromptTemplate.fromMessages(\n",
+ " [\n",
+ " [\n",
+ " \"system\",\n",
+ " \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
+ " ],\n",
+ " [\"human\", \"{input}\"],\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "const chain = prompt.pipe(llm);\n",
+ "await chain.invoke(\n",
+ " {\n",
+ " input_language: \"English\",\n",
+ " output_language: \"German\",\n",
+ " input: \"I love programming.\",\n",
+ " }\n",
+ ")"
+ ]
+ },
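+ {
+ "cell_type": "markdown",
+ "id": "c9a2b4e1",
+ "metadata": {},
+ "source": [
+ "## Streaming\n",
+ "\n",
+ "`ChatVertexAI` also supports streaming its output in multiple chunks for faster responses. Below is a minimal sketch using the `.stream()` method, reusing the `llm` instance and message format from the invocation example above:\n",
+ "\n",
+ "```typescript\n",
+ "const stream = await llm.stream([\n",
+ "  [\"system\", \"You are a helpful assistant that translates English to French. Translate the user sentence.\"],\n",
+ "  [\"human\", \"I love programming.\"],\n",
+ "]);\n",
+ "\n",
+ "// Each chunk is an AIMessageChunk; log the partial content as it arrives.\n",
+ "for await (const chunk of stream) {\n",
+ "  console.log(chunk.content);\n",
+ "}\n",
+ "```"
+ ]
+ },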
+ {
+ "cell_type": "markdown",
+ "id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
+ "metadata": {},
+ "source": [
+ "## Multimodal\n",
+ "\n",
+ "The Gemini API can process multimodal inputs. The example below demonstrates how to do this:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "5981e230",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ " The image shows a hot dog in a bun. The hot dog is grilled and has a red color. The bun is white and soft.\n"
+ ]
+ }
+ ],
+ "source": [
+ "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n",
+ "import { ChatVertexAI } from \"@langchain/google-vertexai\";\n",
+ "import fs from \"node:fs\";\n",
+ "\n",
+ "const llmForMultiModal = new ChatVertexAI({\n",
+ " model: \"gemini-pro-vision\",\n",
+ " temperature: 0.7,\n",
+ "});\n",
+ "\n",
+ "const image = fs.readFileSync(\"../../../../../examples/hotdog.jpg\").toString(\"base64\");\n",
+ "const promptForMultiModal = ChatPromptTemplate.fromMessages([\n",
+ " [\n",
+ " \"human\",\n",
+ " [\n",
+ " {\n",
+ " type: \"text\",\n",
+ " text: \"Describe the following image.\",\n",
+ " },\n",
+ " {\n",
+ " type: \"image_url\",\n",
+ " image_url: \"data:image/png;base64,{image_base64}\",\n",
+ " },\n",
+ " ],\n",
+ " ],\n",
+ "]);\n",
+ "\n",
+ "const multiModalRes = await promptForMultiModal.pipe(llmForMultiModal).invoke({\n",
+ " image_base64: image,\n",
+ "});\n",
+ "\n",
+ "console.log(multiModalRes.content);"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "aa6a51dd",
+ "metadata": {},
+ "source": [
+ "## Tool calling\n",
+ "\n",
+ "`ChatVertexAI` also supports calling the model with a tool:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "bc64485f",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[\n",
+ " {\n",
+ " name: 'calculator',\n",
+ " args: { number2: 81623836, operation: 'multiply', number1: 1628253239 },\n",
+ " id: 'a219d75748f445ab8c7ca8b516898e18',\n",
+ " type: 'tool_call'\n",
+ " }\n",
+ "]\n"
+ ]
+ }
+ ],
+ "source": [
+ "import { ChatVertexAI } from \"@langchain/google-vertexai\";\n",
+ "import { zodToGeminiParameters } from \"@langchain/google-vertexai/utils\";\n",
+ "import { z } from \"zod\";\n",
+ "// Or, if using the web entrypoint:\n",
+ "// import { ChatVertexAI } from \"@langchain/google-vertexai-web\";\n",
+ "\n",
+ "const calculatorSchema = z.object({\n",
+ " operation: z\n",
+ " .enum([\"add\", \"subtract\", \"multiply\", \"divide\"])\n",
+ " .describe(\"The type of operation to execute\"),\n",
+ " number1: z.number().describe(\"The first number to operate on.\"),\n",
+ " number2: z.number().describe(\"The second number to operate on.\"),\n",
+ "});\n",
+ "\n",
+ "const geminiCalculatorTool = {\n",
+ " functionDeclarations: [\n",
+ " {\n",
+ " name: \"calculator\",\n",
+ " description: \"A simple calculator tool\",\n",
+ " parameters: zodToGeminiParameters(calculatorSchema),\n",
+ " },\n",
+ " ],\n",
+ "};\n",
+ "\n",
+ "const llmWithTool = new ChatVertexAI({\n",
+ " temperature: 0.7,\n",
+ " model: \"gemini-1.5-flash-001\",\n",
+ "}).bindTools([geminiCalculatorTool]);\n",
+ "\n",
+ "const toolRes = await llmWithTool.invoke(\"What is 1628253239 times 81623836?\");\n",
+ "console.dir(toolRes.tool_calls, { depth: null });"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "46ce27ae",
+ "metadata": {},
+ "source": [
+ "### `withStructuredOutput`\n",
+ "\n",
+ "Alternatively, you can also use the `withStructuredOutput` method:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "012a9afc",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{ operation: 'multiply', number1: 1628253239, number2: 81623836 }\n"
+ ]
+ }
+ ],
+ "source": [
+ "import { ChatVertexAI } from \"@langchain/google-vertexai\";\n",
+ "import { z } from \"zod\";\n",
+ "// Or, if using the web entrypoint:\n",
+ "// import { ChatVertexAI } from \"@langchain/google-vertexai-web\";\n",
+ "\n",
+ "const calculatorSchemaForWSO = z.object({\n",
+ " operation: z\n",
+ " .enum([\"add\", \"subtract\", \"multiply\", \"divide\"])\n",
+ " .describe(\"The type of operation to execute\"),\n",
+ " number1: z.number().describe(\"The first number to operate on.\"),\n",
+ " number2: z.number().describe(\"The second number to operate on.\"),\n",
+ "});\n",
+ "\n",
+ "const llmWithStructuredOutput = new ChatVertexAI({\n",
+ " temperature: 0.7,\n",
+ " model: \"gemini-1.5-flash-001\",\n",
+ "}).withStructuredOutput(calculatorSchemaForWSO, {\n",
+ " name: \"calculator\"\n",
+ "});\n",
+ "\n",
+ "const wsoRes = await llmWithStructuredOutput.invoke(\"What is 1628253239 times 81623836?\");\n",
+ "console.log(wsoRes);"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3b306e5b",
+ "metadata": {},
+ "source": [
+ "## VertexAI tools agent\n",
+ "\n",
+ "The Gemini family of models not only support tool calling, but can also be used in the Tool Calling agent.\n",
+ "Here's an example:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "0391002b",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "The weather in Paris, France is 28 degrees Celsius. \n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "import { z } from \"zod\";\n",
+ "\n",
+ "import { tool } from \"@langchain/core/tools\";\n",
+ "import { AgentExecutor, createToolCallingAgent } from \"langchain/agents\";\n",
+ "\n",
+ "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n",
+ "import { ChatVertexAI } from \"@langchain/google-vertexai\";\n",
+ "// Uncomment this if you're running inside a web/edge environment.\n",
+ "// import { ChatVertexAI } from \"@langchain/google-vertexai-web\";\n",
+ "\n",
+ "const llmAgent = new ChatVertexAI({\n",
+ " temperature: 0,\n",
+ " model: \"gemini-1.5-pro\",\n",
+ "});\n",
+ "\n",
+ "// Prompt template must have \"input\" and \"agent_scratchpad input variables\"\n",
+ "const agentPrompt = ChatPromptTemplate.fromMessages([\n",
+ " [\"system\", \"You are a helpful assistant\"],\n",
+ " [\"placeholder\", \"{chat_history}\"],\n",
+ " [\"human\", \"{input}\"],\n",
+ " [\"placeholder\", \"{agent_scratchpad}\"],\n",
+ "]);\n",
+ "\n",
+ "// Mocked tool\n",
+ "const currentWeatherTool = tool(async () => \"28 °C\", {\n",
+ " name: \"get_current_weather\",\n",
+ " description: \"Get the current weather in a given location\",\n",
+ " schema: z.object({\n",
+ " location: z.string().describe(\"The city and state, e.g. San Francisco, CA\"),\n",
+ " }),\n",
+ "});\n",
+ "\n",
+ "const agent = await createToolCallingAgent({\n",
+ " llm: llmAgent,\n",
+ " tools: [currentWeatherTool],\n",
+ " prompt: agentPrompt,\n",
+ "});\n",
+ "\n",
+ "const agentExecutor = new AgentExecutor({\n",
+ " agent,\n",
+ " tools: [currentWeatherTool],\n",
+ "});\n",
+ "\n",
+ "const input = \"What's the weather like in Paris?\";\n",
+ "const agentRes = await agentExecutor.invoke({ input });\n",
+ "\n",
+ "console.log(agentRes.output);"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
+ "metadata": {},
+ "source": [
+ "## API reference\n",
+ "\n",
+ "For detailed documentation of all ChatVertexAI features and configurations head to the API reference: https://api.js.langchain.com/classes/langchain_google_vertexai.ChatVertexAI.html"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "TypeScript",
+ "language": "typescript",
+ "name": "tslab"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "mode": "typescript",
+ "name": "javascript",
+ "typescript": true
+ },
+ "file_extension": ".ts",
+ "mimetype": "text/typescript",
+ "name": "typescript",
+ "version": "3.7.2"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/core_docs/docs/integrations/chat/google_vertex_ai.mdx b/docs/core_docs/docs/integrations/chat/google_vertex_ai.mdx
deleted file mode 100644
index ce6f8f0d1d23..000000000000
--- a/docs/core_docs/docs/integrations/chat/google_vertex_ai.mdx
+++ /dev/null
@@ -1,150 +0,0 @@
----
-sidebar_label: Google Vertex AI
-keywords: [gemini, gemini-pro, ChatVertexAI, vertex]
----
-
-import CodeBlock from "@theme/CodeBlock";
-
-# ChatVertexAI
-
-LangChain.js supports Google Vertex AI chat models as an integration.
-It supports two different methods of authentication based on whether you're running
-in a Node environment or a web environment.
-
-## Setup
-
-### Node
-
-To call Vertex AI models in Node, you'll need to install the `@langchain/google-vertexai` package:
-
-import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
-
-<IntegrationInstallTooltip></IntegrationInstallTooltip>
-
-```bash npm2yarn
-npm install @langchain/google-vertexai
-```
-
-import UnifiedModelParamsTooltip from "@mdx_components/unified_model_params_tooltip.mdx";
-
-<UnifiedModelParamsTooltip></UnifiedModelParamsTooltip>
-
-You should make sure the Vertex AI API is
-enabled for the relevant project and that you've authenticated to
-Google Cloud using one of these methods:
-
-- You are logged into an account (using `gcloud auth application-default login`)
- permitted to that project.
-- You are running on a machine using a service account that is permitted
- to the project.
-- You have downloaded the credentials for a service account that is permitted
- to the project and set the `GOOGLE_APPLICATION_CREDENTIALS` environment
- variable to the path of this file.
-
-<IntegrationInstallTooltip></IntegrationInstallTooltip>
-
-```bash npm2yarn
-npm install @langchain/google-vertexai
-```
-
-### Web
-
-To call Vertex AI models in web environments (like Edge functions), you'll need to install
-the `@langchain/google-vertexai-web` package:
-
-```bash npm2yarn
-npm install @langchain/google-vertexai-web
-```
-
-Then, you'll need to add your service account credentials directly as a `GOOGLE_VERTEX_AI_WEB_CREDENTIALS` environment variable:
-
-```
-GOOGLE_VERTEX_AI_WEB_CREDENTIALS={"type":"service_account","project_id":"YOUR_PROJECT-12345",...}
-```
-
-Lastly, you may also pass your credentials directly in code like this:
-
-```typescript
-import { ChatVertexAI } from "@langchain/google-vertexai-web";
-
-const model = new ChatVertexAI({
- authOptions: {
- credentials: {"type":"service_account","project_id":"YOUR_PROJECT-12345",...},
- },
-});
-```
-
-## Usage
-
-The entire family of `gemini` models are available by specifying the `modelName` parameter.
-
-For example:
-
-import ChatVertexAI from "@examples/models/chat/integration_googlevertexai.ts";
-
-<CodeBlock language="typescript">{ChatVertexAI}</CodeBlock>
-
-:::tip
-See the LangSmith trace for the example above [here](https://smith.langchain.com/public/9403290d-1ca6-41e5-819c-f3ec233194c5/r).
-:::
-
-## Multimodal
-
-The Gemini API can process multimodal inputs. The example below demonstrates how to do this:
-
-import MultiModalVertexAI from "@examples/models/chat/integration_googlevertexai-multimodal.ts";
-
-<CodeBlock language="typescript">{MultiModalVertexAI}</CodeBlock>
-
-:::tip
-See the LangSmith trace for the example above [here](https://smith.langchain.com/public/4cb2707d-bcf8-417e-8965-310b3045eb62/r).
-:::
-
-### Streaming
-
-`ChatVertexAI` also supports streaming in multiple chunks for faster responses:
-
-import ChatVertexAIStreaming from "@examples/models/chat/integration_googlevertexai-streaming.ts";
-
-<CodeBlock language="typescript">{ChatVertexAIStreaming}</CodeBlock>
-
-:::tip
-See the LangSmith trace for the example above [here](https://smith.langchain.com/public/011c26dc-b7db-4fad-b0f2-3653f41a7667/r).
-:::
-
-### Tool calling
-
-`ChatVertexAI` also supports calling the model with a tool:
-
-import ChatVertexAITool from "@examples/models/chat/integration_googlevertexai-tools.ts";
-
-<CodeBlock language="typescript">{ChatVertexAITool}</CodeBlock>
-
-:::tip
-See the LangSmith trace for the example above [here](https://smith.langchain.com/public/e6714fb3-ef24-447c-810d-7ff2c80c7db4/r).
-:::
-
-### `withStructuredOutput`
-
-Alternatively, you can also use the `withStructuredOutput` method:
-
-import ChatVertexAIWSO from "@examples/models/chat/integration_googlevertexai-wso.ts";
-
-<CodeBlock language="typescript">{ChatVertexAIWSO}</CodeBlock>
-
-:::tip
-See the LangSmith trace for the example above [here](https://smith.langchain.com/public/d7b9860a-a761-4f76-ba57-195759eb38e7/r).
-:::
-
-### VertexAI tools agent
-
-The Gemini family of models not only support tool calling, but can also be used in the Tool Calling agent.
-Here's an example:
-
-import AgentsExample from "@examples/models/chat/chat_vertexai_agents.ts";
-
-<CodeBlock language="typescript">{AgentsExample}</CodeBlock>
-
-:::tip
-See the LangSmith trace for the agent example above [here](https://smith.langchain.com/public/5615ee35-ba76-433b-8639-9b321cb6d4bf/r).
-:::