diff --git a/docs/core_docs/docs/integrations/chat/groq.ipynb b/docs/core_docs/docs/integrations/chat/groq.ipynb
new file mode 100644
index 000000000000..342a48ce48a8
--- /dev/null
+++ b/docs/core_docs/docs/integrations/chat/groq.ipynb
@@ -0,0 +1,577 @@
+{
+ "cells": [
+ {
+ "cell_type": "raw",
+ "id": "afaf8039",
+ "metadata": {
+ "vscode": {
+ "languageId": "raw"
+ }
+ },
+ "source": [
+ "---\n",
+ "sidebar_label: Groq\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e49f1e0d",
+ "metadata": {},
+ "source": [
+ "# ChatGroq\n",
+ "\n",
+ "This will help you getting started with ChatGroq [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatGroq features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain_groq.ChatGroq.html).\n",
+ "\n",
+ "## Overview\n",
+ "### Integration details\n",
+ "\n",
+ "| Class | Package | Local | Serializable | [PY support](https://python.langchain.com/docs/integrations/chat/groq) | Package downloads | Package latest |\n",
+ "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
+ "| [ChatGroq](https://api.js.langchain.com/classes/langchain_groq.ChatGroq.html) | [@langchain/groq](https://api.js.langchain.com/modules/langchain_groq.html) | ❌ | ❌ | ✅ | ![NPM - Downloads](https://img.shields.io/npm/dm/@langchain/groq?style=flat-square&label=%20&) | ![NPM - Version](https://img.shields.io/npm/v/@langchain/groq?style=flat-square&label=%20&) |\n",
+ "\n",
+ "### Model features\n",
+ "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
+ "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
+ "| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | \n",
+ "\n",
+ "## Setup\n",
+ "\n",
+ "To access ChatGroq models you'll need to create a ChatGroq account, get an API key, and install the `@langchain/groq` integration package.\n",
+ "\n",
+ "### Credentials\n",
+ "\n",
+ "In order to use the Groq API you'll need an API key. You can sign up for a Groq account and create an API key [here](https://wow.groq.com/).\n",
+ "Then, you can set the API key as an environment variable in your terminal:\n",
+ "\n",
+ "```bash\n",
+ "export GROQ_API_KEY=\"your-api-key\"\n",
+ "```\n",
+ "\n",
+ "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n",
+ "\n",
+ "```bash\n",
+ "# export LANGCHAIN_TRACING_V2=\"true\"\n",
+ "# export LANGCHAIN_API_KEY=\"your-api-key\"\n",
+ "```\n",
+ "\n",
+ "### Installation\n",
+ "\n",
+ "The LangChain ChatGroq integration lives in the `@langchain/groq` package:\n",
+ "\n",
+ "```{=mdx}\n",
+ "\n",
+ "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n",
+ "import Npm2Yarn from \"@theme/Npm2Yarn\";\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ " @langchain/groq\n",
+ "\n",
+ "\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a38cde65-254d-4219-a441-068766c0d4b5",
+ "metadata": {},
+ "source": [
+ "## Instantiation\n",
+ "\n",
+ "Now we can instantiate our model object and generate chat completions:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import { ChatGroq } from \"@langchain/groq\" \n",
+ "\n",
+ "const llm = new ChatGroq({\n",
+ " model: \"mixtral-8x7b-32768\",\n",
+ " temperature: 0,\n",
+ " maxTokens: undefined,\n",
+ " maxRetries: 2,\n",
+ " // other params...\n",
+ "})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2b4f3e15",
+ "metadata": {},
+ "source": [
+ "## Invocation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "62e0dbc3",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "AIMessage {\n",
+ " \"content\": \"I enjoy programming. (The French translation is: \\\"J'aime programmer.\\\")\\n\\nNote: I chose to translate \\\"I love programming\\\" as \\\"J'aime programmer\\\" instead of \\\"Je suis amoureux de programmer\\\" because the latter has a romantic connotation that is not present in the original English sentence.\",\n",
+ " \"additional_kwargs\": {},\n",
+ " \"response_metadata\": {\n",
+ " \"tokenUsage\": {\n",
+ " \"completionTokens\": 73,\n",
+ " \"promptTokens\": 31,\n",
+ " \"totalTokens\": 104\n",
+ " },\n",
+ " \"finish_reason\": \"stop\"\n",
+ " },\n",
+ " \"tool_calls\": [],\n",
+ " \"invalid_tool_calls\": []\n",
+ "}\n"
+ ]
+ }
+ ],
+ "source": [
+ "const aiMsg = await llm.invoke([\n",
+ " [\n",
+ " \"system\",\n",
+ " \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
+ " ],\n",
+ " [\"human\", \"I love programming.\"],\n",
+ "])\n",
+ "aiMsg"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "I enjoy programming. (The French translation is: \"J'aime programmer.\")\n",
+ "\n",
+ "Note: I chose to translate \"I love programming\" as \"J'aime programmer\" instead of \"Je suis amoureux de programmer\" because the latter has a romantic connotation that is not present in the original English sentence.\n"
+ ]
+ }
+ ],
+ "source": [
+ "console.log(aiMsg.content)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
+ "metadata": {},
+ "source": [
+ "## Chaining\n",
+ "\n",
+ "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "AIMessage {\n",
+ " \"content\": \"That's great! I can help you translate English phrases related to programming into German.\\n\\n\\\"I love programming\\\" can be translated to German as \\\"Ich liebe Programmieren\\\".\\n\\nHere are some more programming-related phrases translated into German:\\n\\n* \\\"Programming language\\\" = \\\"Programmiersprache\\\"\\n* \\\"Code\\\" = \\\"Code\\\"\\n* \\\"Variable\\\" = \\\"Variable\\\"\\n* \\\"Function\\\" = \\\"Funktion\\\"\\n* \\\"Array\\\" = \\\"Array\\\"\\n* \\\"Object-oriented programming\\\" = \\\"Objektorientierte Programmierung\\\"\\n* \\\"Algorithm\\\" = \\\"Algorithmus\\\"\\n* \\\"Data structure\\\" = \\\"Datenstruktur\\\"\\n* \\\"Debugging\\\" = \\\"Debuggen\\\"\\n* \\\"Compile\\\" = \\\"Kompilieren\\\"\\n* \\\"Link\\\" = \\\"Verknüpfen\\\"\\n* \\\"Run\\\" = \\\"Ausführen\\\"\\n* \\\"Test\\\" = \\\"Testen\\\"\\n* \\\"Deploy\\\" = \\\"Bereitstellen\\\"\\n* \\\"Version control\\\" = \\\"Versionskontrolle\\\"\\n* \\\"Open source\\\" = \\\"Open Source\\\"\\n* \\\"Software development\\\" = \\\"Softwareentwicklung\\\"\\n* \\\"Agile methodology\\\" = \\\"Agile Methodik\\\"\\n* \\\"DevOps\\\" = \\\"DevOps\\\"\\n* \\\"Cloud computing\\\" = \\\"Cloud Computing\\\"\\n\\nI hope this helps! Let me know if you have any other questions or if you need further translations.\",\n",
+ " \"additional_kwargs\": {},\n",
+ " \"response_metadata\": {\n",
+ " \"tokenUsage\": {\n",
+ " \"completionTokens\": 327,\n",
+ " \"promptTokens\": 25,\n",
+ " \"totalTokens\": 352\n",
+ " },\n",
+ " \"finish_reason\": \"stop\"\n",
+ " },\n",
+ " \"tool_calls\": [],\n",
+ " \"invalid_tool_calls\": []\n",
+ "}\n"
+ ]
+ }
+ ],
+ "source": [
+ "import { ChatPromptTemplate } from \"@langchain/core/prompts\"\n",
+ "\n",
+ "const prompt = ChatPromptTemplate.fromMessages(\n",
+ " [\n",
+ " [\n",
+ " \"system\",\n",
+ " \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
+ " ],\n",
+ " [\"human\", \"{input}\"],\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "const chain = prompt.pipe(llm);\n",
+ "await chain.invoke(\n",
+ " {\n",
+ " input_language: \"English\",\n",
+ " output_language: \"German\",\n",
+ " input: \"I love programming.\",\n",
+ " }\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
+ "metadata": {},
+ "source": [
+ "## Tool calling\n",
+ "\n",
+ "Groq chat models support calling multiple functions to get all required data to answer a question.\n",
+ "Here's an example:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "aa42d55a",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[\n",
+ " {\n",
+ " name: 'get_current_weather',\n",
+ " args: { location: 'San Francisco', unit: 'fahrenheit' },\n",
+ " type: 'tool_call',\n",
+ " id: 'call_1mpy'\n",
+ " }\n",
+ "]\n"
+ ]
+ }
+ ],
+ "source": [
+ "import { tool } from \"@langchain/core/tools\";\n",
+ "import { ChatGroq } from \"@langchain/groq\";\n",
+ "import { z } from \"zod\";\n",
+ "\n",
+ "// Mocked out function, could be a database/API call in production\n",
+ "const getCurrentWeatherTool = tool((input) => {\n",
+ " if (input.location.toLowerCase().includes(\"tokyo\")) {\n",
+ " return JSON.stringify({ location: input.location, temperature: \"10\", unit: \"celsius\" });\n",
+ " } else if (input.location.toLowerCase().includes(\"san francisco\")) {\n",
+ " return JSON.stringify({\n",
+ " location: input.location,\n",
+ " temperature: \"72\",\n",
+ " unit: \"fahrenheit\",\n",
+ " });\n",
+ " } else {\n",
+ " return JSON.stringify({ location: input.location, temperature: \"22\", unit: \"celsius\" });\n",
+ " }\n",
+ "}, {\n",
+ " name: \"get_current_weather\",\n",
+ " description: \"Get the current weather in a given location\",\n",
+ " schema: z.object({\n",
+ " location: z.string().describe(\"The city and state, e.g. San Francisco, CA\"),\n",
+ " unit: z.enum([\"celsius\", \"fahrenheit\"]).optional(),\n",
+ " }),\n",
+ "})\n",
+ "\n",
+ "// Bind function to the model as a tool\n",
+ "const llmWithTools = new ChatGroq({\n",
+ " model: \"mixtral-8x7b-32768\",\n",
+ " maxTokens: 128,\n",
+ "}).bindTools([getCurrentWeatherTool], {\n",
+ " tool_choice: \"auto\",\n",
+ "});\n",
+ "\n",
+ "const resWithTools = await llmWithTools.invoke([\n",
+ " [\"human\", \"What's the weather like in San Francisco?\"],\n",
+ "]);\n",
+ "\n",
+ "console.dir(resWithTools.tool_calls, { depth: null });"
+ ]
+ },
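+ {
+ "cell_type": "markdown",
+ "id": "7b1ad5c3",
+ "metadata": {},
+ "source": [
+ "Once the model responds with a tool call, you can execute the requested tool and pass its output back so the model can compose a final answer. Below is a minimal sketch of that round trip, reusing the `getCurrentWeatherTool` and `llmWithTools` defined above, and assuming a recent `@langchain/core` where invoking a tool with a tool call object returns a `ToolMessage`:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8c2be6d4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import { HumanMessage } from \"@langchain/core/messages\";\n",
+ "\n",
+ "// Execute each tool call the model requested; invoking a tool with a\n",
+ "// tool call object yields a ToolMessage carrying the tool's output\n",
+ "const toolMessages = await Promise.all(\n",
+ "  (resWithTools.tool_calls ?? []).map((toolCall) => getCurrentWeatherTool.invoke(toolCall))\n",
+ ");\n",
+ "\n",
+ "// Ask the model again with the original question, its tool-call message,\n",
+ "// and the tool results appended to the conversation\n",
+ "const finalResponse = await llmWithTools.invoke([\n",
+ "  new HumanMessage(\"What's the weather like in San Francisco?\"),\n",
+ "  resWithTools,\n",
+ "  ...toolMessages,\n",
+ "]);\n",
+ "\n",
+ "console.log(finalResponse.content);"
+ ]
+ },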
+ {
+ "cell_type": "markdown",
+ "id": "ae6d3948",
+ "metadata": {},
+ "source": [
+ "### `.withStructuredOutput({ ... })`\n",
+ "\n",
+ "```{=mdx}\n",
+ "\n",
+ ":::info\n",
+ "The `.withStructuredOutput` method is in beta. It is actively being worked on, so the API may change.\n",
+ ":::\n",
+ "\n",
+ "```\n",
+ "\n",
+ "You can also use the `.withStructuredOutput({ ... })` method to coerce `ChatGroq` into returning a structured output.\n",
+ "\n",
+ "The method allows for passing in either a Zod object, or a valid JSON schema (like what is returned from [`zodToJsonSchema`](https://www.npmjs.com/package/zod-to-json-schema)).\n",
+ "\n",
+ "Using the method is simple. Just define your LLM and call `.withStructuredOutput({ ... })` on it, passing the desired schema.\n",
+ "\n",
+ "Here is an example using a Zod schema and the `functionCalling` mode (default mode):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "1ad6c77d",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{ operation: 'add', number1: 2, number2: 2 }\n"
+ ]
+ }
+ ],
+ "source": [
+ "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n",
+ "import { ChatGroq } from \"@langchain/groq\";\n",
+ "import { z } from \"zod\";\n",
+ "\n",
+ "const calculatorSchema = z.object({\n",
+ " operation: z.enum([\"add\", \"subtract\", \"multiply\", \"divide\"]),\n",
+ " number1: z.number(),\n",
+ " number2: z.number(),\n",
+ "});\n",
+ "\n",
+ "const llmForWSO = new ChatGroq({\n",
+ " temperature: 0,\n",
+ " model: \"mixtral-8x7b-32768\",\n",
+ "});\n",
+ "const modelWithStructuredOutput = llmForWSO.withStructuredOutput(calculatorSchema);\n",
+ "\n",
+ "const promptWSO = ChatPromptTemplate.fromMessages([\n",
+ " [\"system\", \"You are VERY bad at math and must always use a calculator.\"],\n",
+ " [\"human\", \"Please help me!! What is 2 + 2?\"],\n",
+ "]);\n",
+ "const chainWSO = promptWSO.pipe(modelWithStructuredOutput);\n",
+ "const resultWSO = await chainWSO.invoke({});\n",
+ "console.log(resultWSO);"
+ ]
+ },
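+ {
+ "cell_type": "markdown",
+ "id": "9d3cf7e5",
+ "metadata": {},
+ "source": [
+ "The method also accepts a plain JSON schema in place of a Zod object. Here is a minimal sketch that converts the Zod schema above using [`zodToJsonSchema`](https://www.npmjs.com/package/zod-to-json-schema), assuming the `zod-to-json-schema` package is installed:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ae4d08f6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import { zodToJsonSchema } from \"zod-to-json-schema\";\n",
+ "\n",
+ "// Convert the Zod schema from the previous cell into a JSON schema object\n",
+ "const calculatorJsonSchema = zodToJsonSchema(calculatorSchema);\n",
+ "\n",
+ "const modelWithJsonSchema = llmForWSO.withStructuredOutput(calculatorJsonSchema, {\n",
+ "  name: \"calculator\",\n",
+ "});\n",
+ "\n",
+ "const chainWithJsonSchema = promptWSO.pipe(modelWithJsonSchema);\n",
+ "console.log(await chainWithJsonSchema.invoke({}));"
+ ]
+ },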
+ {
+ "cell_type": "markdown",
+ "id": "24757550",
+ "metadata": {},
+ "source": [
+ "You can also specify 'includeRaw' to return the parsed and raw output in the result."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "1d13ed6f",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{\n",
+ " raw: AIMessage {\n",
+ " lc_serializable: true,\n",
+ " lc_kwargs: {\n",
+ " content: '',\n",
+ " additional_kwargs: {\n",
+ " tool_calls: [\n",
+ " {\n",
+ " id: 'call_7z1y',\n",
+ " type: 'function',\n",
+ " function: {\n",
+ " name: 'calculator',\n",
+ " arguments: '{\"number1\":2,\"number2\":2,\"operation\":\"add\"}'\n",
+ " }\n",
+ " }\n",
+ " ]\n",
+ " },\n",
+ " tool_calls: [\n",
+ " {\n",
+ " name: 'calculator',\n",
+ " args: { number1: 2, number2: 2, operation: 'add' },\n",
+ " type: 'tool_call',\n",
+ " id: 'call_7z1y'\n",
+ " }\n",
+ " ],\n",
+ " invalid_tool_calls: [],\n",
+ " response_metadata: {}\n",
+ " },\n",
+ " lc_namespace: [ 'langchain_core', 'messages' ],\n",
+ " content: '',\n",
+ " name: undefined,\n",
+ " additional_kwargs: {\n",
+ " tool_calls: [\n",
+ " {\n",
+ " id: 'call_7z1y',\n",
+ " type: 'function',\n",
+ " function: {\n",
+ " name: 'calculator',\n",
+ " arguments: '{\"number1\":2,\"number2\":2,\"operation\":\"add\"}'\n",
+ " }\n",
+ " }\n",
+ " ]\n",
+ " },\n",
+ " response_metadata: {\n",
+ " tokenUsage: { completionTokens: 111, promptTokens: 1257, totalTokens: 1368 },\n",
+ " finish_reason: 'tool_calls'\n",
+ " },\n",
+ " id: undefined,\n",
+ " tool_calls: [\n",
+ " {\n",
+ " name: 'calculator',\n",
+ " args: { number1: 2, number2: 2, operation: 'add' },\n",
+ " type: 'tool_call',\n",
+ " id: 'call_7z1y'\n",
+ " }\n",
+ " ],\n",
+ " invalid_tool_calls: [],\n",
+ " usage_metadata: undefined\n",
+ " },\n",
+ " parsed: { operation: 'add', number1: 2, number2: 2 }\n",
+ "}\n"
+ ]
+ }
+ ],
+ "source": [
+ "const includeRawModel = llmForWSO.withStructuredOutput(calculatorSchema, {\n",
+ " name: \"calculator\",\n",
+ " includeRaw: true,\n",
+ "});\n",
+ "\n",
+ "const includeRawChain = promptWSO.pipe(includeRawModel);\n",
+ "const includeRawResult = await includeRawChain.invoke(\"\");\n",
+ "console.dir(includeRawResult, { depth: null });"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7944c7c3",
+ "metadata": {},
+ "source": [
+ "## Streaming\n",
+ "\n",
+ "Groq's API also supports streaming token responses. The example below demonstrates how to use this feature."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "4ae5fb48",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "stream: \n",
+ "stream: Hello\n",
+ "stream: Hello!\n",
+ "stream: Hello! I\n",
+ "stream: Hello! I'\n",
+ "stream: Hello! I'm\n",
+ "stream: Hello! I'm here\n",
+ "stream: Hello! I'm here to\n",
+ "stream: Hello! I'm here to help\n",
+ "stream: Hello! I'm here to help you\n",
+ "stream: Hello! I'm here to help you.\n",
+ "stream: Hello! I'm here to help you. Is\n",
+ "stream: Hello! I'm here to help you. Is there\n",
+ "stream: Hello! I'm here to help you. Is there something\n",
+ "stream: Hello! I'm here to help you. Is there something you\n",
+ "stream: Hello! I'm here to help you. Is there something you would\n",
+ "stream: Hello! I'm here to help you. Is there something you would like\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you need\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with?\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free to\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free to ask\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free to ask me\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free to ask me anything\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free to ask me anything.\n",
+ "stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free to ask me anything.\n"
+ ]
+ }
+ ],
+ "source": [
+ "import { ChatGroq } from \"@langchain/groq\";\n",
+ "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n",
+ "import { StringOutputParser } from \"@langchain/core/output_parsers\";\n",
+ "\n",
+ "const llmForStreaming = new ChatGroq({\n",
+ " apiKey: process.env.GROQ_API_KEY,\n",
+ "});\n",
+ "const promptForStreaming = ChatPromptTemplate.fromMessages([\n",
+ " [\"system\", \"You are a helpful assistant\"],\n",
+ " [\"human\", \"{input}\"],\n",
+ "]);\n",
+ "const outputParserForStreaming = new StringOutputParser();\n",
+ "const chainForStreaming = promptForStreaming.pipe(llmForStreaming).pipe(outputParserForStreaming);\n",
+ "const streamRes = await chainForStreaming.stream({\n",
+ " input: \"Hello\",\n",
+ "});\n",
+ "let streamedRes = \"\";\n",
+ "for await (const item of streamRes) {\n",
+ " streamedRes += item;\n",
+ " console.log(\"stream:\", streamedRes);\n",
+ "}"
+ ]
+ },
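+ {
+ "cell_type": "markdown",
+ "id": "bf5e19a7",
+ "metadata": {},
+ "source": [
+ "You can also call `.stream()` directly on the model without composing a chain. Each yielded chunk is an `AIMessageChunk` whose `content` holds the newly generated text. A minimal sketch, reusing `llmForStreaming` from above:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c06f2ab8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "const directStream = await llmForStreaming.stream(\"Say hello in one short sentence.\");\n",
+ "\n",
+ "for await (const chunk of directStream) {\n",
+ "  // Each chunk is an AIMessageChunk; content holds the incremental text\n",
+ "  console.log(chunk.content);\n",
+ "}"
+ ]
+ },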
+ {
+ "cell_type": "markdown",
+ "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
+ "metadata": {},
+ "source": [
+ "## API reference\n",
+ "\n",
+ "For detailed documentation of all ChatGroq features and configurations head to the API reference: https://api.js.langchain.com/classes/langchain_groq.ChatGroq.html"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "TypeScript",
+ "language": "typescript",
+ "name": "tslab"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "mode": "typescript",
+ "name": "javascript",
+ "typescript": true
+ },
+ "file_extension": ".ts",
+ "mimetype": "text/typescript",
+ "name": "typescript",
+ "version": "3.7.2"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/core_docs/docs/integrations/chat/groq.mdx b/docs/core_docs/docs/integrations/chat/groq.mdx
deleted file mode 100644
index 794a06b75a4e..000000000000
--- a/docs/core_docs/docs/integrations/chat/groq.mdx
+++ /dev/null
@@ -1,74 +0,0 @@
----
-sidebar_label: Groq
----
-
-import CodeBlock from "@theme/CodeBlock";
-
-# ChatGroq
-
-## Setup
-
-In order to use the Groq API you'll need an API key. You can sign up for a Groq account and create an API key [here](https://wow.groq.com/).
-
-You'll first need to install the [`@langchain/groq`](https://www.npmjs.com/package/@langchain/groq) package:
-
-import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
-
-<IntegrationInstallTooltip></IntegrationInstallTooltip>
-
-```bash npm2yarn
-npm install @langchain/groq
-```
-
-import UnifiedModelParamsTooltip from "@mdx_components/unified_model_params_tooltip.mdx";
-
-<UnifiedModelParamsTooltip></UnifiedModelParamsTooltip>
-
-## Usage
-
-import ChatGroqExample from "@examples/models/chat/chat_groq.ts";
-
-<CodeBlock language="typescript">{ChatGroqExample}</CodeBlock>
-
-:::info
-You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/2ba59207-1383-4e42-b6a6-c1ddcfcd5710/r)
-:::
-
-## Tool calling
-
-Groq chat models support calling multiple functions to get all required data to answer a question.
-Here's an example:
-
-import GroqTools from "@examples/models/chat/integration_groq_tool_calls.ts";
-
-<CodeBlock language="typescript">{GroqTools}</CodeBlock>
-
-### `.withStructuredOutput({ ... })`
-
-:::info
-The `.withStructuredOutput` method is in beta. It is actively being worked on, so the API may change.
-:::
-
-You can also use the `.withStructuredOutput({ ... })` method to coerce `ChatGroq` into returning a structured output.
-
-The method allows for passing in either a Zod object, or a valid JSON schema (like what is returned from [`zodToJsonSchema`](https://www.npmjs.com/package/zod-to-json-schema)).
-
-Using the method is simple. Just define your LLM and call `.withStructuredOutput({ ... })` on it, passing the desired schema.
-
-Here is an example using a Zod schema and the `functionCalling` mode (default mode):
-
-import WSAZodExample from "@examples/models/chat/integration_groq_wsa_zod.ts";
-
-<CodeBlock language="typescript">{WSAZodExample}</CodeBlock>
-
-## Streaming
-
-Groq's API also supports streaming token responses. The example below demonstrates how to use this feature.
-
-import ChatStreamGroqExample from "@examples/models/chat/chat_stream_groq.ts";
-
-<CodeBlock language="typescript">{ChatStreamGroqExample}</CodeBlock>
-
-:::info
-You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/72832eb5-b9ae-4ce0-baa2-c2e95eca61a7/r)
-:::