Commit

Merge branch 'main' into brace/cohere-token-count

bracesproul authored Jun 11, 2024
2 parents ddf98b1 + de3e618 commit f0cc874
Showing 21 changed files with 630 additions and 7 deletions.
2 changes: 1 addition & 1 deletion docs/core_docs/.gitignore
@@ -176,4 +176,4 @@ docs/how_to/assign.mdx
docs/how_to/agent_executor.md
docs/how_to/agent_executor.mdx
docs/integrations/llms/mistral.md
docs/integrations/llms/mistral.mdx
docs/integrations/llms/mistral.mdx
25 changes: 25 additions & 0 deletions docs/core_docs/docs/integrations/chat/deep_infra.mdx
@@ -0,0 +1,25 @@
---
sidebar_label: Deep Infra
---

import CodeBlock from "@theme/CodeBlock";

# ChatDeepInfra

LangChain supports chat models hosted by [Deep Infra](https://deepinfra.com/) through the `ChatDeepInfra` wrapper.
First, you'll need to install the `@langchain/community` package:

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/community
```

You'll need to obtain an API key and set it as an environment variable named `DEEPINFRA_API_TOKEN`
(or pass it into the constructor), then call the model as shown below:
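Before running the example below, you can set the variable in your shell, for instance (a minimal sketch assuming a Unix-like shell; the token value is a placeholder):

```bash
# Replace the placeholder with your actual Deep Infra API token
export DEEPINFRA_API_TOKEN="your-api-token"
```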

import Example from "@examples/models/chat/integration_deepinfra.ts";

<CodeBlock language="typescript">{Example}</CodeBlock>
13 changes: 13 additions & 0 deletions docs/core_docs/docs/integrations/chat/openai.mdx
@@ -138,3 +138,16 @@ import OpenAIStreamTokens from "@examples/models/chat/integration_openai_stream_
:::tip
See the LangSmith trace [here](https://smith.langchain.com/public/66bf7377-cc69-4676-91b6-25929a05e8b7/r)
:::

### Disabling parallel tool calls

If you have multiple tools bound to the model but want at most one tool to be called per invocation, you can pass the `parallel_tool_calls` call option to disable parallel tool calling.
By default, `parallel_tool_calls` is set to `true`.

import OpenAIParallelToolCallsTokens from "@examples/models/chat/integration_openai_parallel_tool_calls.ts";

<CodeBlock language="typescript">{OpenAIParallelToolCallsTokens}</CodeBlock>

:::tip
See the LangSmith trace for the first invocation [here](https://smith.langchain.com/public/68f2ff13-6331-47d8-a8c0-d1745788e84e/r) and the second invocation [here](https://smith.langchain.com/public/6c2fff29-9470-486a-8715-805fda631024/r)
:::
25 changes: 25 additions & 0 deletions docs/core_docs/docs/integrations/llms/deep_infra.mdx
@@ -0,0 +1,25 @@
---
sidebar_label: Deep Infra
---

import CodeBlock from "@theme/CodeBlock";

# DeepInfra

LangChain supports LLMs hosted by [Deep Infra](https://deepinfra.com/) through the `DeepInfra` wrapper.
First, you'll need to install the `@langchain/community` package:

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/community
```

You'll need to obtain an API key and set it as an environment variable named `DEEPINFRA_API_TOKEN`
(or pass it into the constructor), then call the model as shown below:
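Before running the example below, you can set the variable in your shell, for instance (a minimal sketch assuming a Unix-like shell; the token value is a placeholder):

```bash
# Replace the placeholder with your actual Deep Infra API token
export DEEPINFRA_API_TOKEN="your-api-token"
```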

import Example from "@examples/models/llm/deepinfra.ts";

<CodeBlock language="typescript">{Example}</CodeBlock>
17 changes: 17 additions & 0 deletions examples/src/models/chat/integration_deepinfra.ts
@@ -0,0 +1,17 @@
import { ChatDeepInfra } from "@langchain/community/chat_models/deepinfra";
import { HumanMessage } from "@langchain/core/messages";

const apiKey = process.env.DEEPINFRA_API_TOKEN;

const model = "meta-llama/Meta-Llama-3-70B-Instruct";

const chat = new ChatDeepInfra({
  model,
  apiKey,
});

const messages = [new HumanMessage("Hello")];

const res = await chat.invoke(messages);

console.log(res);
85 changes: 85 additions & 0 deletions examples/src/models/chat/integration_openai_parallel_tool_calls.ts
@@ -0,0 +1,85 @@
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const model = new ChatOpenAI({
  temperature: 0,
  model: "gpt-4o",
});

// Define your tools
const calculatorSchema = z
  .object({
    operation: z.enum(["add", "subtract", "multiply", "divide"]),
    number1: z.number(),
    number2: z.number(),
  })
  .describe("A tool to perform basic arithmetic operations");
const weatherSchema = z
  .object({
    city: z.string().describe("The city to get the weather for"),
  })
  .describe("A tool to get the weather in a city");

// Bind tools to the model
const modelWithTools = model.bindTools([
  {
    type: "function",
    function: {
      name: "calculator",
      description: calculatorSchema.description,
      parameters: zodToJsonSchema(calculatorSchema),
    },
  },
  {
    type: "function",
    function: {
      name: "weather",
      description: weatherSchema.description,
      parameters: zodToJsonSchema(weatherSchema),
    },
  },
]);

// Invoke the model with `parallel_tool_calls` set to `true`
const response = await modelWithTools.invoke(
  ["What is the weather in san francisco and what is 23716 times 27342?"],
  {
    parallel_tool_calls: true,
  }
);
console.log(response.tool_calls);
// We can see it called two tools
/*
[
  {
    name: 'weather',
    args: { city: 'san francisco' },
    id: 'call_c1KymEIix7mdlFtgLSnTXmDc'
  },
  {
    name: 'calculator',
    args: { operation: 'multiply', number1: 23716, number2: 27342 },
    id: 'call_ANLYclAmXQ4TwUCLXakbPr3Z'
  }
]
*/

// Invoke the model with `parallel_tool_calls` set to `false`
const response2 = await modelWithTools.invoke(
  ["What is the weather in san francisco and what is 23716 times 27342?"],
  {
    parallel_tool_calls: false,
  }
);
console.log(response2.tool_calls);
// We can see it called one tool
/*
[
  {
    name: 'weather',
    args: { city: 'san francisco' },
    id: 'call_Rk34XffawJjgZ2BCK9E4CwlT'
  }
]
*/
18 changes: 18 additions & 0 deletions examples/src/models/llm/deepinfra.ts
@@ -0,0 +1,18 @@
import { DeepInfraLLM } from "@langchain/community/llms/deepinfra";

const apiKey = process.env.DEEPINFRA_API_TOKEN;
const model = "meta-llama/Meta-Llama-3-70B-Instruct";

const llm = new DeepInfraLLM({
  temperature: 0.7,
  maxTokens: 20,
  model,
  apiKey,
  maxRetries: 5,
});

const res = await llm.invoke(
  "What is the next step in the process of making a good game?"
);

console.log({ res });
8 changes: 8 additions & 0 deletions libs/langchain-community/.gitignore
@@ -234,6 +234,10 @@ llms/cohere.cjs
llms/cohere.js
llms/cohere.d.ts
llms/cohere.d.cts
llms/deepinfra.cjs
llms/deepinfra.js
llms/deepinfra.d.ts
llms/deepinfra.d.cts
llms/fireworks.cjs
llms/fireworks.js
llms/fireworks.d.ts
@@ -510,6 +514,10 @@ chat_models/cloudflare_workersai.cjs
chat_models/cloudflare_workersai.js
chat_models/cloudflare_workersai.d.ts
chat_models/cloudflare_workersai.d.cts
chat_models/deepinfra.cjs
chat_models/deepinfra.js
chat_models/deepinfra.d.ts
chat_models/deepinfra.d.cts
chat_models/fireworks.cjs
chat_models/fireworks.js
chat_models/fireworks.d.ts
2 changes: 2 additions & 0 deletions libs/langchain-community/langchain.config.js
@@ -93,6 +93,7 @@ export const config = {
"llms/bedrock/web": "llms/bedrock/web",
"llms/cloudflare_workersai": "llms/cloudflare_workersai",
"llms/cohere": "llms/cohere",
"llms/deepinfra": "llms/deepinfra",
"llms/fireworks": "llms/fireworks",
"llms/friendli": "llms/friendli",
"llms/googlepalm": "llms/googlepalm",
@@ -164,6 +165,7 @@ export const config = {
"chat_models/bedrock": "chat_models/bedrock/index",
"chat_models/bedrock/web": "chat_models/bedrock/web",
"chat_models/cloudflare_workersai": "chat_models/cloudflare_workersai",
"chat_models/deepinfra": "chat_models/deepinfra",
"chat_models/fireworks": "chat_models/fireworks",
"chat_models/friendli": "chat_models/friendli",
"chat_models/googlevertexai": "chat_models/googlevertexai/index",
26 changes: 26 additions & 0 deletions libs/langchain-community/package.json
@@ -1231,6 +1231,15 @@
      "import": "./llms/cohere.js",
      "require": "./llms/cohere.cjs"
    },
    "./llms/deepinfra": {
      "types": {
        "import": "./llms/deepinfra.d.ts",
        "require": "./llms/deepinfra.d.cts",
        "default": "./llms/deepinfra.d.ts"
      },
      "import": "./llms/deepinfra.js",
      "require": "./llms/deepinfra.cjs"
    },
    "./llms/fireworks": {
      "types": {
        "import": "./llms/fireworks.d.ts",
@@ -1852,6 +1861,15 @@
      "import": "./chat_models/cloudflare_workersai.js",
      "require": "./chat_models/cloudflare_workersai.cjs"
    },
    "./chat_models/deepinfra": {
      "types": {
        "import": "./chat_models/deepinfra.d.ts",
        "require": "./chat_models/deepinfra.d.cts",
        "default": "./chat_models/deepinfra.d.ts"
      },
      "import": "./chat_models/deepinfra.js",
      "require": "./chat_models/deepinfra.cjs"
    },
    "./chat_models/fireworks": {
      "types": {
        "import": "./chat_models/fireworks.d.ts",
@@ -3235,6 +3253,10 @@
"llms/cohere.js",
"llms/cohere.d.ts",
"llms/cohere.d.cts",
"llms/deepinfra.cjs",
"llms/deepinfra.js",
"llms/deepinfra.d.ts",
"llms/deepinfra.d.cts",
"llms/fireworks.cjs",
"llms/fireworks.js",
"llms/fireworks.d.ts",
@@ -3511,6 +3533,10 @@
"chat_models/cloudflare_workersai.js",
"chat_models/cloudflare_workersai.d.ts",
"chat_models/cloudflare_workersai.d.cts",
"chat_models/deepinfra.cjs",
"chat_models/deepinfra.js",
"chat_models/deepinfra.d.ts",
"chat_models/deepinfra.d.cts",
"chat_models/fireworks.cjs",
"chat_models/fireworks.js",
"chat_models/fireworks.d.ts",