
Commit

all[minor]: Unified model params (#5020)
* all[minor]: Unified model params

* chore: lint files

* community

* chore: lint files

* more docs updates

* chore: lint files

* more updates

* chore: lint files

* chore: lint files

* Fix bad json
bracesproul authored Apr 10, 2024
1 parent fc2f9de commit c247a50
Showing 288 changed files with 10,594 additions and 10,170 deletions.
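The unification this commit performs (collapsing provider-prefixed constructor params such as `modelName`, `openAIApiKey`, and `anthropicApiKey` into shared `model` and `apiKey` names) can be sketched as a small normalization shim. The `LegacyParams` shape and `normalizeParams` helper below are hypothetical illustrations of the rename pattern, not code from the LangChain source:

```typescript
// Hypothetical shim: accept both the legacy provider-prefixed params and the
// unified names, preferring the unified ones when both are supplied.
interface LegacyParams {
  model?: string; // unified name (new)
  modelName?: string; // legacy name
  apiKey?: string; // unified name (new)
  openAIApiKey?: string; // legacy name
}

function normalizeParams(params: LegacyParams): { model?: string; apiKey?: string } {
  return {
    model: params.model ?? params.modelName,
    apiKey: params.apiKey ?? params.openAIApiKey,
  };
}

// Legacy and unified call styles normalize to the same config:
const legacy = normalizeParams({ modelName: "gpt-3.5-turbo", openAIApiKey: "sk-..." });
const unified = normalizeParams({ model: "gpt-3.5-turbo", apiKey: "sk-..." });
console.log(legacy.model === unified.model && legacy.apiKey === unified.apiKey); // true
```

A shim like this is one way such a rename can stay backwards compatible: existing code that passes `modelName` keeps working while the docs move to `model`.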
628 changes: 314 additions & 314 deletions cookbook/basic_critique_revise.ipynb


828 changes: 414 additions & 414 deletions cookbook/function_calling_fireworks.ipynb


344 changes: 172 additions & 172 deletions cookbook/openai_vision_multimodal.ipynb


@@ -61,7 +61,7 @@ const prompt = ChatPromptTemplate.fromMessages([
]);

const chain = prompt.pipe(
-new ChatAnthropic({ modelName: "claude-3-sonnet-20240229" })
+new ChatAnthropic({ model: "claude-3-sonnet-20240229" })
);
```

10 changes: 5 additions & 5 deletions docs/core_docs/docs/get_started/quickstart.mdx
@@ -71,13 +71,13 @@ Accessing the API requires an API key, which you can get by creating an account
OPENAI_API_KEY="..."
```

-If you'd prefer not to set an environment variable you can pass the key in directly via the `openAIApiKey` named parameter when initiating the OpenAI Chat Model class:
+If you'd prefer not to set an environment variable you can pass the key in directly via the `apiKey` named parameter when initiating the OpenAI Chat Model class:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({
-openAIApiKey: "...",
+apiKey: "...",
});
```

@@ -135,13 +135,13 @@ Accessing the API requires an API key, which you can get by creating an account
ANTHROPIC_API_KEY="..."
```

-If you'd prefer not to set an environment variable you can pass the key in directly via the `anthropicApiKey` named parameter when initiating the Anthropic Chat Model class:
+If you'd prefer not to set an environment variable you can pass the key in directly via the `apiKey` named parameter when initiating the Anthropic Chat Model class:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const chatModel = new ChatAnthropic({
-anthropicApiKey: "...",
+apiKey: "...",
});
```

@@ -602,7 +602,7 @@ const agentPrompt = await pull<ChatPromptTemplate>(
);

const agentModel = new ChatOpenAI({
-modelName: "gpt-3.5-turbo-1106",
+model: "gpt-3.5-turbo-1106",
temperature: 0,
});

2 changes: 1 addition & 1 deletion docs/core_docs/docs/guides/langsmith_evaluation.mdx
@@ -73,7 +73,7 @@ const prompt = await pull<ChatPromptTemplate>(
);

const llm = new ChatOpenAI({
-modelName: "gpt-3.5-turbo-1106",
+model: "gpt-3.5-turbo-1106",
temperature: 0,
});

4 changes: 2 additions & 2 deletions docs/core_docs/docs/integrations/chat/anthropic_tools.mdx
@@ -32,8 +32,8 @@ import { ChatAnthropicTools } from "@langchain/anthropic/experimental";

const model = new ChatAnthropicTools({
temperature: 0.1,
-modelName: "claude-3-sonnet-20240229",
-anthropicApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.ANTHROPIC_API_KEY
+model: "claude-3-sonnet-20240229",
+apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.ANTHROPIC_API_KEY
});
```
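The comment above notes that in Node.js the key defaults to `process.env.ANTHROPIC_API_KEY` when no `apiKey` is passed. That fallback can be sketched as below; `resolveApiKey` is a hypothetical helper for illustration, not the actual `@langchain/anthropic` implementation:

```typescript
// Hypothetical sketch of the documented fallback: an explicit `apiKey`
// wins; otherwise the constructor falls back to an environment variable.
function resolveApiKey(explicit?: string): string {
  const key = explicit ?? process.env.ANTHROPIC_API_KEY;
  if (!key) {
    throw new Error(
      "Anthropic API key not found: pass `apiKey` or set ANTHROPIC_API_KEY"
    );
  }
  return key;
}

// An explicit key takes precedence over the environment:
process.env.ANTHROPIC_API_KEY = "env-key";
console.log(resolveApiKey("explicit-key")); // "explicit-key"
console.log(resolveApiKey()); // "env-key"
```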

6 changes: 3 additions & 3 deletions docs/core_docs/docs/integrations/chat/azure.mdx
@@ -67,9 +67,9 @@ import { AzureChatOpenAI } from "@langchain/azure-openai";

const model = new AzureChatOpenAI({
azureOpenAIEndpoint: "<your_endpoint>",
-azureOpenAIApiKey: "<your_key>",
+apiKey: "<your_key>",
azureOpenAIApiDeploymentName: "<your_embedding_deployment_name",
-modelName: "<your_model>",
+model: "<your_model>",
});
```

@@ -85,7 +85,7 @@ const model = new AzureChatOpenAI({
credentials,
azureOpenAIEndpoint: "<your_endpoint>",
azureOpenAIApiDeploymentName: "<your_embedding_deployment_name",
-modelName: "<your_model>",
+model: "<your_model>",
});
```

8 changes: 4 additions & 4 deletions docs/core_docs/docs/integrations/llms/azure.mdx
@@ -35,7 +35,7 @@ import { AzureOpenAI } from "@langchain/azure-openai";

const model = new AzureOpenAI({
azureOpenAIEndpoint: "<your_endpoint>",
-azureOpenAIApiKey: "<your_key>",
+apiKey: "<your_key>",
azureOpenAIApiDeploymentName: "<your_deployment_name",
});
```
@@ -67,7 +67,7 @@ const model = new AzureOpenAI({
credentials,
azureOpenAIEndpoint: "<your_endpoint>",
azureOpenAIApiDeploymentName: "<your_deployment_name",
-modelName: "<your_model>",
+model: "<your_model>",
});
```

@@ -103,7 +103,7 @@ import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
temperature: 0.9,
-azureOpenAIApiKey: "YOUR-API-KEY",
+apiKey: "YOUR-API-KEY",
azureOpenAIApiVersion: "YOUR-API-VERSION",
azureOpenAIApiInstanceName: "{MY_INSTANCE_NAME}",
azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}",
@@ -122,7 +122,7 @@ import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
temperature: 0.9,
-azureOpenAIApiKey: "YOUR-API-KEY",
+apiKey: "YOUR-API-KEY",
azureOpenAIApiVersion: "YOUR-API-VERSION",
azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}",
azureOpenAIBasePath:
4 changes: 2 additions & 2 deletions docs/core_docs/docs/integrations/llms/openai.mdx
@@ -14,9 +14,9 @@ npm install @langchain/openai
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
-modelName: "gpt-3.5-turbo-instruct", // Defaults to "gpt-3.5-turbo-instruct" if no model provided.
+model: "gpt-3.5-turbo-instruct", // Defaults to "gpt-3.5-turbo-instruct" if no model provided.
temperature: 0.9,
-openAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
+apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
});
const res = await model.invoke(
"What would be a good company name a company that makes colorful socks?"
@@ -10,7 +10,7 @@ import { PromptLayerOpenAI } from "langchain/llms/openai";

const model = new PromptLayerOpenAI({
temperature: 0.9,
-openAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
+apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
promptLayerApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.PROMPTLAYER_API_KEY
});
const res = await model.invoke(
8 changes: 4 additions & 4 deletions docs/core_docs/docs/integrations/platforms/google.mdx
@@ -35,7 +35,7 @@ export GOOGLE_API_KEY=your-api-key
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({
-modelName: "gemini-pro",
+model: "gemini-pro",
maxOutputTokens: 2048,
});

@@ -52,7 +52,7 @@ Gemini vision models support image inputs when providing a single human message.

```typescript
const visionModel = new ChatGoogleGenerativeAI({
-modelName: "gemini-pro-vision",
+model: "gemini-pro-vision",
maxOutputTokens: 2048,
});
const image = fs.readFileSync("./hotdog.jpg").toString("base64");
@@ -105,7 +105,7 @@ import { ChatVertexAI } from "@langchain/google-vertexai";
// import { ChatVertexAI } from "@langchain/google-vertexai-web";

const model = new ChatVertexAI({
-modelName: "gemini-1.0-pro",
+model: "gemini-1.0-pro",
maxOutputTokens: 2048,
});

@@ -122,7 +122,7 @@ Gemini vision models support image inputs when providing a single human message.

```typescript
const visionModel = new ChatVertexAI({
-modelName: "gemini-pro-vision",
+model: "gemini-pro-vision",
maxOutputTokens: 2048,
});
const image = fs.readFileSync("./hotdog.png").toString("base64");
@@ -41,7 +41,7 @@ import { AzureOpenAI } from "@langchain/azure-openai";

const model = new AzureOpenAI({
azureOpenAIEndpoint: "<your_endpoint>",
-azureOpenAIApiKey: "<your_key>",
+apiKey: "<your_key>",
azureOpenAIApiDeploymentName: "<your_embedding_deployment_name",
});
```
8 changes: 4 additions & 4 deletions docs/core_docs/docs/integrations/text_embedding/openai.mdx
@@ -14,9 +14,9 @@ npm install @langchain/openai
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
-openAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
+apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
batchSize: 512, // Default value if omitted is 512. Max is 2048
-modelName: "text-embedding-3-large",
+model: "text-embedding-3-large",
});
```

@@ -29,7 +29,7 @@ With the `text-embedding-3` class of models, you can specify the size of the emb

```typescript
const embeddings = new OpenAIEmbeddings({
-modelName: "text-embedding-3-large",
+model: "text-embedding-3-large",
});

const vectors = await embeddings.embedDocuments(["some text"]);
@@ -44,7 +44,7 @@ But by passing in `dimensions: 1024` we can reduce the size of our embeddings to

```typescript
const embeddings1024 = new OpenAIEmbeddings({
-modelName: "text-embedding-3-large",
+model: "text-embedding-3-large",
dimensions: 1024,
});

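The `dimensions: 1024` parameter in this hunk shrinks the returned embedding vectors server-side. Conceptually, assuming the truncate-and-renormalize scheme the `text-embedding-3` models are designed around, the reduction amounts to keeping the leading entries and rescaling to unit length. The sketch below is illustrative only, not OpenAI's actual code:

```typescript
// Illustrative sketch: shorten an embedding to `dims` entries and
// re-normalize it to unit length.
function shortenEmbedding(vec: number[], dims: number): number[] {
  const head = vec.slice(0, dims); // keep the leading `dims` components
  const norm = Math.sqrt(head.reduce((s, x) => s + x * x, 0));
  return head.map((x) => x / norm); // rescale back to unit length
}

const full = [0.6, 0.8, 0.0, 0.0]; // pretend 4-dim embedding
const short = shortenEmbedding(full, 2); // keep the first 2 dims
console.log(short.length); // 2
```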
@@ -52,7 +52,7 @@ const prompt = await pull<ChatPromptTemplate>(
);

const llm = new ChatOpenAI({
-modelName: "gpt-3.5-turbo-1106",
+model: "gpt-3.5-turbo-1106",
temperature: 0,
});

@@ -54,7 +54,7 @@ import type { ChatPromptTemplate } from "@langchain/core/prompts";
const prompt = await pull<ChatPromptTemplate>("hwchase17/openai-tools-agent");

const llm = new ChatOpenAI({
-modelName: "gpt-3.5-turbo-1106",
+model: "gpt-3.5-turbo-1106",
temperature: 0,
});

2 changes: 1 addition & 1 deletion docs/core_docs/docs/modules/agents/agent_types/react.mdx
@@ -42,7 +42,7 @@ import type { PromptTemplate } from "@langchain/core/prompts";
const prompt = await pull<PromptTemplate>("hwchase17/react");

const llm = new OpenAI({
-modelName: "gpt-3.5-turbo-instruct",
+model: "gpt-3.5-turbo-instruct",
temperature: 0,
});

@@ -51,7 +51,7 @@ const prompt = await pull<ChatPromptTemplate>(
);

const llm = new ChatOpenAI({
-modelName: "gpt-3.5-turbo-1106",
+model: "gpt-3.5-turbo-1106",
temperature: 0,
});

@@ -37,7 +37,7 @@ Next, we initialize an LLM and a search tool that wraps our web search retriever

```typescript
const llm = new ChatOpenAI({
-modelName: "gpt-4-1106-preview",
+model: "gpt-4-1106-preview",
});

const searchTool = new DynamicTool({
2 changes: 1 addition & 1 deletion docs/core_docs/docs/modules/agents/how_to/custom_agent.mdx
@@ -23,7 +23,7 @@ import { ChatOpenAI } from "@langchain/openai";
* Define your chat model to use.
*/
const model = new ChatOpenAI({
-modelName: "gpt-3.5-turbo",
+model: "gpt-3.5-turbo",
temperature: 0,
});
```
@@ -16,7 +16,7 @@ import { AgentExecutor, createReactAgent } from "langchain/agents";
const tools = [new Calculator()];

const llm = new ChatOpenAI({
-modelName: "gpt-3.5-turbo",
+model: "gpt-3.5-turbo",
temperature: 0,
});

2 changes: 1 addition & 1 deletion docs/core_docs/docs/modules/agents/quick_start.mdx
@@ -117,7 +117,7 @@ First, we choose the LLM we want to be guiding the agent.
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
-modelName: "gpt-3.5-turbo",
+model: "gpt-3.5-turbo",
temperature: 0,
});
```
2 changes: 1 addition & 1 deletion docs/core_docs/docs/modules/model_io/chat/caching.mdx
@@ -16,7 +16,7 @@ import { ChatOpenAI } from "@langchain/openai";

// To make the caching really obvious, lets use a slower model.
const model = new ChatOpenAI({
-modelName: "gpt-4",
+model: "gpt-4",
cache: true,
});
```
