Brace/openai assistant (langchain-ai#3171)
* Feat: openai assistants

* stash

* Added base api struct

* chore: lint files

* initial auto function calling implementation

* option for awaiting function calls in invoke

* refactors, nits

* spelling nit

* chore: lint files

* nit

* match py

* schema file

* fix agent executor typing

* improve tests

* update comment

* LFG

* drop console log

* cleanup code

* refactor

* improve, better docs

* cr

* improve test and entrypoint

* eslint disable any

* Fix typing

* Fix types

* Fix build

* Update langchain/src/experimental/openai_assistant/index.ts

* Use types instead of classes

* Add docs

---------

Co-authored-by: jacoblee93 <[email protected]>
bracesproul and jacoblee93 authored Nov 8, 2023
1 parent 7ed956e commit 42e6ee6
Showing 22 changed files with 812 additions and 80 deletions.
7 changes: 6 additions & 1 deletion .vscode/settings.json
@@ -9,5 +9,10 @@
"https://json.schemastore.org/github-workflow.json": "./.github/workflows/deploy.yml"
},
"typescript.tsdk": "node_modules/typescript/lib",
"cSpell.words": ["Upstash"]
"cSpell.words": [
"Upstash"
],
"cSpell.enableFiletypes": [
"mdx"
]
}
196 changes: 196 additions & 0 deletions docs/docs/modules/agents/agent_types/openai_assistant.mdx
@@ -0,0 +1,196 @@
# OpenAI Assistant

:::info
The [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) is still in beta.
:::

OpenAI has released a new API, called the Assistants API, for building conversational, agent-like systems.

You can interact with OpenAI Assistants using OpenAI tools or custom tools. When using exclusively OpenAI tools, you can invoke the assistant directly and get final answers. When using custom tools, you can run the assistant and tool execution loop using the built-in `AgentExecutor`, or write your own executor.
OpenAI assistants currently have access to two tools hosted by OpenAI: [code interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter) and [knowledge retrieval](https://platform.openai.com/docs/assistants/tools/knowledge-retrieval).

We've implemented the assistant API in LangChain with some helpful abstractions. In this guide we'll go over those, and show how to use them to create powerful assistants.

## Creating an assistant

Creating an assistant is easy. Use the `createAssistant` method and pass in a model ID and, optionally, additional parameters to further customize your assistant.

```typescript
import { OpenAIAssistantRunnable } from "langchain/experimental/openai_assistant";

const assistant = await OpenAIAssistantRunnable.createAssistant({
  model: "gpt-4-1106-preview",
});
const assistantResponse = await assistant.invoke({
  content: "Hello world!",
});
console.log(assistantResponse);
/**
[
{
id: 'msg_OBH60nkVI40V9zY2PlxMzbEI',
thread_id: 'thread_wKpj4cu1XaYEVeJlx4yFbWx5',
role: 'assistant',
content: [
{
type: 'text',
value: 'Hello there! What can I do for you?'
}
],
assistant_id: 'asst_RtW03Vs6laTwqSSMCQpVND7i',
run_id: 'run_4Ve5Y9fyKMcSxHbaNHOFvdC6',
}
]
*/
```

If you have an existing assistant, you can pass it directly into the constructor:

```typescript
const assistant = new OpenAIAssistantRunnable({
  assistantId: "asst_RtW03Vs6laTwqSSMCQpVND7i",
  // asAgent: true
});
```

In this next example we'll show how you can turn your assistant into an agent.

## Assistant as an agent

```typescript
import { z } from "zod";
import { AgentExecutor } from "langchain/agents";
import { StructuredTool } from "langchain/tools";
import { OpenAIAssistantRunnable } from "langchain/experimental/openai_assistant";
```

The first step is to define a list of tools you want to pass to your assistant.
Here we'll only define one for simplicity's sake; however, the Assistants API allows passing in a list of tools, and the model can use multiple tools at once.
Read more about the run steps lifecycle [here](https://platform.openai.com/docs/assistants/how-it-works/runs-and-run-steps).

:::note
Only models released on or after version 1106 are able to use multiple tools at once. See the full list of OpenAI models [here](https://platform.openai.com/docs/models).
:::

```typescript
function getCurrentWeather(location: string, _unit = "fahrenheit") {
  if (location.toLowerCase().includes("tokyo")) {
    return JSON.stringify({ location, temperature: "10", unit: "celsius" });
  } else if (location.toLowerCase().includes("san francisco")) {
    return JSON.stringify({ location, temperature: "72", unit: "fahrenheit" });
  } else {
    return JSON.stringify({ location, temperature: "22", unit: "celsius" });
  }
}

class WeatherTool extends StructuredTool {
  schema = z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
    unit: z.enum(["celsius", "fahrenheit"]).optional(),
  });

  name = "get_current_weather";

  description = "Get the current weather in a given location";

  async _call(input: { location: string; unit?: string }) {
    const { location, unit } = input;
    return getCurrentWeather(location, unit);
  }
}

const tools = [new WeatherTool()];
```

In the above code we've defined three things:

- A function for the agent to call if the model requests it.
- A tool class that we'll pass to the `AgentExecutor` (sanity-checked in the snippet below).
- The tool list we'll pass to our `OpenAIAssistantRunnable` and `AgentExecutor`.
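
Before wiring the tool into the assistant, it can help to sanity-check it on its own. The snippet below is a minimal sketch that invokes the tool directly via its standard `.call()` method, using an example input matching the schema above:

```typescript
// Quick standalone check of the tool before handing it to the assistant.
const weatherTool = new WeatherTool();
const sanityCheck = await weatherTool.call({
  location: "San Francisco, CA",
  unit: "fahrenheit",
});
console.log(sanityCheck);
// {"location":"San Francisco, CA","temperature":"72","unit":"fahrenheit"}
```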

Next, we construct the `OpenAIAssistantRunnable` and pass it to the `AgentExecutor`.

```typescript
const agent = await OpenAIAssistantRunnable.createAssistant({
  model: "gpt-3.5-turbo-1106",
  instructions:
    "You are a weather bot. Use the provided functions to answer questions.",
  name: "Weather Assistant",
  tools,
  asAgent: true,
});
const agentExecutor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
});
```

Note how we're setting `asAgent` to `true`. This input parameter tells the `OpenAIAssistantRunnable` to return outputs that the `AgentExecutor` can accept for actions or finished conversations.

We're also doing something a little different from the first example by passing in the `instructions` and `name` parameters.
Both are optional: the instructions are passed to the model as extra context, and the name is used to identify the assistant in the OpenAI dashboard.

Finally, to invoke our executor we call the `.invoke` method in the same way as in the first example.

```typescript
const assistantResponse = await agentExecutor.invoke({
  content: "What's the weather in Tokyo and San Francisco?",
});
console.log(assistantResponse);
/**
{
output: 'The current weather in San Francisco is 72°F, and in Tokyo, it is 10°C.'
}
*/
```

Here we asked a question that contains two sub-questions: `What's the weather in Tokyo?` and `What's the weather in San Francisco?`.
To answer it, the `OpenAIAssistantRunnable` returned a set of function call arguments for each sub-question, demonstrating its ability to call multiple functions at once.
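
If you want to see those individual function calls, `AgentExecutor` supports a `returnIntermediateSteps` option. The sketch below assumes the assistant agent populates intermediate steps the same way other agents do, so treat the exact shape of each step as an assumption:

```typescript
// Sketch: surface each action/observation pair alongside the final answer.
const verboseExecutor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
  returnIntermediateSteps: true,
});

const verboseResponse = await verboseExecutor.invoke({
  content: "What's the weather in Tokyo and San Francisco?",
});

// One entry per tool invocation the assistant requested.
console.log(verboseResponse.intermediateSteps);
console.log(verboseResponse.output);
```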

## Assistant tools

OpenAI currently offers two tools for the assistant API: a [code interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter) and a [knowledge retrieval](https://platform.openai.com/docs/assistants/tools/knowledge-retrieval) tool.
You can enable these tools for your assistant by passing them in as part of the `tools` parameter when creating it.

```typescript
const assistant = await OpenAIAssistantRunnable.createAssistant({
  model: "gpt-3.5-turbo-1106",
  instructions:
    "You are a helpful assistant that provides answers to math problems.",
  name: "Math Assistant",
  tools: [{ type: "code_interpreter" }],
});
```

Since we're passing `code_interpreter` as a tool, the assistant can now execute Python code, enabling more complex tasks that plain LLMs struggle with, such as math.

```typescript
const assistantResponse = await assistant.invoke({
  content: "What's 10 - 4 raised to the 2.7",
});
console.log(assistantResponse);
/**
[
{
id: 'msg_OBH60nkVI40V9zY2PlxMzbEI',
thread_id: 'thread_wKpj4cu1XaYEVeJlx4yFbWx5',
role: 'assistant',
content: [
{
type: 'text',
text: {
value: 'The result of 10 - 4 raised to the 2.7 is approximately -32.22.',
annotations: []
}
}
],
assistant_id: 'asst_RtW03Vs6laTwqSSMCQpVND7i',
run_id: 'run_4Ve5Y9fyKMcSxHbaNHOFvdC6',
}
]
*/
```

Here the assistant was able to utilize the `code_interpreter` tool to calculate the answer to our question.
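
The knowledge retrieval tool can be attached in the same way. The snippet below is a sketch: the `retrieval` tool type string is an assumption based on the beta Assistants API, and any files the assistant should search over would need to be uploaded separately (for example via the OpenAI dashboard), so check the current OpenAI docs for the exact parameters.

```typescript
// Sketch: enable the hosted knowledge retrieval tool (tool type string assumed from the beta API).
const retrievalAssistant = await OpenAIAssistantRunnable.createAssistant({
  model: "gpt-4-1106-preview",
  instructions:
    "You are a helpful assistant that answers questions using the files attached to you.",
  name: "Retrieval Assistant",
  tools: [{ type: "retrieval" }],
});

const retrievalResponse = await retrievalAssistant.invoke({
  content: "Summarize the key points from the attached document.",
});
console.log(retrievalResponse);
```
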
1 change: 1 addition & 0 deletions environment_tests/test-exports-bun/src/entrypoints.js
@@ -90,6 +90,7 @@ export * from "langchain/util/document";
export * from "langchain/util/math";
export * from "langchain/util/time";
export * from "langchain/experimental/autogpt";
export * from "langchain/experimental/openai_assistant";
export * from "langchain/experimental/babyagi";
export * from "langchain/experimental/generative_agents";
export * from "langchain/experimental/plan_and_execute";
1 change: 1 addition & 0 deletions environment_tests/test-exports-cf/src/entrypoints.js
@@ -90,6 +90,7 @@ export * from "langchain/util/document";
export * from "langchain/util/math";
export * from "langchain/util/time";
export * from "langchain/experimental/autogpt";
export * from "langchain/experimental/openai_assistant";
export * from "langchain/experimental/babyagi";
export * from "langchain/experimental/generative_agents";
export * from "langchain/experimental/plan_and_execute";
1 change: 1 addition & 0 deletions environment_tests/test-exports-cjs/src/entrypoints.js
@@ -90,6 +90,7 @@ const util_document = require("langchain/util/document");
const util_math = require("langchain/util/math");
const util_time = require("langchain/util/time");
const experimental_autogpt = require("langchain/experimental/autogpt");
const experimental_openai_assistant = require("langchain/experimental/openai_assistant");
const experimental_babyagi = require("langchain/experimental/babyagi");
const experimental_generative_agents = require("langchain/experimental/generative_agents");
const experimental_plan_and_execute = require("langchain/experimental/plan_and_execute");
1 change: 1 addition & 0 deletions environment_tests/test-exports-esbuild/src/entrypoints.js
@@ -90,6 +90,7 @@ import * as util_document from "langchain/util/document";
import * as util_math from "langchain/util/math";
import * as util_time from "langchain/util/time";
import * as experimental_autogpt from "langchain/experimental/autogpt";
import * as experimental_openai_assistant from "langchain/experimental/openai_assistant";
import * as experimental_babyagi from "langchain/experimental/babyagi";
import * as experimental_generative_agents from "langchain/experimental/generative_agents";
import * as experimental_plan_and_execute from "langchain/experimental/plan_and_execute";
1 change: 1 addition & 0 deletions environment_tests/test-exports-esm/src/entrypoints.js
@@ -90,6 +90,7 @@ import * as util_document from "langchain/util/document";
import * as util_math from "langchain/util/math";
import * as util_time from "langchain/util/time";
import * as experimental_autogpt from "langchain/experimental/autogpt";
import * as experimental_openai_assistant from "langchain/experimental/openai_assistant";
import * as experimental_babyagi from "langchain/experimental/babyagi";
import * as experimental_generative_agents from "langchain/experimental/generative_agents";
import * as experimental_plan_and_execute from "langchain/experimental/plan_and_execute";
1 change: 1 addition & 0 deletions environment_tests/test-exports-vercel/src/entrypoints.js
@@ -90,6 +90,7 @@ export * from "langchain/util/document";
export * from "langchain/util/math";
export * from "langchain/util/time";
export * from "langchain/experimental/autogpt";
export * from "langchain/experimental/openai_assistant";
export * from "langchain/experimental/babyagi";
export * from "langchain/experimental/generative_agents";
export * from "langchain/experimental/plan_and_execute";
1 change: 1 addition & 0 deletions environment_tests/test-exports-vite/src/entrypoints.js
@@ -90,6 +90,7 @@ export * from "langchain/util/document";
export * from "langchain/util/math";
export * from "langchain/util/time";
export * from "langchain/experimental/autogpt";
export * from "langchain/experimental/openai_assistant";
export * from "langchain/experimental/babyagi";
export * from "langchain/experimental/generative_agents";
export * from "langchain/experimental/plan_and_execute";
3 changes: 3 additions & 0 deletions langchain/.gitignore
@@ -742,6 +742,9 @@ util/time.d.ts
experimental/autogpt.cjs
experimental/autogpt.js
experimental/autogpt.d.ts
experimental/openai_assistant.cjs
experimental/openai_assistant.js
experimental/openai_assistant.d.ts
experimental/babyagi.cjs
experimental/babyagi.js
experimental/babyagi.d.ts
8 changes: 8 additions & 0 deletions langchain/package.json
@@ -754,6 +754,9 @@
"experimental/autogpt.cjs",
"experimental/autogpt.js",
"experimental/autogpt.d.ts",
"experimental/openai_assistant.cjs",
"experimental/openai_assistant.js",
"experimental/openai_assistant.d.ts",
"experimental/babyagi.cjs",
"experimental/babyagi.js",
"experimental/babyagi.d.ts",
@@ -2628,6 +2631,11 @@
"import": "./experimental/autogpt.js",
"require": "./experimental/autogpt.cjs"
},
"./experimental/openai_assistant": {
"types": "./experimental/openai_assistant.d.ts",
"import": "./experimental/openai_assistant.js",
"require": "./experimental/openai_assistant.cjs"
},
"./experimental/babyagi": {
"types": "./experimental/babyagi.d.ts",
"import": "./experimental/babyagi.js",
1 change: 1 addition & 0 deletions langchain/scripts/create-entrypoints.js
@@ -292,6 +292,7 @@ const entrypoints = {
"util/time": "util/time",
// experimental
"experimental/autogpt": "experimental/autogpt/index",
"experimental/openai_assistant": "experimental/openai_assistant/index",
"experimental/babyagi": "experimental/babyagi/index",
"experimental/generative_agents": "experimental/generative_agents/index",
"experimental/plan_and_execute": "experimental/plan_and_execute/index",