
feat: enable lightweight engine #46

Merged · 76 commits · Feb 28, 2025
754ffed
feat(core): Add a way to manually specify serializable fields (#7667)
jacoblee93 Feb 7, 2025
1c1e6cd
release(core): 0.3.39 (#7668)
jacoblee93 Feb 7, 2025
c77a8a5
fix(openai): Prevent extra constructor params from being serialized, …
jacoblee93 Feb 7, 2025
01e8614
release(openai): 0.4.3 (#7670)
jacoblee93 Feb 7, 2025
0050a55
feat(ollama): Add support for Ollama built-in JSON schema with withSt…
jacoblee93 Feb 8, 2025
193d1e9
release(ollama): 0.1.6 (#7673)
jacoblee93 Feb 8, 2025
bb3c2cf
Rework tests without langgraph
FilipZmijewski Feb 10, 2025
850819a
feat(ollama): Switch Ollama default withStructuredOutput method to js…
jacoblee93 Feb 10, 2025
b81807a
release(ollama): 0.2.0 (#7682)
jacoblee93 Feb 10, 2025
b33a19f
chore(openai): update azure user agent (#7543)
sinedied Feb 11, 2025
5449925
fix(community): Fix interface of chat deployment IBM and add test for…
FilipZmijewski Feb 11, 2025
25ea68c
docs: Update message_history.ipynb (#7677)
yashsharma999 Feb 11, 2025
6bb5db8
fix(experimental): openai assistant attachments (#7664)
zachary-nguyen Feb 11, 2025
ed298b2
fix(community): hide console errors on RecursiveUrlLoader (#7679)
jfromaniello Feb 11, 2025
037dd91
release(community): 0.3.30 (#7684)
jacoblee93 Feb 11, 2025
92edca1
feat(openai): Properly pass through max_completion_tokens (#7683)
jacoblee93 Feb 11, 2025
719e081
release(openai): 0.4.4 (#7685)
jacoblee93 Feb 11, 2025
c3a153f
feat(core): Expose types/stream as an entrypoint (#7686)
jacoblee93 Feb 12, 2025
a0f1dc9
release(core): 0.3.40 (#7687)
jacoblee93 Feb 12, 2025
e6d053f
feat(langchain): Allow model providers prefixed by colon in initChatM…
jacoblee93 Feb 18, 2025
fbf20af
Release 0.3.16
jacoblee93 Feb 18, 2025
25cadbd
Release 0.3.16 (#7713)
jacoblee93 Feb 18, 2025
fff6126
fix(langchain): Remove stray log (#7714)
jacoblee93 Feb 18, 2025
47aade8
Release 0.3.17
jacoblee93 Feb 18, 2025
dfb54dc
Release 0.3.17 (#7715)
jacoblee93 Feb 18, 2025
92db8a7
docs: Fix the broken link (#7699)
toqeer-hussain Feb 18, 2025
e1bd39c
fix(langchain): Respect split model name for initChatModel (#7716)
jacoblee93 Feb 18, 2025
c05b77c
fix: typo in the chatbot tutorial (#7693)
vedantmishra69 Feb 18, 2025
b7626bb
fix(ci): Fix Vercel build (#7717)
jacoblee93 Feb 18, 2025
b5a8249
fix(community): Fix handling of ChromeAI chunks (#7700)
jtpio Feb 18, 2025
473ef0e
fix(redis): Add TTL support for redis vector store (#7695)
AllenFang Feb 18, 2025
54b68a4
fix(redis): update wrong redis setup link (#7698)
AllenFang Feb 18, 2025
b2d6b74
feat(community): Update Voyage embeddings parameters (#7689)
nicolas-geysse Feb 18, 2025
195cfff
release(redis): 0.1.1 (#7718)
jacoblee93 Feb 18, 2025
81d9f32
release(community): 0.3.31 (#7719)
jacoblee93 Feb 18, 2025
993d0f8
Release 0.3.18 (#7720)
jacoblee93 Feb 18, 2025
f755f84
feat(xai): xAI polish (#7722)
jacoblee93 Feb 18, 2025
c65147d
feat(langchain): Adds xAI to initChatModel (#7721)
jacoblee93 Feb 18, 2025
39b74b9
release(xai): 0.0.2 (#7723)
bracesproul Feb 19, 2025
cfd1513
release(langchain): 0.3.19 (#7724)
bracesproul Feb 19, 2025
7684485
feat(community,aws): Update @aws-sdk/* dependencies in langchain-aws …
KDKHD Feb 20, 2025
dd581e4
fix(core): Fix issue in .d.ts typing for TextEncoder (#7726)
danielkatz Feb 21, 2025
0412b4f
fix(google-genai): Support larger range of temperatures for Gemini mo…
afirstenberg Feb 21, 2025
f0247cc
release(aws): 0.1.4 (#7728)
jacoblee93 Feb 21, 2025
bee2366
fix(google-common): Eliminate hard-coded default values in favor of m…
afirstenberg Feb 21, 2025
27e6538
feat (google-*): Support Google Cloud Express Mode (#7676)
afirstenberg Feb 21, 2025
9f0f93c
minor(google-gauth): Upgrade google-auth-library to latest major vers…
afirstenberg Feb 21, 2025
fe01840
fix(google-common): Handle multiple function calls and complex parts/…
afirstenberg Feb 21, 2025
70e10a9
feat(community): Add support for Amazon Aurora DSQL memory message (#…
jl4nz Feb 21, 2025
df0212b
release(community): 0.3.32 (#7730)
jacoblee93 Feb 21, 2025
d303e90
release(google-vertex): 0.2.0 (#7731)
jacoblee93 Feb 21, 2025
5284571
docs: Fix broken link (#7743)
jacoblee93 Feb 24, 2025
4f7b157
release(google-genai): Release 0.1.9 (#7748)
jacoblee93 Feb 24, 2025
ae335a3
feat(anthropic): Support claude 3.7 sonnet & extended thinking (#7750)
benjamincburns Feb 24, 2025
38d0ab8
release(anthropic): 0.3.14
benjamincburns Feb 24, 2025
370629b
release(anthropic): 0.3.14 (#7751)
benjamincburns Feb 24, 2025
eca902e
fix(pinecone): Update Pinecone SDK to 5.0.0, make peer dep (#7767)
jacoblee93 Feb 27, 2025
dd68043
release(pinecone): 0.2.0 (#7772)
jacoblee93 Feb 27, 2025
cdb5d2d
docs: Adds missing redirect (#7759)
jacoblee93 Feb 27, 2025
1f57087
fix(core): don't create empty text content blocks (#7769)
benjamincburns Feb 27, 2025
b820cbb
fix(community): usage error returned by Zhipuai streaming data was mi…
mmdapl Feb 27, 2025
5f805b2
fix(core): Skip delete operation if not necessary during incremental …
weakit Feb 27, 2025
79467fb
fix(google-genai): Fix Google Genai usage token (#7733)
beowulf11 Feb 27, 2025
60e5950
fix(core): add artifact to ToolMessage and ToolMessageChunk for Remot…
acastells Feb 27, 2025
efed92d
fix(deepseek): wrong home link for @langchain/deepseek (#7754)
AllenFang Feb 27, 2025
1bf36c9
release(google-genai): 0.1.10 (#7773)
jacoblee93 Feb 27, 2025
d07f6da
release(core): 0.3.41 (#7774)
jacoblee93 Feb 27, 2025
ed1d8c2
feat(community): Support google cloud storage document loader (#7740)
AllenFang Feb 27, 2025
4a645ba
release(community): 0.3.33 (#7775)
jacoblee93 Feb 27, 2025
b593473
feat(aws): support reasoning blocks for claude 3.7 (#7768)
benjamincburns Feb 27, 2025
2de36ac
release(aws): 0.1.5
benjamincburns Feb 28, 2025
f559c04
release(aws): 0.1.5 (#7777)
benjamincburns Feb 28, 2025
9a06b28
Merge branch 'main' of https://github.com/FilipZmijewski/langchainjs
FilipZmijewski Feb 28, 2025
b9f65ac
Merge branch 'main' of https://github.com/langchain-ai/langchainjs
FilipZmijewski Feb 28, 2025
fa73d2f
feat: Add support for lightweight engine
FilipZmijewski Feb 28, 2025
e953757
Change id in notebooks
FilipZmijewski Feb 28, 2025
4 changes: 2 additions & 2 deletions docs/core_docs/docs/how_to/message_history.ipynb
@@ -202,7 +202,7 @@
"]\n",
"const output = await app.invoke({ messages: input }, config)\n",
"// The output contains all messages in the state.\n",
"// This will long the last message in the conversation.\n",
"// This will log the last message in the conversation.\n",
"console.log(output.messages[output.messages.length - 1]);"
]
},
@@ -583,4 +583,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}
4 changes: 2 additions & 2 deletions docs/core_docs/docs/how_to/sequence.ipynb
@@ -36,7 +36,7 @@
"\n",
"## The pipe method\n",
"\n",
"To show off how this works, let's go through an example. We'll walk through a common pattern in LangChain: using a [prompt template](/docs/concepts/prompt_templates) to format input into a [chat model](/docs/concepts/chat_models), and finally converting the chat message output into a string with an [output parser](/docs/concepts/output_parsers.\n",
"To show off how this works, let's go through an example. We'll walk through a common pattern in LangChain: using a [prompt template](/docs/concepts/prompt_templates) to format input into a [chat model](/docs/concepts/chat_models), and finally converting the chat message output into a string with an [output parser](/docs/concepts/output_parsers).\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
@@ -249,4 +249,4 @@
},
"nbformat": 4,
"nbformat_minor": 2
}
}
10 changes: 9 additions & 1 deletion docs/core_docs/docs/integrations/chat/google_vertex_ai.ipynb
@@ -48,7 +48,8 @@
"## Setup\n",
"\n",
"LangChain.js supports two different authentication methods based on whether\n",
"you're running in a Node.js environment or a web environment.\n",
"you're running in a Node.js environment or a web environment. It also supports\n",
"the authentication method used by Vertex AI Express Mode using either package.\n",
"\n",
"To access `ChatVertexAI` models you'll need to setup Google VertexAI in your Google Cloud Platform (GCP) account, save the credentials file, and install the `@langchain/google-vertexai` integration package.\n",
"\n",
@@ -66,6 +67,13 @@
"GOOGLE_VERTEX_AI_WEB_CREDENTIALS={\"type\":\"service_account\",\"project_id\":\"YOUR_PROJECT-12345\",...}\n",
"```\n",
"\n",
"If you are using Vertex AI Express Mode, you can install either the `@langchain/google-vertexai` or `@langchain/google-vertexai-web` package.\n",
"You can then go to the [Express Mode](https://console.cloud.google.com/vertex-ai/studio) API Key page and set your API Key in the `GOOGLE_API_KEY` environment variable:\n",
"\n",
"```bash\n",
"export GOOGLE_API_KEY=\"api_key_value\"\n",
"```\n",
"\n",
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n",
"\n",
"```bash\n",
5 changes: 3 additions & 2 deletions docs/core_docs/docs/integrations/chat/ibm.ipynb
@@ -171,7 +171,8 @@
" version: \"YYYY-MM-DD\",\n",
" serviceUrl: process.env.API_URL,\n",
" projectId: \"<PROJECT_ID>\",\n",
" spaceId: \"<SPACE_ID>\",\n",
" // spaceId: \"<SPACE_ID>\",\n",
" // idOrName: \"<DEPLOYMENT_ID>\",\n",
" model: \"<MODEL_ID>\",\n",
" ...props\n",
"});"
@@ -184,7 +185,7 @@
"source": [
"Note:\n",
"\n",
"- You must provide `spaceId` or `projectId` in order to proceed.\n",
"- You must provide `spaceId`, `projectId`, or `idOrName` (deployment id) unless you use the lightweight engine, which works without specifying any of them (refer to [watsonx.ai docs](https://www.ibm.com/docs/en/cloud-paks/cp-data/5.0.x?topic=install-choosing-installation-mode)).\n",
"- Depending on the region of your provisioned service instance, use correct serviceUrl."
]
},
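The scoping rule in the note above can be sketched as a small validator. This is a hypothetical helper for illustration only (`resolveScope` is not part of `@langchain/community`, and treating the three identifiers as mutually exclusive is an assumption here); the real `ChatWatsonx` constructor performs its own validation.

```typescript
// Hypothetical helper, for illustration only: not part of @langchain/community.
type WatsonxScope = {
  projectId?: string;
  spaceId?: string;
  idOrName?: string; // deployment id
  lightweightEngine?: boolean;
};

// Returns the scope identifier to use, or null when none is needed
// (lightweight engine). Throws when the configuration is missing or ambiguous.
function resolveScope(opts: WatsonxScope): string | null {
  const provided = [opts.projectId, opts.spaceId, opts.idOrName].filter(
    (v): v is string => typeof v === "string"
  );
  if (provided.length > 1) {
    throw new Error("Provide only one of projectId, spaceId or idOrName");
  }
  if (provided.length === 1) return provided[0];
  if (opts.lightweightEngine) return null; // no scope identifier required
  throw new Error(
    "One of projectId, spaceId or idOrName is required unless using the lightweight engine"
  );
}
```

With the lightweight engine enabled, no scope identifier is needed at all, which is what this PR enables.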
@@ -0,0 +1,33 @@
---
hide_table_of_contents: true
sidebar_class_name: node-only
---

# Google Cloud Storage

:::tip Compatibility
Only available on Node.js.
:::

This covers how to load a file from Google Cloud Storage into LangChain documents.

## Setup

To use this loader, you'll need to have Unstructured already set up and ready to use at an available URL endpoint. It can also be configured to run locally.

See the docs [here](/docs/integrations/document_loaders/file_loaders/unstructured) for information on how to do that.

You'll also need to install the official Google Cloud Storage SDK:

```bash npm2yarn
npm install @langchain/community @langchain/core @google-cloud/storage
```

## Usage

Once Unstructured is configured, you can use the Google Cloud Storage loader to load files and then convert them into a Document.

import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/google_cloud_storage.ts";

<CodeBlock language="typescript">{Example}</CodeBlock>
6 changes: 3 additions & 3 deletions docs/core_docs/docs/integrations/llms/ibm.ipynb
@@ -171,8 +171,8 @@
" version: \"YYYY-MM-DD\",\n",
" serviceUrl: process.env.API_URL,\n",
" projectId: \"<PROJECT_ID>\",\n",
" spaceId: \"<SPACE_ID>\",\n",
" idOrName: \"<DEPLOYMENT_ID>\",\n",
" // spaceId: \"<SPACE_ID>\",\n",
" // idOrName: \"<DEPLOYMENT_ID>\",\n",
" model: \"<MODEL_ID>\",\n",
" ...props,\n",
"});"
@@ -185,7 +185,7 @@
"source": [
"Note:\n",
"\n",
"- You must provide `spaceId`, `projectId` or `idOrName`(deployment id) in order to proceed.\n",
"- You must provide `spaceId`, `projectId`, or `idOrName` (deployment id) unless you use the lightweight engine, which works without specifying any of them (refer to [watsonx.ai docs](https://www.ibm.com/docs/en/cloud-paks/cp-data/5.0.x?topic=install-choosing-installation-mode)).\n",
"- Depending on the region of your provisioned service instance, use correct serviceUrl.\n",
"- You need to specify the model you want to use for inferencing through model_id."
]
43 changes: 43 additions & 0 deletions docs/core_docs/docs/integrations/memory/aurora_dsql.mdx
@@ -0,0 +1,43 @@
---
hide_table_of_contents: true
sidebar_class_name: node-only
---

import CodeBlock from "@theme/CodeBlock";

# Aurora DSQL Chat Memory

For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` for the serverless PostgreSQL-compatible [Amazon Aurora DSQL](https://aws.amazon.com/rds/aurora/dsql/) Database.

This is very similar to the PostgreSQL integration with a few differences to make it compatible with DSQL:

1. The `id` column in PostgreSQL is an auto-incrementing SERIAL; in DSQL it is a UUID generated with the database function `gen_random_uuid`.
2. A `created_at` column is added to track the order and history of the messages.
3. The `message` column in PostgreSQL is JSONB; in DSQL it is TEXT, with parsing handled in JavaScript.
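The TEXT round trip in point 3 can be sketched as follows. The `StoredMessage` shape here is a simplified stand-in assumed for illustration, not LangChain's exact serialized message type.

```typescript
// Sketch of the TEXT round trip: the message is kept as a JSON string in a
// TEXT column and parsed back in JavaScript when the history is read.
// Simplified shape, loosely modeled on LangChain's stored-message format.
interface StoredMessage {
  type: "human" | "ai" | "system";
  data: { content: string };
}

// Serialize for the TEXT column.
const toText = (m: StoredMessage): string => JSON.stringify(m);

// Parse back when reading history rows.
const fromText = (t: string): StoredMessage => JSON.parse(t) as StoredMessage;

const row = toText({ type: "human", data: { content: "Hi! I'm MJDeligan." } });
const parsed = fromText(row);
```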

## Setup

Go to your AWS Console and create an Aurora DSQL cluster: https://console.aws.amazon.com/dsql/clusters

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core pg @aws-sdk/dsql-signer
```

## Usage

Each chat history session is stored in an Aurora DSQL (Postgres-compatible) database and requires a session id.

The connection to Aurora DSQL is handled through a PostgreSQL pool. You can either pass an instance of a pool via the `pool` parameter or pass a pool config via the `poolConfig` parameter. See [pg-node docs on pools](https://node-postgres.com/apis/pool)
for more information. A provided pool takes precedence: if both a pool instance and a pool config are passed, only the pool will be used.
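The precedence rule can be sketched like this; `resolvePool` and the simplified types are hypothetical stand-ins (not the real `pg.Pool` / `pg.PoolConfig`), shown only to illustrate the behavior described above.

```typescript
// Simplified stand-ins for pg.Pool and pg.PoolConfig, for illustration only.
type FakePool = { kind: "pool" };
type FakePoolConfig = { host: string };

// A provided pool instance wins; otherwise a pool is built from the config.
function resolvePool(opts: {
  pool?: FakePool;
  poolConfig?: FakePoolConfig;
}): FakePool {
  if (opts.pool) return opts.pool; // instance takes precedence
  if (opts.poolConfig) return { kind: "pool" }; // stands in for new pg.Pool(opts.poolConfig)
  throw new Error("Either pool or poolConfig must be provided");
}

const existing: FakePool = { kind: "pool" };
// Both passed: only the instance is used, the config is ignored.
const fromInstance = resolvePool({ pool: existing, poolConfig: { host: "db.example" } });
// Config alone: a pool would be constructed internally.
const fromConfig = resolvePool({ poolConfig: { host: "db.example" } });
```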

For options on authentication and authorization for DSQL, see https://docs.aws.amazon.com/aurora-dsql/latest/userguide/authentication-authorization.html.

The following example uses the AWS-SDK to generate an authentication token that is passed to the pool configuration:

import Example from "@examples/memory/aurora_dsql.ts";

<CodeBlock language="typescript">{Example}</CodeBlock>
@@ -56,7 +56,7 @@
"\n",
"The vector store lives in the `@langchain/pinecone` package. You'll also need to install the `langchain` package to import the main `SelfQueryRetriever` class.\n",
"\n",
"The official Pinecone SDK (`@pinecone-database/pinecone`) is automatically installed as a dependency of `@langchain/pinecone`, but you may wish to install it independently as well.\n",
"You will also need to install the official Pinecone SDK (`@pinecone-database/pinecone@5`).\n",
"\n",
"For this example, we'll also use OpenAI embeddings, so you'll need to install the `@langchain/openai` package and [obtain an API key](https://platform.openai.com):\n",
"\n",
@@ -60,7 +60,7 @@
"<IntegrationInstallTooltip></IntegrationInstallTooltip>\n",
"\n",
"<Npm2Yarn>\n",
" @langchain/pinecone @langchain/core\n",
" @langchain/pinecone @langchain/core @pinecone-database/pinecone@5\n",
"</Npm2Yarn>\n",
"```"
]
11 changes: 11 additions & 0 deletions docs/core_docs/docs/integrations/text_embedding/voyageai.mdx
@@ -8,12 +8,23 @@
The `inputType` parameter allows you to specify the type of input text for bette…
- `document`: Use this for documents or content that you want to be retrievable. Voyage AI will prepend a prompt to optimize the embeddings for document use cases.
- `None` (default): The input text will be directly encoded without any additional prompt.

Additionally, the class supports new parameters for further customization of the embedding process:

- **truncation**: Whether to truncate the input texts to the maximum length allowed by the model.
- **outputDimension**: The desired dimension of the output embeddings.
- **outputDtype**: The data type of the output embeddings. Can be `"float"` or `"int8"`.
- **encodingFormat**: The format of the output embeddings. Can be `"float"`, `"base64"`, or `"ubinary"`.

```typescript
import { VoyageEmbeddings } from "@langchain/community/embeddings/voyage";

const embeddings = new VoyageEmbeddings({
apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.VOYAGEAI_API_KEY
inputType: "document", // Optional: specify input type as 'query', 'document', or omit for None / Undefined / Null
truncation: true, // Optional: enable truncation of input texts
outputDimension: 768, // Optional: set desired output embedding dimension
outputDtype: "float", // Optional: set output data type ("float" or "int8")
encodingFormat: "float", // Optional: set output encoding format ("float", "base64", or "ubinary")
});
```

4 changes: 2 additions & 2 deletions docs/core_docs/docs/integrations/vectorstores/pinecone.ipynb
@@ -58,7 +58,7 @@
"<IntegrationInstallTooltip></IntegrationInstallTooltip>\n",
"\n",
"<Npm2Yarn>\n",
" @langchain/pinecone @langchain/openai @langchain/core @pinecone-database/pinecone \n",
" @langchain/pinecone @langchain/openai @langchain/core @pinecone-database/pinecone@5\n",
"</Npm2Yarn>\n",
"```\n",
"\n",
@@ -363,4 +363,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}
2 changes: 1 addition & 1 deletion docs/core_docs/docs/tutorials/chatbot.ipynb
@@ -470,7 +470,7 @@
"]\n",
"const output = await app.invoke({ messages: input }, config)\n",
"// The output contains all messages in the state.\n",
"// This will long the last message in the conversation.\n",
"// This will log the last message in the conversation.\n",
"console.log(output.messages[output.messages.length - 1]);"
]
},
2 changes: 1 addition & 1 deletion docs/core_docs/docs/tutorials/rag.ipynb
@@ -372,7 +372,7 @@
"\n",
"- [Docs](/docs/concepts/document_loaders): Detailed documentation on how to use\n",
"- [Integrations](/docs/integrations/document_loaders/)\n",
"- [Interface](https:/api.js.langchain.com/classes/langchain.document_loaders_base.BaseDocumentLoader.html): API reference for the base interface.\n",
"- [Interface](https://api.js.langchain.com/classes/langchain.document_loaders_base.BaseDocumentLoader.html): API reference for the base interface.\n",
"\n",
"### Splitting documents\n",
"\n",
12 changes: 12 additions & 0 deletions docs/core_docs/vercel.json
@@ -100,6 +100,18 @@
{
"source": "/docs/modules/model_io/prompts/quick_start/",
"destination": "/docs/concepts/prompt_templates"
},
{
"source": "/docs/modules/model_io/prompts(/?)",
"destination": "/docs/concepts/prompt_templates"
},
{
"source": "/docs/guides/expression_language/cookbook(/?)",
"destination": "/docs/how_to/sequence"
},
{
"source": "/docs/modules/model_io/models(/?)",
"destination": "/docs/integrations/chat/"
}
]
}
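The `(/?)` in the redirect sources above makes a single trailing slash optional. Vercel compiles these sources with path-to-regexp, so the plain RegExp below is only a rough approximation for illustration:

```typescript
// Rough approximation of the "(/?)" redirect source behavior: the optional
// group matches the path with or without one trailing slash, but not child
// paths. Vercel's actual matching uses path-to-regexp, not a raw RegExp.
const pattern = /^\/docs\/modules\/model_io\/prompts(\/?)$/;

const matchesBare = pattern.test("/docs/modules/model_io/prompts");
const matchesSlash = pattern.test("/docs/modules/model_io/prompts/");
const matchesChild = pattern.test("/docs/modules/model_io/prompts/quick_start/");
```

Child paths such as `/quick_start/` are handled by their own, more specific redirect entries.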
4 changes: 2 additions & 2 deletions examples/package.json
@@ -24,6 +24,7 @@
"author": "LangChain",
"license": "MIT",
"dependencies": {
"@aws-sdk/dsql-signer": "^3.738.0",
"@azure/identity": "^4.2.1",
"@browserbasehq/stagehand": "^1.3.0",
"@clickhouse/client": "^0.2.5",
@@ -68,7 +69,7 @@
"@langchain/yandex": "workspace:*",
"@layerup/layerup-security": "^1.5.12",
"@opensearch-project/opensearch": "^2.2.0",
"@pinecone-database/pinecone": "^4.0.0",
"@pinecone-database/pinecone": "^5.0.2",
"@planetscale/database": "^1.8.0",
"@prisma/client": "^4.11.0",
"@qdrant/js-client-rest": "^1.9.0",
@@ -87,7 +88,6 @@
"date-fns": "^3.3.1",
"duck-duck-scrape": "^2.2.5",
"exa-js": "^1.0.12",
"faiss-node": "^0.5.1",
"firebase-admin": "^12.0.0",
"graphql": "^16.6.0",
"hdb": "^0.19.8",
17 changes: 17 additions & 0 deletions examples/src/document_loaders/google_cloud_storage.ts
@@ -0,0 +1,17 @@
import { GoogleCloudStorageLoader } from "@langchain/community/document_loaders/web/google_cloud_storage";

const loader = new GoogleCloudStorageLoader({
bucket: "my-bucket-123",
file: "path/to/file.pdf",
storageOptions: {
keyFilename: "/path/to/keyfile.json",
},
unstructuredLoaderOptions: {
apiUrl: "http://localhost:8000/general/v0/general",
    apiKey: "", // this will soon be required
},
});

const docs = await loader.load();

console.log(docs);
88 changes: 88 additions & 0 deletions examples/src/memory/aurora_dsql.ts
@@ -0,0 +1,88 @@
import pg from "pg";

import { DsqlSigner } from "@aws-sdk/dsql-signer";
import { AuroraDsqlChatMessageHistory } from "@langchain/community/stores/message/aurora_dsql";
import { ChatOpenAI } from "@langchain/openai";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

import {
ChatPromptTemplate,
MessagesPlaceholder,
} from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

async function getPostgresqlPool() {
const signer = new DsqlSigner({
hostname: process.env.DSQL_ENDPOINT!,
});

const token = await signer.getDbConnectAdminAuthToken();

if (!token) throw new Error("Auth token error for DSQL");

const poolConfig: pg.PoolConfig = {
host: process.env.DSQL_ENDPOINT,
port: 5432,
user: "admin",
password: token,
ssl: true,
database: "postgres",
};

const pool = new pg.Pool(poolConfig);
return pool;
}

const pool = await getPostgresqlPool();

const model = new ChatOpenAI();

const prompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant. Answer all questions to the best of your ability.",
],
new MessagesPlaceholder("chat_history"),
["human", "{input}"],
]);

const chain = prompt.pipe(model).pipe(new StringOutputParser());

const chainWithHistory = new RunnableWithMessageHistory({
runnable: chain,
inputMessagesKey: "input",
historyMessagesKey: "chat_history",
getMessageHistory: async (sessionId) => {
const chatHistory = new AuroraDsqlChatMessageHistory({
sessionId,
pool,
// Can also pass `poolConfig` to initialize the pool internally,
// but easier to call `.end()` at the end later.
});
return chatHistory;
},
});

const res1 = await chainWithHistory.invoke(
{
input: "Hi! I'm MJDeligan.",
},
{ configurable: { sessionId: "langchain-test-session" } }
);
console.log(res1);
/*
"Hello MJDeligan! It's nice to meet you. My name is AI. How may I assist you today?"
*/

const res2 = await chainWithHistory.invoke(
{ input: "What did I just say my name was?" },
{ configurable: { sessionId: "langchain-test-session" } }
);
console.log(res2);

/*
"You said your name was MJDeligan."
*/

// If you provided a pool config you should close the created pool when you are done
await pool.end();