Commit

Merge remote-tracking branch 'origin/main' into couchbase-store-optimization
lokesh-couchbase committed Mar 25, 2024
2 parents abded60 + e3319d9 commit e64c5e9
Showing 36 changed files with 1,548 additions and 157 deletions.
10 changes: 9 additions & 1 deletion docs/core_docs/docs/ecosystem/langserve.mdx
@@ -41,9 +41,17 @@ import Example from "@examples/ecosystem/langsmith.ts";

<CodeBlock language="typescript">{Example}</CodeBlock>

-[`streamLog`](/docs/expression_language/interface) is a lower level method for streaming chain intermediate steps as partial JSONPatch chunks.
+[`streamEvents`](/docs/expression_language/interface) allows you to stream chain intermediate steps as events such as `on_llm_start` and `on_chain_stream`.
+See the [table here](/docs/expression_language/interface#stream-events) for a full list of events you can handle.
This method also allows a few extra options to only include or exclude certain named steps:

+import StreamEventsExample from "@examples/ecosystem/langsmith_stream_events.ts";
+
+<CodeBlock language="typescript">{StreamEventsExample}</CodeBlock>
+
+[`streamLog`](/docs/expression_language/interface) is a lower level method for streaming chain intermediate steps as partial JSONPatch chunks.
+Like `streamEvents`, this method also allows a few extra options to only include or exclude certain named steps:

import StreamLogExample from "@examples/ecosystem/langsmith_stream_log.ts";

<CodeBlock language="typescript">{StreamLogExample}</CodeBlock>
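For orientation, here is a minimal sketch of the `streamEvents` API (this is not the linked example file; it assumes an OpenAI chat model and the `v1` event schema):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const chain = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}")
  .pipe(new ChatOpenAI({ temperature: 0 }))
  .pipe(new StringOutputParser());

// The second argument selects the event schema version; an optional third
// argument such as { includeNames: [...] } or { excludeTypes: [...] }
// restricts which named steps emit events.
for await (const event of chain.streamEvents({ topic: "bears" }, { version: "v1" })) {
  if (event.event === "on_chat_model_stream") {
    // Partial model output arrives chunk by chunk.
    console.log(event.data.chunk);
  }
}
```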
30 changes: 30 additions & 0 deletions docs/core_docs/docs/integrations/chat/premai.mdx
@@ -0,0 +1,30 @@
---
sidebar_label: PremAI
---

import CodeBlock from "@theme/CodeBlock";

# ChatPrem

## Setup

1. Create a Prem AI account and get your API key [here](https://app.premai.io/accounts/signup/).
2. Export your API key, or set it inline when instantiating the class (see the sketch below); the `ChatPrem` class reads `process.env.PREM_API_KEY` by default.

```bash
export PREM_API_KEY=your-api-key
```
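You can also pass the key inline when constructing the class, as in this sketch (the import path and constructor fields are assumptions; see the full example further down):

```typescript
import { ChatPrem } from "@langchain/community/chat_models/premai";

// A sketch: pass the API key explicitly instead of relying on the environment.
const model = new ChatPrem({
  apiKey: "your-api-key",
  project_id: 123, // hypothetical project ID from your Prem dashboard
});
```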

You can use models provided by Prem AI as follows:

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/community
```

import PremAI from "@examples/models/chat/integration_premai.ts";

<CodeBlock language="typescript">{PremAI}</CodeBlock>
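If the rendered example above is unavailable, a minimal usage sketch looks roughly like this (the import path, constructor fields, and `project_id` value are assumptions; check the API reference):

```typescript
import { ChatPrem } from "@langchain/community/chat_models/premai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatPrem({
  // apiKey defaults to process.env.PREM_API_KEY when omitted.
  apiKey: process.env.PREM_API_KEY,
  project_id: 123, // hypothetical project ID
});

const response = await model.invoke([new HumanMessage("Hello!")]);
console.log(response.content);
```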
28 changes: 28 additions & 0 deletions docs/core_docs/docs/integrations/text_embedding/premai.mdx
@@ -0,0 +1,28 @@
---
sidebar_label: Prem AI
---

# Prem AI

The `PremEmbeddings` class uses the Prem AI API to generate embeddings for a given text.

## Setup

To use the Prem API, you'll need an API key. You can sign up for a Prem account and create an API key [here](https://app.premai.io/accounts/signup/).

You'll first need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/community
```

## Usage

import CodeBlock from "@theme/CodeBlock";
import PremExample from "@examples/embeddings/premai.ts";

<CodeBlock language="typescript">{PremExample}</CodeBlock>
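As above, a minimal sketch of what the embedding example likely does (import path and constructor fields are assumptions):

```typescript
import { PremEmbeddings } from "@langchain/community/embeddings/premai";

const embeddings = new PremEmbeddings({
  // apiKey defaults to process.env.PREM_API_KEY when omitted.
  apiKey: process.env.PREM_API_KEY,
  project_id: 123, // hypothetical project ID
});

// Embed a single query string; embedDocuments([...]) handles batches.
const vector = await embeddings.embedQuery("Hello world");
console.log(vector.length);
```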
31 changes: 17 additions & 14 deletions docs/core_docs/docs/langgraph.mdx
@@ -396,7 +396,10 @@ Let's define the nodes, as well as a function to decide what conditional edg
```typescript
import { FunctionMessage } from "@langchain/core/messages";
import { AgentAction } from "@langchain/core/agents";
-import type { RunnableConfig } from "@langchain/core/runnables";
+import {
+  ChatPromptTemplate,
+  MessagesPlaceholder,
+} from "@langchain/core/prompts";

// Define the function that determines whether to continue or not
const shouldContinue = (state: { messages: Array<BaseMessage> }) => {
@@ -428,33 +431,33 @@ const _getAction = (state: { messages: Array<BaseMessage> }): AgentAction => {
  // We construct an AgentAction from the function_call
  return {
    tool: lastMessage.additional_kwargs.function_call.name,
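    // `arguments` is a JSON-encoded string, so it must be parsed, not stringified.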
-    toolInput: JSON.stringify(
+    toolInput: JSON.parse(
      lastMessage.additional_kwargs.function_call.arguments
    ),
    log: "",
  };
};

// Define the function that calls the model
-const callModel = async (
-  state: { messages: Array<BaseMessage> },
-  config?: RunnableConfig
-) => {
+const callModel = async (state: { messages: Array<BaseMessage> }) => {
  const { messages } = state;
-  const response = await newModel.invoke(messages, config);
+  // You can use a prompt here to tweak model behavior.
+  // You can also just pass messages to the model directly.
+  const prompt = ChatPromptTemplate.fromMessages([
+    ["system", "You are a helpful assistant."],
+    new MessagesPlaceholder("messages"),
+  ]);
+  const response = await prompt.pipe(newModel).invoke({ messages });
  // We return a list, because this will get added to the existing list
  return {
    messages: [response],
  };
};

-const callTool = async (
-  state: { messages: Array<BaseMessage> },
-  config?: RunnableConfig
-) => {
+const callTool = async (state: { messages: Array<BaseMessage> }) => {
  const action = _getAction(state);
  // We call the tool_executor and get back a response
-  const response = await toolExecutor.invoke(action, config);
+  const response = await toolExecutor.invoke(action);
  // We use the response to create a FunctionMessage
  const functionMessage = new FunctionMessage({
    content: response,
@@ -532,7 +535,7 @@ const inputs = {
const result = await app.invoke(inputs);
```

-See a LangSmith trace of this run [here](https://smith.langchain.com/public/2562d46e-da94-4c9d-9b14-3759a26aec9b/r).
+See a LangSmith trace of this run [here](https://smith.langchain.com/public/144af8a3-b496-43aa-ba9d-f0d5894196e2/r).

This may take a little bit - it's making a few calls behind the scenes.
To start seeing intermediate results as they happen, we can use streaming - see below for more information.
@@ -555,7 +558,7 @@ for await (const output of await app.stream(inputs)) {
}
```

-See a LangSmith trace of this run [here](https://smith.langchain.com/public/9afacb13-b9dc-416e-abbe-6ed2a0811afe/r).
+See a LangSmith trace of this run [here](https://smith.langchain.com/public/968cd1bf-0db2-410f-a5b4-0e73066cf06e/r).
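Each streamed chunk is an object keyed by node name. A minimal way to inspect it (a sketch - node names such as `agent` depend on how the graph was built):

```typescript
for await (const output of await app.stream(inputs)) {
  // Each chunk maps a node name to that node's output for this step.
  for (const [nodeName, value] of Object.entries(output)) {
    console.log(`Output from node: ${nodeName}`);
    console.log(JSON.stringify(value, null, 2));
  }
}
```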

## Running Examples
