Need help in streaming the graph's agent response from backend to frontend #668
Comments
I'm trying something similar with Express.js and having trouble getting it to work. I can stream the response, but there is no benefit to streaming because I can't seem to pass the streamed chunks coming from the model through to the client. The only way I've gotten it to work is invoking the model and getting a complete message string as the response. Here is a working example that I'm hoping someone can change to actually use the streamed output of the LLM:

```ts
import express from 'express';
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { AIMessage, BaseMessage, SystemMessage } from '@langchain/core/messages';
import { ChatBedrockConverse } from "@langchain/aws";
import { StateGraph, START, END, MessagesAnnotation } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { getCredentials } from './lib/credentials';
const router = express.Router();
// Return the content of the last message when it is a finished AI turn.
const lastMessageContent = (messages: BaseMessage[]) => {
  if (messages.length > 0) {
    const lastMessage = messages[messages.length - 1];
    if (lastMessage instanceof AIMessage && lastMessage.response_metadata.stopReason === "end_turn") {
      return lastMessage.content;
    }
  }
  return "";
};
const weatherTool = tool(async ({ query }) => {
  if (query.toLowerCase().includes("sf")) {
    return "It's 60 degrees and foggy.";
  }
  return "It's 90 degrees and sunny.";
}, {
  name: "weather",
  description: "Call to get the current weather for a location.",
  schema: z.object({ query: z.string().describe("The query to use in your search.") })
});
const tools = [weatherTool];
const toolNode = new ToolNode(tools);
const model = new ChatBedrockConverse({
  model: process.env.MODEL_ID || "anthropic.claude-3-sonnet-20240229-v1:0",
  region: process.env.AWS_REGION,
  credentials: getCredentials()
}).bindTools(tools);
const prompt = `You are Redbaird, a snarky pirate.`;
const systemMessage = new SystemMessage(prompt);
// Route to the tool node when the model requested tool calls; otherwise finish.
async function shouldContinue(state: typeof MessagesAnnotation.State) {
  const messages = state.messages;
  const lastMessage = messages[messages.length - 1] as AIMessage;
  if (lastMessage?.tool_calls?.length) {
    return "tools";
  }
  return END;
}

// Prepend the system message and call the model with the running history.
async function callModel(state: typeof MessagesAnnotation.State) {
  const messages: BaseMessage[] = [systemMessage].concat(state.messages);
  const message = await model.invoke(messages);
  return { messages: [message] };
}
const workflow = new StateGraph(MessagesAnnotation)
  .addNode("llm", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "llm")
  .addConditionalEdges("llm", shouldContinue)
  .addEdge("tools", "llm");

const app = workflow.compile();
router.get('/', function (_req, res) {
  res.send('API server');
});

router.post('/chat', async (req, res) => {
  try {
    const messages: BaseMessage[] = req.body.messages;
    // streamMode ["values"] emits full state snapshots, so this writes each
    // completed AI message rather than individual tokens.
    const stream = await app.stream({ messages }, { streamMode: ["values"] });
    for await (const [_type, chunk] of stream) {
      if (chunk.messages) {
        res.write(lastMessageContent(chunk.messages));
      }
    }
    res.end();
  } catch (e: any) {
    console.error("POST api/chat error:", e, "at", e.stack?.split("\n at p"));
    res.status(e.status ?? 500).json({ error: e.message });
  }
});
export default router;
```
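In case it helps anyone experimenting with this: a minimal sketch of how the handler might forward individual tokens, assuming a recent `@langchain/langgraph` release where `streamMode: "messages"` yields `[messageChunk, metadata]` tuples as the LLM streams. The route name is illustrative; everything else reuses the example above.

```ts
// Hypothetical token-streaming variant of the /chat handler above.
router.post('/chat-tokens', async (req, res) => {
  try {
    const messages: BaseMessage[] = req.body.messages;
    res.setHeader('Content-Type', 'text/plain; charset=utf-8');
    // streamMode "messages" (if available in your langgraph version) emits
    // [AIMessageChunk, metadata] pairs per token instead of state snapshots.
    const stream = await app.stream({ messages }, { streamMode: "messages" });
    for await (const [messageChunk, _metadata] of stream) {
      // Tool-call chunks can have non-string content; forward plain text only.
      if (typeof messageChunk.content === "string" && messageChunk.content) {
        res.write(messageChunk.content);
      }
    }
    res.end();
  } catch (e: any) {
    res.status(e.status ?? 500).json({ error: e.message });
  }
});
```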
For anyone looking for Next.js + LangGraph.js + Vercel AI SDK + streaming, here is my example app, and you can find the code here. Currently it has simple chat completion and chat-with-history features. I'm working on tool calling and generative UI next.
Hi, I'm learning LangGraph right now and trying to build a full-stack app with Next.js. I can invoke the graph and send the completed response, but I don't know how to stream the response from the backend to the frontend. Right now, I'm not using LangGraph Studio or Cloud. Can you please share code snippets for Next.js or Express.js that invoke a graph and stream the final response?
On the frontend, I'm trying to use Vercel's AI package to stream the chat response, so it would be great if you could share code that works with that package.
Thanks in advance.
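Not an official answer, but one pattern worth trying, sketched under assumptions: the Vercel AI SDK's `ai` package exports a `LangChainAdapter`, and a compiled LangGraph graph is a Runnable that exposes `streamEvents`. The file path and the `@/lib/agent` import below are illustrative, not from this thread.

```ts
// app/api/chat/route.ts -- illustrative path for a Next.js App Router handler
import { LangChainAdapter } from "ai";
import { app } from "@/lib/agent"; // hypothetical module exporting the compiled graph

export async function POST(req: Request) {
  const { messages } = await req.json();
  // streamEvents (v2) emits fine-grained events, including per-token
  // on_chat_model_stream events, which LangChainAdapter can convert into
  // a data stream response consumable by useChat on the client.
  const eventStream = app.streamEvents({ messages }, { version: "v2" });
  return LangChainAdapter.toDataStreamResponse(eventStream);
}
```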