Merge pull request #233 from langchain-ai/jacob/langgraph-cli
Update instructions to use LangGraph.js CLI
bracesproul authored Jan 27, 2025
2 parents fbf7908 + affee61 commit 5416017
Showing 18 changed files with 949 additions and 158 deletions.
6 changes: 6 additions & 0 deletions .env.example
@@ -1,4 +1,5 @@
# LangSmith tracing
# Set this to `false` to disable tracing to LangSmith
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=

@@ -19,6 +20,11 @@ NEXT_PUBLIC_ANTHROPIC_ENABLED=true
NEXT_PUBLIC_OPENAI_ENABLED=true
# Set to false by default since the base OpenAI API is more common than the Azure OpenAI API.
NEXT_PUBLIC_AZURE_ENABLED=false
NEXT_PUBLIC_OLLAMA_ENABLED=false

# If using Ollama, set the API URL here. This only needs to be set if your Ollama server runs on a non-default port.
# It will default to `http://host.docker.internal:11434` if not set.
# OLLAMA_API_URL="http://host.docker.internal:11434"

# LangGraph Deployment, or local development server via LangGraph Studio.
# If running locally, this URL should be set in the `constants.ts` file.
6 changes: 5 additions & 1 deletion .gitignore
@@ -5,6 +5,7 @@
/.pnp
.pnp.js
.yarn/install-state.gz
.yarn/cache

# testing
/coverage
@@ -36,4 +37,7 @@ yarn-error.log*
*.tsbuildinfo
next-env.d.ts

credentials.json
credentials.json

# LangGraph API
.langgraph_api
44 changes: 26 additions & 18 deletions README.md
@@ -50,7 +50,6 @@ Open Canvas requires the following API keys and external services:

- [LangSmith](https://smith.langchain.com/) for tracing & observability


### Installation

First, clone the repository:
@@ -92,15 +91,14 @@ Now we'll cover how to set up and run the LangGraph server locally.

Follow the [`Installation` instructions in the LangGraph docs](https://langchain-ai.github.io/langgraph/cloud/reference/cli/#installation) to install the LangGraph CLI.

Once installed, navigate to the root of the Open Canvas repo and run `LANGSMITH_API_KEY="<YOUR_LANGSMITH_API_KEY>" langgraph up --watch --port 54367` (replacing `<YOUR_LANGSMITH_API_KEY>` with your LangSmith API key).
Once installed, navigate to the root of the Open Canvas repo and run `yarn dev:server` (this runs `npx @langchain/langgraph-cli dev --port 54367`).

Once it finishes pulling the docker image and installing dependencies, you should see it log:

```
Ready!
- API: http://localhost:54367
- Docs: http://localhost:54367/docs
- LangGraph Studio: https://smith.langchain.com/studio/?baseUrl=http://*********:54367
Ready!
- 🚀 API: http://localhost:54367
- 🎨 Studio UI: https://smith.langchain.com/studio?baseUrl=http://localhost:54367
```

After your LangGraph server is running, execute the following command to start the Open Canvas app:
@@ -109,8 +107,9 @@ After your LangGraph server is running, execute the following command to start the Open Canvas app:
yarn dev
```

Then, open [localhost:3000](http://localhost:3000) with your browser and start interacting!
On initial load, compilation may take some time.

Then, open [localhost:3000](http://localhost:3000) with your browser and start interacting!

## LLM Models

@@ -126,25 +125,34 @@ If you'd like to add a new model, follow these simple steps:
2. Install the necessary package for the provider (e.g. `@langchain/anthropic`).
3. Update the `getModelConfig` function in `src/agent/utils.ts` to include an `if` statement for your new model name and provider (see the sketch after this list).
4. Manually test by checking you can:
> - 4a. Generate a new artifact
>
> - 4b. Generate a followup message (happens automatically after generating an artifact)
>
> - 4c. Update an artifact via a message in chat
>
> - 4d. Update an artifact via a quick action
>
> - 4e. Repeat for text/code (ensure both work)
> - 4a. Generate a new artifact
> - 4b. Generate a followup message (happens automatically after generating an artifact)
> - 4c. Update an artifact via a message in chat
> - 4d. Update an artifact via a quick action
> - 4e. Repeat for text/code (ensure both work)
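
As a rough sketch of what the branch in step 3 might look like (the exact shape of `getModelConfig` may differ; the `mistral-` prefix and `"mistralai"` provider value here are purely illustrative assumptions):

```
// Hypothetical branch inside getModelConfig (src/agent/utils.ts).
// The { modelName, modelProvider } return shape matches how the function
// is destructured elsewhere in this PR; the model and provider names
// below are illustrative assumptions, not part of this commit.
if (customModelName.startsWith("mistral-")) {
  return {
    modelName: customModelName,
    modelProvider: "mistralai",
  };
}
```
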
### Local Ollama models

Open Canvas supports calling local LLMs running on Ollama. This is not enabled in the hosted version of Open Canvas, but you can use it in your own local or deployed instance.

To use a local Ollama model, first ensure you have [Ollama](https://ollama.com) installed and have pulled a model that supports tool calling (the default is `llama3.3`).

Next, start the Ollama server by running `ollama run llama3.3`.

Then, set the `NEXT_PUBLIC_OLLAMA_ENABLED` environment variable to `true`, and the `OLLAMA_API_URL` environment variable to the URL of your Ollama server (this defaults to `http://host.docker.internal:11434`; if you did not set a custom port when starting your Ollama server, you should not need to set this variable).
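
For example, a local setup pointing at an Ollama server on a non-default port might add the following to `.env` (the port `11435` is an illustrative assumption):

```
NEXT_PUBLIC_OLLAMA_ENABLED=true
# Only needed in this example because of the non-default port:
OLLAMA_API_URL="http://host.docker.internal:11435"
```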

> [!NOTE]
> Open-source LLMs are typically not as good at instruction following as proprietary models like GPT-4o or Claude Sonnet. Because of this, you may experience errors or unexpected behavior when using local LLMs.

## Troubleshooting

Below are some common issues you may run into if running Open Canvas yourself:

- **I have the LangGraph server running successfully, and my client can make requests, but no text is being generated:** This can happen if you start & connect to multiple different LangGraph servers locally in the same browser. Try clearing the `oc_thread_id_v2` cookie and refreshing the page. This is because each unique LangGraph server has its own database where threads are stored, so a thread ID from one server will not be found in the database of another server.
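
  As a sketch, you can clear that cookie from your browser's devtools console (it can also be removed via the Application/Storage tab):

  ```
  document.cookie = "oc_thread_id_v2=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/";
  ```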

- **I'm getting 500 network errors when I try to make requests on the client:** Ensure you have the LangGraph server running, and you're making requests to the correct port. You can specify the port to use by passing the `--port <PORT>` flag to the `langgraph up` command, and you can set the URL to make requests to by either setting the `LANGGRAPH_API_URL` environment variable, or by changing the fallback value of the `LANGGRAPH_API_URL` variable in `constants.ts`.
- **I'm getting 500 network errors when I try to make requests on the client:** Ensure you have the LangGraph server running, and you're making requests to the correct port. You can specify the port to use by passing the `--port <PORT>` flag to the `npx @langchain/langgraph-cli dev` command, and you can set the URL to make requests to by either setting the `LANGGRAPH_API_URL` environment variable, or by changing the fallback value of the `LANGGRAPH_API_URL` variable in `constants.ts`.

- **I'm getting "thread ID not found" error toasts when I try to make requests on the client:** Ensure you have the LangGraph server running, and you're making requests to the correct port. You can specify the port to use by passing the `--port <PORT>` flag to the `langgraph up` command, and you can set the URL to make requests to by either setting the `LANGGRAPH_API_URL` environment variable, or by changing the fallback value of the `LANGGRAPH_API_URL` variable in `constants.ts`.
- **I'm getting "thread ID not found" error toasts when I try to make requests on the client:** Ensure you have the LangGraph server running, and you're making requests to the correct port. You can specify the port to use by passing the `--port <PORT>` flag to the `npx @langchain/langgraph-cli dev` command, and you can set the URL to make requests to by either setting the `LANGGRAPH_API_URL` environment variable, or by changing the fallback value of the `LANGGRAPH_API_URL` variable in `constants.ts`.

- **`Model name is missing in config.` error is being thrown when I make requests:** This error occurs when the `customModelName` is not specified in the config. You can resolve this by setting the `customModelName` field inside `config.configurable` to the name of the model you want to use when invoking the graph. See [this doc](https://langchain-ai.github.io/langgraphjs/how-tos/configuration/) on how to use configurable fields in LangGraph.
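
  For example, mirroring how the evals added in this PR invoke graph nodes, the field can be set like this (a sketch; `gpt-4o-mini` is just an example value):

  ```
  const res = await graph.invoke(inputs, {
    configurable: {
      customModelName: "gpt-4o-mini",
    },
  });
  ```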

90 changes: 90 additions & 0 deletions evals/agent.eval.ts
@@ -0,0 +1,90 @@
import { expect } from "vitest";
import * as ls from "langsmith/vitest";
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

import { graph } from "../src/agent/open-canvas/index";
import { QUERY_ROUTING_DATA } from "./data/query_routing";
import { CODEGEN_DATA } from "./data/codegen";

ls.describe("query routing", () => {
ls.test(
"routes followups with questions to update artifact",
{
inputs: QUERY_ROUTING_DATA.inputs,
referenceOutputs: QUERY_ROUTING_DATA.referenceOutputs,
},
async ({ inputs, referenceOutputs }) => {
const generatePathNode = graph.nodes.generatePath;
const res = await generatePathNode.invoke(inputs, {
configurable: {
customModelName: "gpt-4o-mini",
},
});
ls.logOutputs(res);
expect(res).toEqual(referenceOutputs);
}
);
});

const qualityEvaluator = async (params: {
inputs: string;
outputs: string;
}) => {
const judge = new ChatOpenAI({ model: "gpt-4o" }).withStructuredOutput(
z.object({
justification: z
.string()
.describe("reasoning for why you are assigning a given quality score"),
quality_score: z
.number()
.describe(
"quality score for how well the generated code answers the query."
),
}),
{
name: "judge",
}
);
const EVAL_PROMPT = [
`Given the following user query and generated code, judge whether the`,
`code satisfies the user's query. Return a quality score between 1 and 10,`,
`where a 1 would be completely irrelevant to the user's input, and 10 would be a perfectly accurate code sample.`,
`A 5 would be a code sample that is partially on target, but is missing some aspect of a user's request.`,
`Justify your answer.\n`,
`<query>\n${params.inputs}\n</query>\n`,
`<generated_code>\n${params.outputs}\n</generated_code>`,
].join(" ");
const res = await judge.invoke(EVAL_PROMPT);
return {
key: "quality",
score: res.quality_score,
comment: res.justification,
};
};

ls.describe("codegen", () => {
ls.test(
"generate code with an LLM agent when asked",
{
inputs: CODEGEN_DATA.inputs,
referenceOutputs: {},
},
async ({ inputs }) => {
const generateArtifactNode = graph.nodes.generateArtifact;
const res = await generateArtifactNode.invoke(inputs, {
configurable: {
customModelName: "gpt-4o-mini",
},
});
ls.logOutputs(res);
const generatedCode = (res.artifact?.contents[0] as any).code;
expect(generatedCode).toBeDefined();
const wrappedEvaluator = ls.wrapEvaluator(qualityEvaluator);
await wrappedEvaluator({
inputs: inputs.messages[0].content,
outputs: generatedCode,
});
}
);
});
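
A sketch of how these evals might be run locally, via the `eval` script added to `package.json` later in this diff (this assumes `LANGSMITH_API_KEY` and `OPENAI_API_KEY` are available, e.g. from `.env`, since the vitest config loads `dotenv/config`):

```
yarn eval
```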
10 changes: 10 additions & 0 deletions evals/data/codegen.ts
@@ -0,0 +1,10 @@
import { HumanMessage } from "@langchain/core/messages";

export const CODEGEN_DATA: Record<string, any> = {
inputs: {
messages: [
new HumanMessage("Write me code for an LLM agent that does scraping"),
],
next: "generateArtifact",
},
};
30 changes: 30 additions & 0 deletions evals/data/query_routing.ts
@@ -0,0 +1,30 @@
import { AIMessage, HumanMessage } from "@langchain/core/messages";

export const QUERY_ROUTING_DATA: Record<string, any> = {
inputs: {
messages: [
new HumanMessage(
"generate code for an LLM agent that can scrape the web"
),
new AIMessage(
"I've crafted a web scraper for you that fetches and parses content from a specified URL. Let me know if you need any modifications or additional features!"
),
new HumanMessage("Where's the LLM?"),
],
artifact: {
currentIndex: 1,
contents: [
{
index: 1,
type: "code" as const,
title: "Web Scraper LLM Agent",
code: "import requests\nfrom bs4 import BeautifulSoup\n\nclass WebScraper:\n def __init__(self, url):\n self.url = url\n self.content = None\n\n def fetch_content(self):\n try:\n response = requests.get(self.url)\n response.raise_for_status() # Check for HTTP errors\n self.content = response.text\n except requests.RequestException as e:\n print(f\"Error fetching {self.url}: {e}\")\n\n def parse_content(self):\n if self.content:\n soup = BeautifulSoup(self.content, 'html.parser')\n return soup\n else:\n print(\"No content to parse. Please fetch content first.\")\n return None\n\n def scrape(self):\n self.fetch_content()\n return self.parse_content()\n\n# Example usage:\nif __name__ == '__main__':\n url = 'https://example.com'\n scraper = WebScraper(url)\n parsed_content = scraper.scrape()\n print(parsed_content)",
language: "python" as const,
},
],
},
},
referenceOutputs: {
next: "rewriteArtifact",
},
};
15 changes: 15 additions & 0 deletions ls.vitest.config.ts
@@ -0,0 +1,15 @@
import { defineConfig } from "vitest/config";
import path from "path";

export default defineConfig({
test: {
include: ["**/*.eval.?(c|m)[jt]s"],
reporters: ["langsmith/vitest/reporter"],
setupFiles: ["dotenv/config"],
},
resolve: {
alias: {
"@": path.resolve(__dirname, "./src"),
},
},
});
25 changes: 15 additions & 10 deletions package.json
@@ -6,12 +6,15 @@
"private": true,
"scripts": {
"dev": "next dev",
"dev:server": "npx @langchain/langgraph-cli dev --port 54367",
"build": "next build",
"start": "next start",
"lint": "next lint",
"format": "prettier --config .prettierrc --write \"src\" \"evals\"",
"eval": "vitest run --config ls.vitest.config.ts",
"eval:highlights": "yarn tsx evals/highlights.ts"
},
"packageManager": "yarn@1.22.22",
"dependencies": {
"@assistant-ui/react": "^0.5.71",
"@assistant-ui/react-markdown": "^0.2.18",
@@ -30,13 +33,14 @@
"@codemirror/lang-rust": "^6.0.1",
"@codemirror/lang-sql": "^6.8.0",
"@codemirror/lang-xml": "^6.1.0",
"@langchain/anthropic": "^0.3.6",
"@langchain/community": "^0.3.9",
"@langchain/core": "^0.3.14",
"@langchain/google-genai": "^0.1.2",
"@langchain/langgraph": "^0.2.18",
"@langchain/langgraph-sdk": "^0.0.17",
"@langchain/openai": "^0.3.11",
"@langchain/anthropic": "^0.3.12",
"@langchain/community": "^0.3.26",
"@langchain/core": "^0.3.33",
"@langchain/google-genai": "^0.1.6",
"@langchain/langgraph": "^0.2.41",
"@langchain/langgraph-sdk": "^0.0.36",
"@langchain/ollama": "^0.1.4",
"@langchain/openai": "^0.3.17",
"@nextjournal/lang-clojure": "^1.0.0",
"@radix-ui/react-avatar": "^1.1.0",
"@radix-ui/react-checkbox": "^1.1.2",
@@ -65,8 +69,8 @@
"eslint-plugin-unused-imports": "^4.1.4",
"framer-motion": "^11.11.9",
"js-cookie": "^3.0.5",
"langchain": "^0.3.5",
"langsmith": "^0.1.61",
"langchain": "^0.3.12",
"langsmith": "^0.3.3",
"lodash": "^4.17.21",
"lucide-react": "^0.441.0",
"next": "14.2.10",
@@ -105,6 +109,7 @@
"tailwindcss": "^3.4.1",
"tsx": "^4.19.1",
"typescript": "^5",
"typescript-eslint": "^8.8.1"
"typescript-eslint": "^8.8.1",
"vitest": "^3.0.4"
}
}
5 changes: 3 additions & 2 deletions src/agent/open-canvas/nodes/generate-artifact/index.ts
@@ -20,7 +20,7 @@ export const generateArtifact = async (
state: typeof OpenCanvasGraphAnnotation.State,
config: LangGraphRunnableConfig
): Promise<OpenCanvasGraphReturnType> => {
const { modelName } = getModelConfig(config);
const { modelName, modelProvider } = getModelConfig(config);
const smallModel = await getModelFromConfig(config, {
temperature: 0.5,
});
@@ -32,7 +32,8 @@
schema: ARTIFACT_TOOL_SCHEMA,
},
],
{ tool_choice: "generate_artifact" }
// Ollama does not support tool choice
{ ...(modelProvider !== "ollama" && { tool_choice: "generate_artifact" }) }
);

const memoriesAsString = await getFormattedReflections(config);
14 changes: 12 additions & 2 deletions src/agent/open-canvas/nodes/rewrite-artifact/update-meta.ts
@@ -1,6 +1,10 @@
import { LangGraphRunnableConfig } from "@langchain/langgraph";
import { OpenCanvasGraphAnnotation } from "../../state";
import { formatArtifactContent, getModelFromConfig } from "@/agent/utils";
import {
formatArtifactContent,
getModelConfig,
getModelFromConfig,
} from "@/agent/utils";
import { getArtifactContent } from "@/contexts/utils";
import { GET_TITLE_TYPE_REWRITE_ARTIFACT } from "../../prompts";
import { OPTIONALLY_UPDATE_ARTIFACT_META_SCHEMA } from "./schemas";
@@ -11,6 +15,7 @@ export async function optionallyUpdateArtifactMeta(
state: typeof OpenCanvasGraphAnnotation.State,
config: LangGraphRunnableConfig
): Promise<ToolCall | undefined> {
const { modelProvider } = getModelConfig(config);
const toolCallingModel = (await getModelFromConfig(config))
.bindTools(
[
Expand All @@ -20,7 +25,12 @@ export async function optionallyUpdateArtifactMeta(
description: "Update the artifact meta information, if necessary.",
},
],
{ tool_choice: "optionallyUpdateArtifactMeta" }
{
// Ollama does not support tool choice
...(modelProvider !== "ollama" && {
tool_choice: "optionallyUpdateArtifactMeta",
}),
}
)
.withConfig({ runName: "optionally_update_artifact_meta" });

3 changes: 3 additions & 0 deletions src/agent/open-canvas/prompts.ts
@@ -20,6 +20,7 @@ Follow these rules and guidelines:
- Do not wrap it in any XML tags you see in this prompt.
- If writing code, do not add inline comments unless the user has specifically requested them. This is very important as we don't want to clutter the code.
${DEFAULT_CODE_PROMPT_RULES}
- Make sure you fulfill ALL aspects of a user's request. For example, if they ask for an output involving an LLM, prefer examples using OpenAI models with LangChain agents.
</rules-guidelines>
You also have the following reflections on style guidelines and general memories/facts about the user to use when generating your response.
@@ -249,6 +250,8 @@ A few of the recent messages in the chat history are:
{recentMessages}
</recent-messages>
If you have previously generated an artifact and the user asks a question that seems actionable, the likely choice is to take that action and rewrite the artifact.
{currentArtifactPrompt}`;

export const FOLLOWUP_ARTIFACT_PROMPT = `You are an AI assistant tasked with generating a followup to the artifact the user just generated.