
Commit

update
hinthornw committed Sep 14, 2024
1 parent ef0d1af commit 609708e
Showing 8 changed files with 60 additions and 248 deletions.
244 changes: 27 additions & 217 deletions README.md
@@ -1,25 +1,22 @@
# LangGraph ReAct Agent Template
# LangGraph Simple Chatbot Template

[![CI](https://github.com/langchain-ai/react-agent/actions/workflows/unit-tests.yml/badge.svg)](https://github.com/langchain-ai/react-agent/actions/workflows/unit-tests.yml)
[![Integration Tests](https://github.com/langchain-ai/react-agent/actions/workflows/integration-tests.yml/badge.svg)](https://github.com/langchain-ai/react-agent/actions/workflows/integration-tests.yml)
[![CI](https://github.com/langchain-ai/new-langgraph-project/actions/workflows/unit-tests.yml/badge.svg)](https://github.com/langchain-ai/new-langgraph-project/actions/workflows/unit-tests.yml)
[![Integration Tests](https://github.com/langchain-ai/new-langgraph-project/actions/workflows/integration-tests.yml/badge.svg)](https://github.com/langchain-ai/new-langgraph-project/actions/workflows/integration-tests.yml)

This template showcases a [ReAct agent](https://arxiv.org/abs/2210.03629) implemented using [LangGraph](https://github.com/langchain-ai/langgraph), designed for [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio). ReAct agents are uncomplicated, prototypical agents that can be flexibly extended to many tools.
This template demonstrates a simple chatbot implemented using [LangGraph](https://github.com/langchain-ai/langgraph), designed for [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio). The chatbot maintains persistent chat memory, allowing for coherent conversations across multiple interactions.

![Graph view in LangGraph studio UI](./static/studio_ui.png)

The core logic, defined in `src/agent/graph.py`, demonstrates a flexible ReAct agent that iteratively reasons about user queries and executes actions, showcasing the power of this approach for complex problem-solving tasks.
The core logic, defined in `src/agent/graph.py`, showcases a straightforward chatbot that responds to user queries while maintaining context from previous messages.

## What it does

The ReAct agent:
The simple chatbot:

1. Takes a user **query** as input
2. Reasons about the query and decides on an action
3. Executes the chosen action using available tools
4. Observes the result of the action
5. Repeats steps 2-4 until it can provide a final answer
1. Takes a user **message** as input
2. Maintains a history of the conversation
3. Generates a response based on the current message and conversation history
4. Updates the conversation history with the new interaction

By default, it's set up with a basic set of tools, but can be easily extended with custom tools to suit various use cases.
This template provides a foundation that can be easily customized and extended to create more complex conversational agents.
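
For orientation, a stripped-down version of this flow might look like the sketch below. It is a simplified stand-in for the actual `src/agent/graph.py`: the real template loads the model name and system prompt from its configuration rather than hardcoding them.

```python
"""Minimal single-node chatbot sketch (simplified from src/agent/graph.py)."""
from typing import Annotated, List

from langchain.chat_models import init_chat_model
from langchain_core.messages import AnyMessage
from langgraph.graph import StateGraph, add_messages
from typing_extensions import TypedDict


class State(TypedDict):
    # `add_messages` appends each turn to the running history instead of overwriting it.
    messages: Annotated[List[AnyMessage], add_messages]


async def call_model(state: State) -> dict:
    """Generate a reply from the full conversation history."""
    model = init_chat_model("claude-3-5-sonnet-20240620", model_provider="anthropic")
    response = await model.ainvoke(state["messages"])
    return {"messages": [response]}  # merged into the history by the reducer


builder = StateGraph(State)
builder.add_node("call_model", call_model)
builder.set_entry_point("call_model")
graph = builder.compile()
```

Invoking `graph.ainvoke({"messages": [("user", "Hello!")]})` returns the updated state, with the assistant's reply appended to `messages`.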

## Getting Started

@@ -33,8 +30,6 @@ cp .env.example .env

2. Define required API keys in your `.env` file.

The primary [search tool](./src/agent/tools.py) [^1] used is [Tavily](https://tavily.com/). Create an API key [here](https://app.tavily.com/sign-in).

<!--
Setup instruction auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
-->
@@ -45,29 +40,34 @@ Set up your LLM API keys. This repo defaults to using [Claude](https://console.a
End setup instructions
-->

3. Customize whatever you'd like in the code.
4. Open the folder LangGraph Studio!
3. Customize the code as needed.
4. Open the folder in LangGraph Studio!

## How to customize

1. **Add new tools**: Extend the agent's capabilities by adding new tools in [tools.py](./src/agent/tools.py). These can be any Python functions that perform specific tasks.
1. **Modify the system prompt**: The default system prompt is defined in [configuration.py](./src/agent/configuration.py). You can update it via configuration in the studio, or at invocation time, to change the chatbot's personality or behavior (see the sketch after these lists).
2. **Select a different model**: We default to Anthropic's Claude 3.5 Sonnet. You can select a compatible chat model using `provider/model-name` via configuration. Example: `openai/gpt-4-turbo-preview`.
3. **Customize the prompt**: We provide a default system prompt in [configuration.py](./src/agent/configuration.py). You can easily update this via configuration in the studio.
3. **Extend the graph**: The core logic of the chatbot is defined in [graph.py](./src/agent/graph.py). You can modify this file to add new nodes, edges, or change the flow of the conversation.

You can also quickly extend this template by:

- Modifying the agent's reasoning process in [graph.py](./src/agent/graph.py).
- Adjusting the ReAct loop or adding additional steps to the agent's decision-making process.
- Adding custom tools or functions to enhance the chatbot's capabilities.
- Implementing additional logic for handling specific types of user queries or tasks.
- Integrating external APIs or databases to provide more dynamic responses.
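
For example, the first two customization options can be exercised at invocation time without touching the code. The sketch below assumes the compiled graph is exported as `graph` from `src/agent/graph.py` and uses the `system_prompt` and `model_name` fields from the configuration schema shown further down; the prompt and model values are purely illustrative.

```python
"""Sketch: overriding the chatbot's configuration per invocation."""
import asyncio

from agent.graph import graph  # assumes the compiled graph is exported under this name


async def main() -> None:
    result = await graph.ainvoke(
        {"messages": [("user", "Hi! Who are you?")]},
        config={
            "configurable": {
                # Keep the {system_time} placeholder: the graph fills it in at runtime.
                "system_prompt": "You are a terse pirate assistant.\n\nSystem time: {system_time}",
                "model_name": "openai/gpt-4o-mini",
            }
        },
    )
    print(result["messages"][-1].content)


asyncio.run(main())
```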

## Development

While iterating on your graph, you can edit past state and rerun your app from past states to debug specific nodes. Local changes will be automatically applied via hot reload. Try adding an interrupt before the agent calls tools, updating the default system message in `src/agent/configuration.py` to take on a persona, or adding additional nodes and edges!
While iterating on your graph, you can edit past state and rerun your app from previous states to debug specific nodes. Local changes will be automatically applied via hot reload. Try experimenting with:

- Modifying the system prompt to give your chatbot a unique personality.
- Adding new nodes to the graph for more complex conversation flows.
- Implementing conditional logic to handle different types of user inputs (one way to combine this with a new node is sketched below).
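
The sketch below illustrates the last two ideas together: `call_model` is the node the template already defines, while the `summarize` node and the length-based routing rule are hypothetical additions shown only for illustration.

```python
"""Sketch: adding a second node and a conditional route to the template's graph."""
from langgraph.graph import END, START, StateGraph

from agent.configuration import Configuration
from agent.graph import call_model  # the node already defined by the template
from agent.state import State


async def summarize(state: State) -> dict:
    """Hypothetical node that condenses a long conversation before replying."""
    return {"messages": [("assistant", "Summary of the conversation so far: ...")]}


def route(state: State) -> str:
    """Send long conversations through the summarizer first."""
    return "summarize" if len(state.messages) > 10 else "call_model"


builder = StateGraph(State, config_schema=Configuration)
builder.add_node("call_model", call_model)
builder.add_node("summarize", summarize)
builder.add_conditional_edges(START, route)
builder.add_edge("summarize", "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()
```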

Follow up requests will be appended to the same thread. You can create an entirely new thread, clearing previous history, using the `+` button in the top right.
Follow-up requests will be appended to the same thread. You can create an entirely new thread, clearing previous history, using the `+` button in the top right.

You can find the latest (under construction) docs on [LangGraph](https://github.com/langchain-ai/langgraph) here, including examples and other references. Using those guides can help you pick the right patterns to adapt here for your use case.
For more advanced features and examples, refer to the [LangGraph documentation](https://github.com/langchain-ai/langgraph). These resources can help you adapt this template for your specific use case and build more sophisticated conversational agents.

LangGraph Studio also integrates with [LangSmith](https://smith.langchain.com/) for more in-depth tracing and collaboration with teammates.
LangGraph Studio also integrates with [LangSmith](https://smith.langchain.com/) for more in-depth tracing and collaboration with teammates, allowing you to analyze and optimize your chatbot's performance.

<!--
Configuration auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
@@ -78,7 +78,7 @@ Configuration auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
"properties": {
"system_prompt": {
"type": "string",
"default": "You are a helpful AI assistant.\n\nSystem time: {system_time}"
"default": "You are a helpful (if not sassy) personal assistant.\n\nSystem time: {system_time}"
},
"model_name": {
"type": "string",
@@ -265,196 +265,6 @@ Configuration auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
"variables": "OPENAI_API_KEY"
}
]
},
"scraper_tool_model_name": {
"type": "string",
"default": "accounts/fireworks/models/firefunction-v2",
"environment": [
{
"value": "anthropic/claude-1.2",
"variables": "ANTHROPIC_API_KEY"
},
{
"value": "anthropic/claude-2.0",
"variables": "ANTHROPIC_API_KEY"
},
{
"value": "anthropic/claude-2.1",
"variables": "ANTHROPIC_API_KEY"
},
{
"value": "anthropic/claude-3-5-sonnet-20240620",
"variables": "ANTHROPIC_API_KEY"
},
{
"value": "anthropic/claude-3-haiku-20240307",
"variables": "ANTHROPIC_API_KEY"
},
{
"value": "anthropic/claude-3-opus-20240229",
"variables": "ANTHROPIC_API_KEY"
},
{
"value": "anthropic/claude-3-sonnet-20240229",
"variables": "ANTHROPIC_API_KEY"
},
{
"value": "anthropic/claude-instant-1.2",
"variables": "ANTHROPIC_API_KEY"
},
{
"value": "fireworks/gemma2-9b-it",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/llama-v3-70b-instruct",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/llama-v3-70b-instruct-hf",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/llama-v3-8b-instruct",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/llama-v3-8b-instruct-hf",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/llama-v3p1-405b-instruct",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/llama-v3p1-405b-instruct-long",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/llama-v3p1-70b-instruct",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/llama-v3p1-8b-instruct",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/mixtral-8x22b-instruct",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/mixtral-8x7b-instruct",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/mixtral-8x7b-instruct-hf",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/mythomax-l2-13b",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/phi-3-vision-128k-instruct",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/phi-3p5-vision-instruct",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/starcoder-16b",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "fireworks/yi-large",
"variables": "FIREWORKS_API_KEY"
},
{
"value": "openai/gpt-3.5-turbo",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-3.5-turbo-0125",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-3.5-turbo-0301",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-3.5-turbo-0613",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-3.5-turbo-1106",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-3.5-turbo-16k",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-3.5-turbo-16k-0613",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-4",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-4-0125-preview",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-4-0314",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-4-0613",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-4-1106-preview",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-4-32k",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-4-32k-0314",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-4-32k-0613",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-4-turbo",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-4-turbo-preview",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-4-vision-preview",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-4o",
"variables": "OPENAI_API_KEY"
},
{
"value": "openai/gpt-4o-mini",
"variables": "OPENAI_API_KEY"
}
]
},
"max_search_results": {
"type": "integer",
"default": 10
}
}
}
1 change: 1 addition & 0 deletions pyproject.toml
@@ -9,6 +9,7 @@ readme = "README.md"
license = { text = "MIT" }
requires-python = ">=3.9"
dependencies = [
"langchain>=0.3.0",
"langchain-anthropic>=0.2.0",
"langgraph>=0.2.6",
"python-dotenv>=1.0.1",
2 changes: 1 addition & 1 deletion src/agent/configuration.py
@@ -27,7 +27,7 @@ class Configuration:
"kind": "llm",
}
},
] = "claude-3-5-sonnet-20240620"
] = "anthropic/claude-3-5-sonnet-20240620"
"""The name of the language model to use for our chatbot."""

@classmethod
29 changes: 8 additions & 21 deletions src/agent/graph.py
@@ -6,18 +6,17 @@
from datetime import datetime, timezone
from typing import Any, Dict, List

import anthropic
from agent.configuration import Configuration
from agent.state import State
from langchain_core.runnables import RunnableConfig
from langgraph.graph import StateGraph

from agent.configuration import Configuration
from agent.state import State
from agent.utils import load_chat_model

# Define the function that calls the model


async def call_model(
state: State, config: RunnableConfig
) -> Dict[str, List[Dict[str, Any]]]:
async def call_model(state: State, config: RunnableConfig) -> Dict[str, List[Any]]:
"""Call the LLM powering our "agent".
This function prepares the prompt, initializes the model, and processes the response.
@@ -33,23 +32,11 @@ async def call_model(
system_prompt = configuration.system_prompt.format(
system_time=datetime.now(tz=timezone.utc).isoformat()
)
toks = []
async with anthropic.AsyncAnthropic() as client:
async with client.messages.stream(
model=configuration.model_name,
max_tokens=1024,
system=system_prompt,
messages=state.messages,
) as stream:
async for text in stream.text_stream:
toks.append(text)
model = load_chat_model(configuration.model_name)
res = await model.ainvoke([("system", system_prompt), *state.messages])

# Return the model's response as a list to be added to existing messages
return {
"messages": [
{"role": "assistant", "content": [{"type": "text", "text": "".join(toks)}]}
]
}
return {"messages": [res]}


# Define a new graph
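
The new `call_model` above delegates model construction to `load_chat_model`, imported from `agent.utils` but not shown in this diff. Given the `provider/model-name` convention used in `configuration.py` and the new `langchain>=0.3.0` dependency, a plausible implementation is a thin wrapper around LangChain's `init_chat_model`; the sketch below is an assumption, not the actual file.

```python
"""Sketch of src/agent/utils.py: resolve a `provider/model` string to a chat model."""
from langchain.chat_models import init_chat_model
from langchain_core.language_models import BaseChatModel


def load_chat_model(fully_specified_name: str) -> BaseChatModel:
    """Load a chat model such as `anthropic/claude-3-5-sonnet-20240620`."""
    provider, model = fully_specified_name.split("/", maxsplit=1)
    return init_chat_model(model, model_provider=provider)
```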
7 changes: 4 additions & 3 deletions src/agent/state.py
@@ -2,10 +2,11 @@

from __future__ import annotations

import operator
from dataclasses import dataclass, field
from typing import Sequence
from typing import List

from langchain_core.messages import AnyMessage
from langgraph.graph import add_messages
from typing_extensions import Annotated


@@ -16,7 +17,7 @@ class State:
This class is used to define the initial state and structure of incoming data.
"""

messages: Annotated[Sequence[dict], operator.add] = field(default_factory=list)
messages: Annotated[List[AnyMessage], add_messages] = field(default_factory=list)
"""
Messages tracking the primary execution state of the agent.
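
The switch from `operator.add` over plain `dict`s to the `add_messages` reducer over `AnyMessage` objects means incoming messages are merged by ID rather than blindly appended, which makes it possible to update an existing message in place. A quick illustration of the reducer's behavior (a sketch calling the reducer directly):

```python
"""Sketch: how the `add_messages` reducer merges message lists."""
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph import add_messages

history = [HumanMessage(content="Hi", id="1"), AIMessage(content="Hello!", id="2")]

# A message with a new ID is appended to the history...
updated = add_messages(history, [HumanMessage(content="What can you do?", id="3")])
print(len(updated))  # 3

# ...while re-sending an existing ID replaces that message in place.
edited = add_messages(updated, [HumanMessage(content="Hi there", id="1")])
print(edited[0].content)  # "Hi there"
```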
