Merge pull request #2711 from langchain-ai/eugene/more_changes
more changes
eyurtsev authored Dec 11, 2024
2 parents d468655 + c2556aa commit 8f1db66
Showing 4 changed files with 142 additions and 84 deletions.
107 changes: 76 additions & 31 deletions docs/docs/concepts/human_in_the_loop.md
@@ -13,7 +13,7 @@ A **human-in-the-loop** (or "on-the-loop") workflow integrates human input into

Key use cases for **human-in-the-loop** workflows in LLM-based applications include:

1. **🛠️ Reviewing tool calls**: Humans can review, edit, or approve tool calls requested by the LLM before tool execution.
1. [**🛠️ Reviewing tool calls**](#review-tool-calls): Humans can review, edit, or approve tool calls requested by the LLM before tool execution.
2. **✅ Validating LLM outputs**: Humans can review, edit, or approve content generated by the LLM.
3. **💡 Providing context**: Enable the LLM to explicitly request human input for clarification or additional details or to support multi-turn conversations.

@@ -46,12 +46,13 @@ Please read the [Breakpoints](breakpoints.md) guide for more information on usin

## Design Patterns

There are typically three different things that you might want to do when you interrupt a graph:
There are typically three different **actions** that you can take in a human-in-the-loop workflow:

1. **Approval/Rejection**: Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action. This pattern often involve **routing** the graph based on the human's input.
2. **Editing**: Pause the graph to review and edit the graph state. This is useful for correcting mistakes or updating the state with additional information. This pattern often involves **updating** the state with the human's input.
3. **Input**: Explicitly request human input at a particular step in the graph. This is useful for collecting additional information or context to inform the agent's decision-making process or for supporting **multi-turn conversations**.
1. **Approve or Reject**: Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action. This pattern often involves **routing** the graph based on the human's input.
2. **Edit Graph State**: Pause the graph to review and edit the graph state. This is useful for correcting mistakes or updating the state with additional information. This pattern often involves **updating** the state with the human's input.
3. **Get Input**: Explicitly request human input at a particular step in the graph. This is useful for collecting additional information or context to inform the agent's decision-making process or for supporting **multi-turn conversations**.

Below we show different design patterns that can be implemented using these **actions**.

### Approve or Reject

@@ -93,6 +94,8 @@ thread_config = {"configurable": {"thread_id": "some_id"}}
graph.invoke(Command(resume=True), config=thread_config)
```

See [how to review tool calls](../../how-tos/human_in_the_loop/review-tool-calls) for a more detailed example.
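The resume mechanics behind this pattern can be illustrated without LangGraph using a plain Python generator, where `yield` plays the role of `interrupt(...)` and `send` plays the role of `Command(resume=...)`. This is a hedged, library-free sketch; the action and node names are illustrative, not part of any API:

```python
def approval_flow(action):
    """Sketch of the approve/reject pattern: pause before a critical
    step and resume with the human's decision."""
    # Pausing at `yield` models `interrupt(...)`: execution stops here
    # until the caller resumes it with the human's decision.
    approved = yield {"question": "Is it safe to proceed?", "action": action}
    if approved:
        yield f"executed: {action}"          # route to the critical step
    else:
        yield "rejected: taking alternative path"

flow = approval_flow("call_payment_api")
payload = next(flow)       # the "graph" pauses; payload is surfaced to the human
result = flow.send(True)   # the human approves; execution resumes
print(result)              # executed: call_payment_api
```

The key design point carries over to the real thing: the paused step surfaces enough context for a decision, and the resume value alone determines which branch runs next.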

### Review & Edit State

<figure markdown="1">
@@ -136,7 +139,9 @@ graph.invoke(
)
```

### Review Tool Call
See [How to wait for user input using interrupt](../../how-tos/human_in_the_loop/wait-user-input) for a more detailed example.
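The same generator analogy sketches the review-and-edit pattern: the paused step surfaces the current state, and the resume payload carries the human's corrections, which are merged back in. Illustrative only; in LangGraph this corresponds to `interrupt` plus a state update:

```python
def edit_state_flow(state):
    """Sketch of review & edit: pause, surface the state, merge human edits."""
    # The yield models `interrupt({...})`; the value sent back models the
    # human's edits passed via `Command(resume=...)`.
    edits = yield {"task": "Review the state and correct any mistakes", "state": state}
    yield {**state, **edits}   # the human's edits win on conflicting keys

flow = edit_state_flow({"city": "San Franciscco", "units": "metric"})
next(flow)                                   # pause for review
fixed = flow.send({"city": "San Francisco"}) # resume with the correction
print(fixed)                                 # {'city': 'San Francisco', 'units': 'metric'}
```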

### Review Tool Calls

<figure markdown="1">
![image](img/human_in_the_loop/tool-call-review.png){: style="max-height:400px"}
@@ -177,6 +182,8 @@ def human_review_node(state) -> Command[Literal["call_llm", "run_tool"]]:
return Command(goto="call_llm", update={"messages": [feedback_msg]})
```

See [how to review tool calls](../../how-tos/human_in_the_loop/review-tool-calls) for a more detailed example.
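A library-free sketch of the routing logic for the three review outcomes (`continue`, `update`, `feedback`): the action names mirror the how-to above; everything else here is illustrative, not a fixed API:

```python
def route_review(tool_call, review):
    """Decide where to go next based on the human's review of a tool call."""
    action = review["action"]
    data = review.get("data")
    if action == "continue":            # approved as-is: run the tool
        return ("run_tool", tool_call)
    if action == "update":              # the human edited the arguments
        return ("run_tool", {**tool_call, "args": data})
    if action == "feedback":            # natural-language feedback goes back to the LLM
        return ("call_llm", data)
    raise ValueError(f"Unknown review action: {action}")

call = {"name": "get_weather", "args": {"city": "sf"}}
print(route_review(call, {"action": "update", "data": {"city": "San Francisco, USA"}}))
# ('run_tool', {'name': 'get_weather', 'args': {'city': 'San Francisco, USA'}})
```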

### Multi-turn conversation

<figure markdown="1">
@@ -190,35 +197,72 @@ A **multi-turn conversation** involves multiple back-and-forth interactions betw
This design pattern is useful in an LLM application consisting of [multiple agents](./multi_agent.md). One or more agents may need to carry out multi-turn conversations with a human, where the human provides input or feedback at different stages of the conversation. For simplicity, the agent implementation below is illustrated as a single node, but in reality
it may be part of a larger graph consisting of multiple nodes and may include a conditional edge.

```python
from langgraph.types import interrupt
=== "One human node per agent"

def human_input(state: State):
human_message = interrupt("human_input")
return {
"messages": [
{
"role": "human",
"content": human_message
}
]
}
In this pattern, each agent has its own human node for collecting user input.
This can be achieved either by giving the human nodes unique names (e.g., "human for agent 1", "human for agent 2") or by
using subgraphs, where a subgraph contains a human node and an agent node.

def agent(state: State):
# Agent logic
...
```python
from langgraph.types import interrupt

graph_builder.add_node("human_input", human_input)
graph_builder.add_edge("human_input", "agent")
graph = graph_builder.compile(checkpointer=checkpointer)
def human_input(state: State):
human_message = interrupt("human_input")
return {
"messages": [
{
"role": "human",
"content": human_message
}
]
}

# After running the graph and hitting the breakpoint, the graph will pause.
# Resume it with the human's input.
graph.invoke(
Command(resume="hello!"),
config=thread_config
)
```
def agent(state: State):
# Agent logic
...

graph_builder.add_node("human_input", human_input)
graph_builder.add_edge("human_input", "agent")
graph = graph_builder.compile(checkpointer=checkpointer)

# After running the graph and hitting the breakpoint, the graph will pause.
# Resume it with the human's input.
graph.invoke(
Command(resume="hello!"),
config=thread_config
)
```


=== "Single human node shared across multiple agents"

In this pattern, a single human node is used to collect user input for multiple agents. The active agent is determined from the state, so after human input is collected, the graph can route to the correct agent.

```python
from langgraph.types import interrupt

def human_node(state: MessagesState) -> Command[Literal["agent_1", "agent_2", ...]]:
"""A node for collecting user input."""
user_input = interrupt(value="Ready for user input.")

# Determine the **active agent** from the state, so
# we can route to the correct agent after collecting input.
# For example, add a field to the state, track the last active agent,
# or fill in the `name` attribute of AI messages generated by the agents.
active_agent = ...

return Command(
update={
"messages": [{
"role": "human",
"content": user_input,
}]
},
goto=active_agent,
)
```

See [how to implement multi-turn conversations](../how-tos/multi-agent-multi-turn-convo.ipynb) for a more detailed example.
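As the comment in the shared-human-node pattern suggests, one way to find the active agent is to scan for the most recent AI message that carries a `name` attribute. A hedged, library-free helper sketch; the message shape and field names are assumptions:

```python
def last_active_agent(messages, default="agent_1"):
    """Return the name of the most recently active agent, assuming each
    AI message carries the generating agent's name in its `name` field."""
    # Walk backwards: the newest AI message with a name identifies the agent.
    for msg in reversed(messages):
        if msg.get("role") == "ai" and msg.get("name"):
            return msg["name"]
    return default   # no agent has spoken yet

history = [
    {"role": "human", "content": "hi"},
    {"role": "ai", "name": "agent_2", "content": "hello!"},
    {"role": "human", "content": "book a flight"},
]
print(last_active_agent(history))   # agent_2
```

The returned name can then be used directly as the `goto` target of the `Command` in the shared human node.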

## Best practices

@@ -232,3 +276,4 @@ graph.invoke(
- [**Conceptual Guide: Persistence**](persistence.md#replay): Read the persistence guide for more context on replaying.
- [**Conceptual Guide: Breakpoints**](breakpoints.md): Read the breakpoints guide for more context on breakpoints.
- [**How to Guides: Human-in-the-loop**](../how-tos/index.md#human-in-the-loop): Learn how to implement human-in-the-loop workflows in LangGraph.
- [**How to implement multi-turn conversations**](../how-tos/multi-agent-multi-turn-convo.ipynb): Learn how to implement multi-turn conversations in LangGraph.
33 changes: 18 additions & 15 deletions docs/docs/how-tos/human_in_the_loop/review-tool-calls.ipynb
@@ -208,17 +208,17 @@
" {\n",
" \"question\": \"Is this correct?\",\n",
" # Surface tool calls for review\n",
" \"tool_call\": tool_call\n",
" \"tool_call\": tool_call,\n",
" }\n",
" )\n",
" \n",
"\n",
" review_action = human_review[\"action\"]\n",
" review_data = human_review.get(\"data\")\n",
"\n",
" # if approved, call the tool\n",
" if review_action == \"continue\":\n",
" return Command(goto=\"run_tool\")\n",
" \n",
"\n",
" # update the AI message AND call tools\n",
" elif review_action == \"update\":\n",
" updated_message = {\n",
@@ -448,10 +448,10 @@
],
"source": [
"for event in graph.stream(\n",
" # provide value \n",
" Command(resume={\"action\": \"continue\"}), \n",
" # provide value\n",
" Command(resume={\"action\": \"continue\"}),\n",
" thread,\n",
" stream_mode=\"updates\"\n",
" stream_mode=\"updates\",\n",
"):\n",
" print(event)\n",
" print(\"\\n\")"
@@ -558,9 +558,9 @@
"source": [
"# Let's now continue executing from here\n",
"for event in graph.stream(\n",
" Command(resume={\"action\": \"update\", \"data\": {\"city\": \"San Francisco, USA\"}}), \n",
" thread, \n",
" stream_mode=\"updates\"\n",
" Command(resume={\"action\": \"update\", \"data\": {\"city\": \"San Francisco, USA\"}}),\n",
" thread,\n",
" stream_mode=\"updates\",\n",
"):\n",
" print(event)\n",
" print(\"\\n\")"
@@ -674,9 +674,14 @@
"# Let's now continue executing from here\n",
"for event in graph.stream(\n",
" # provide our natural language feedback!\n",
" Command(resume={\"action\": \"feedback\", \"data\": \"User requested changes: use <city, country> format for location\"}), \n",
" thread, \n",
" stream_mode=\"updates\"\n",
" Command(\n",
" resume={\n",
" \"action\": \"feedback\",\n",
" \"data\": \"User requested changes: use <city, country> format for location\",\n",
" }\n",
" ),\n",
" thread,\n",
" stream_mode=\"updates\",\n",
"):\n",
" print(event)\n",
" print(\"\\n\")"
@@ -737,9 +742,7 @@
],
"source": [
"for event in graph.stream(\n",
" Command(resume={\"action\": \"continue\"}), \n",
" thread, \n",
" stream_mode=\"updates\"\n",
" Command(resume={\"action\": \"continue\"}), thread, stream_mode=\"updates\"\n",
"):\n",
" print(event)\n",
" print(\"\\n\")"
27 changes: 15 additions & 12 deletions docs/docs/how-tos/human_in_the_loop/wait-user-input.ipynb
@@ -245,7 +245,9 @@
],
"source": [
"# Continue the graph execution\n",
"for event in graph.stream(Command(resume=\"go to step 3!\"), thread, stream_mode=\"updates\"):\n",
"for event in graph.stream(\n",
" Command(resume=\"go to step 3!\"), thread, stream_mode=\"updates\"\n",
"):\n",
" print(event)\n",
" print(\"\\n\")"
]
@@ -396,9 +398,7 @@
"def ask_human(state):\n",
" tool_call_id = state[\"messages\"][-1].tool_calls[0][\"id\"]\n",
" location = interrupt(\"Please provide your location:\")\n",
" tool_message = [\n",
" {\"tool_call_id\": tool_call_id, \"type\": \"tool\", \"content\": location}\n",
" ]\n",
" tool_message = [{\"tool_call_id\": tool_call_id, \"type\": \"tool\", \"content\": location}]\n",
" return {\"messages\": tool_message}\n",
"\n",
"\n",
@@ -424,7 +424,7 @@
" # This means these are the edges taken after the `agent` node is called.\n",
" \"agent\",\n",
" # Next, we pass in the function that will determine which node is called next.\n",
" should_continue\n",
" should_continue,\n",
")\n",
"\n",
"# We now add a normal edge from `tools` to `agent`.\n",
@@ -489,9 +489,16 @@
"\n",
"config = {\"configurable\": {\"thread_id\": \"2\"}}\n",
"for event in app.stream(\n",
" {\"messages\": [(\"user\", \"Use the search tool to ask the user where they are, then look up the weather there\")]},\n",
" {\n",
" \"messages\": [\n",
" (\n",
" \"user\",\n",
" \"Use the search tool to ask the user where they are, then look up the weather there\",\n",
" )\n",
" ]\n",
" },\n",
" config,\n",
" stream_mode=\"values\"\n",
" stream_mode=\"values\",\n",
"):\n",
" event[\"messages\"][-1].pretty_print()"
]
@@ -565,11 +572,7 @@
}
],
"source": [
"for event in app.stream(\n",
" Command(resume=\"san francisco\"), \n",
" config, \n",
" stream_mode=\"values\"\n",
"):\n",
"for event in app.stream(Command(resume=\"san francisco\"), config, stream_mode=\"values\"):\n",
" event[\"messages\"][-1].pretty_print()"
]
}