diff --git a/docs/docs/concepts/human_in_the_loop.md b/docs/docs/concepts/human_in_the_loop.md index ac02d57c2..74b6906c8 100644 --- a/docs/docs/concepts/human_in_the_loop.md +++ b/docs/docs/concepts/human_in_the_loop.md @@ -13,7 +13,7 @@ A **human-in-the-loop** (or "on-the-loop") workflow integrates human input into Key use cases for **human-in-the-loop** workflows in LLM-based applications include: -1. **🛠️ Reviewing tool calls**: Humans can review, edit, or approve tool calls requested by the LLM before tool execution. +1. [**🛠️ Reviewing tool calls**](#review-tool-calls): Humans can review, edit, or approve tool calls requested by the LLM before tool execution. 2. **✅ Validating LLM outputs**: Humans can review, edit, or approve content generated by the LLM. 3. **💡 Providing context**: Enable the LLM to explicitly request human input for clarification or additional details or to support multi-turn conversations. @@ -46,12 +46,13 @@ Please read the [Breakpoints](breakpoints.md) guide for more information on usin ## Design Patterns -There are typically three different things that you might want to do when you interrupt a graph: +There are typically three different **actions** that you can do with a human-in-the-loop workflow: -1. **Approval/Rejection**: Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action. This pattern often involve **routing** the graph based on the human's input. -2. **Editing**: Pause the graph to review and edit the graph state. This is useful for correcting mistakes or updating the state with additional information. This pattern often involves **updating** the state with the human's input. -3. **Input**: Explicitly request human input at a particular step in the graph. 
This is useful for collecting additional information or context to inform the agent's decision-making process or for supporting **multi-turn conversations**. +1. **Approve or Reject**: Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action. This pattern often involves **routing** the graph based on the human's input. +2. **Edit Graph State**: Pause the graph to review and edit the graph state. This is useful for correcting mistakes or updating the state with additional information. This pattern often involves **updating** the state with the human's input. +3. **Get Input**: Explicitly request human input at a particular step in the graph. This is useful for collecting additional information or context to inform the agent's decision-making process or for supporting **multi-turn conversations**. +Below we show different design patterns that can be implemented using these **actions**. ### Approve or Reject @@ -93,6 +94,8 @@ thread_config = {"configurable": {"thread_id": "some_id"}} graph.invoke(Command(resume=True), config=thread_config) ``` +See [how to review tool calls](../../how-tos/human_in_the_loop/review-tool-calls) for a more detailed example. + ### Review & Edit State
@@ -136,7 +139,9 @@ graph.invoke( ) ``` -### Review Tool Call +See [How to wait for user input using interrupt](../../how-tos/human_in_the_loop/wait-user-input) for a more detailed example. + +### Review Tool Calls
![image](img/human_in_the_loop/tool-call-review.png){: style="max-height:400px"} @@ -177,6 +182,8 @@ def human_review_node(state) -> Command[Literal["call_llm", "run_tool"]]: return Command(goto="call_llm", update={"messages": [feedback_msg]}) ``` +See [how to review tool calls](../../how-tos/human_in_the_loop/review-tool-calls) for a more detailed example. + ### Multi-turn conversation
@@ -190,35 +197,72 @@ A **multi-turn conversation** involves multiple back-and-forth interactions betw This design pattern is useful in an LLM application consisting of [multiple agents](./multi_agent.md). One or more agents may need to carry out multi-turn conversations with a human, where the human provides input or feedback at different stages of the conversation. For simplicity, the agent implementation below is illustrated as a single node, but in reality it may be part of a larger graph consisting of multiple nodes and include a conditional edge. -```python -from langgraph.types import interrupt +=== "One human node per agent" -def human_input(state: State): - human_message = interrupt("human_input") - return { - "messages": [ - { - "role": "human", - "content": human_message - } - ] - } + In this pattern, each agent has its own human node for collecting user input. + This can be achieved by either naming the human nodes with unique names (e.g., "human for agent 1", "human for agent 2") or by + using subgraphs where a subgraph contains a human node and an agent node. -def agent(state: State): - # Agent logic - ... + ```python + from langgraph.types import interrupt -graph_builder.add_node("human_input", human_input) -graph_builder.add_edge("human_input", "agent") -graph = graph_builder.compile(checkpointer=checkpointer) + def human_input(state: State): + human_message = interrupt("human_input") + return { + "messages": [ + { + "role": "human", + "content": human_message + } + ] + } -# After running the graph and hitting the breakpoint, the graph will pause. -# Resume it with the human's input. -graph.invoke( - Command(resume="hello!"), - config=thread_config -) -``` + def agent(state: State): + # Agent logic + ... + + graph_builder.add_node("human_input", human_input) + graph_builder.add_edge("human_input", "agent") + graph = graph_builder.compile(checkpointer=checkpointer) + + # After running the graph and hitting the breakpoint, the graph will pause. 
+ # Resume it with the human's input. + graph.invoke( + Command(resume="hello!"), + config=thread_config + ) + ``` + + +=== "Single human node shared across multiple agents" + + In this pattern, a single human node is used to collect user input for multiple agents. The active agent is determined from the state, so after human input is collected, the graph can route to the correct agent. + + ```python + from langgraph.types import interrupt + + def human_node(state: MessagesState) -> Command[Literal["agent_1", "agent_2", ...]]: + """A node for collecting user input.""" + user_input = interrupt(value="Ready for user input.") + + # Determine the **active agent** from the state, so + # we can route to the correct agent after collecting input. + # For example, add a field to the state, track the last active agent, + # or fill in the `name` attribute of AI messages generated by the agents. + active_agent = ... + + return Command( + update={ + "messages": [{ + "role": "human", + "content": user_input, + }] + }, + goto=active_agent, + ) + ``` + +See [how to implement multi-turn conversations](../how-tos/multi-agent-multi-turn-convo.ipynb) for a more detailed example. ## Best practices - [**Conceptual Guide: Persistence**](persistence.md#replay): Read the persistence guide for more context on replaying. - [**Conceptual Guide: Breakpoints**](breakpoints.md): Read the breakpoints guide for more context on breakpoints. - [**How to Guides: Human-in-the-loop**](../how-tos/index.md#human-in-the-loop): Learn how to implement human-in-the-loop workflows in LangGraph. +- [**How to implement multi-turn conversations**](../how-tos/multi-agent-multi-turn-convo.ipynb): Learn how to implement multi-turn conversations in LangGraph. 
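The `interrupt` / `Command(resume=...)` round trip used throughout this page can be illustrated without LangGraph at all. Conceptually, a node that calls `interrupt` behaves like a generator that yields a payload to the caller and is later resumed with the human's answer. Note this is only a mental model for the control flow, not LangGraph internals (LangGraph persists state through a checkpointer and re-runs the node on resume):

```python
def human_review_node():
    """Conceptual stand-in for a node that calls `interrupt`."""
    # Pause: surface a payload for the human, then wait to be resumed.
    answer = yield {"question": "Is this correct?"}
    # Resumed: continue with the human's input.
    return f"routing with answer={answer!r}"


gen = human_review_node()
payload = next(gen)  # the "graph" pauses; payload is surfaced to the human
try:
    gen.send("continue")  # analogue of graph.invoke(Command(resume="continue"))
except StopIteration as stop:
    result = stop.value

print(payload)  # {'question': 'Is this correct?'}
print(result)   # routing with answer='continue'
```

The key property the analogy captures is that execution halts at the `interrupt` call site and the resume value is delivered to exactly that point.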
diff --git a/docs/docs/how-tos/human_in_the_loop/review-tool-calls.ipynb b/docs/docs/how-tos/human_in_the_loop/review-tool-calls.ipynb index 18db683ef..1c36ca285 100644 --- a/docs/docs/how-tos/human_in_the_loop/review-tool-calls.ipynb +++ b/docs/docs/how-tos/human_in_the_loop/review-tool-calls.ipynb @@ -208,17 +208,17 @@ " {\n", " \"question\": \"Is this correct?\",\n", " # Surface tool calls for review\n", - " \"tool_call\": tool_call\n", + " \"tool_call\": tool_call,\n", " }\n", " )\n", - " \n", + "\n", " review_action = human_review[\"action\"]\n", " review_data = human_review.get(\"data\")\n", "\n", " # if approved, call the tool\n", " if review_action == \"continue\":\n", " return Command(goto=\"run_tool\")\n", - " \n", + "\n", " # update the AI message AND call tools\n", " elif review_action == \"update\":\n", " updated_message = {\n", @@ -448,10 +448,10 @@ ], "source": [ "for event in graph.stream(\n", - " # provide value \n", - " Command(resume={\"action\": \"continue\"}), \n", + " # provide value\n", + " Command(resume={\"action\": \"continue\"}),\n", " thread,\n", - " stream_mode=\"updates\"\n", + " stream_mode=\"updates\",\n", "):\n", " print(event)\n", " print(\"\\n\")" @@ -558,9 +558,9 @@ "source": [ "# Let's now continue executing from here\n", "for event in graph.stream(\n", - " Command(resume={\"action\": \"update\", \"data\": {\"city\": \"San Francisco, USA\"}}), \n", - " thread, \n", - " stream_mode=\"updates\"\n", + " Command(resume={\"action\": \"update\", \"data\": {\"city\": \"San Francisco, USA\"}}),\n", + " thread,\n", + " stream_mode=\"updates\",\n", "):\n", " print(event)\n", " print(\"\\n\")" @@ -674,9 +674,14 @@ "# Let's now continue executing from here\n", "for event in graph.stream(\n", " # provide our natural language feedback!\n", - " Command(resume={\"action\": \"feedback\", \"data\": \"User requested changes: use format for location\"}), \n", - " thread, \n", - " stream_mode=\"updates\"\n", + " Command(\n", + " resume={\n", + " 
\"action\": \"feedback\",\n", + " \"data\": \"User requested changes: use format for location\",\n", + " }\n", + " ),\n", + " thread,\n", + " stream_mode=\"updates\",\n", "):\n", " print(event)\n", " print(\"\\n\")" @@ -737,9 +742,7 @@ ], "source": [ "for event in graph.stream(\n", - " Command(resume={\"action\": \"continue\"}), \n", - " thread, \n", - " stream_mode=\"updates\"\n", + " Command(resume={\"action\": \"continue\"}), thread, stream_mode=\"updates\"\n", "):\n", " print(event)\n", " print(\"\\n\")" diff --git a/docs/docs/how-tos/human_in_the_loop/wait-user-input.ipynb b/docs/docs/how-tos/human_in_the_loop/wait-user-input.ipynb index f0f80e20e..6142030cd 100644 --- a/docs/docs/how-tos/human_in_the_loop/wait-user-input.ipynb +++ b/docs/docs/how-tos/human_in_the_loop/wait-user-input.ipynb @@ -245,7 +245,9 @@ ], "source": [ "# Continue the graph execution\n", - "for event in graph.stream(Command(resume=\"go to step 3!\"), thread, stream_mode=\"updates\"):\n", + "for event in graph.stream(\n", + " Command(resume=\"go to step 3!\"), thread, stream_mode=\"updates\"\n", + "):\n", " print(event)\n", " print(\"\\n\")" ] @@ -396,9 +398,7 @@ "def ask_human(state):\n", " tool_call_id = state[\"messages\"][-1].tool_calls[0][\"id\"]\n", " location = interrupt(\"Please provide your location:\")\n", - " tool_message = [\n", - " {\"tool_call_id\": tool_call_id, \"type\": \"tool\", \"content\": location}\n", - " ]\n", + " tool_message = [{\"tool_call_id\": tool_call_id, \"type\": \"tool\", \"content\": location}]\n", " return {\"messages\": tool_message}\n", "\n", "\n", @@ -424,7 +424,7 @@ " # This means these are the edges taken after the `agent` node is called.\n", " \"agent\",\n", " # Next, we pass in the function that will determine which node is called next.\n", - " should_continue\n", + " should_continue,\n", ")\n", "\n", "# We now add a normal edge from `tools` to `agent`.\n", @@ -489,9 +489,16 @@ "\n", "config = {\"configurable\": {\"thread_id\": \"2\"}}\n", "for 
event in app.stream(\n", - " {\"messages\": [(\"user\", \"Use the search tool to ask the user where they are, then look up the weather there\")]},\n", + " {\n", + " \"messages\": [\n", + " (\n", + " \"user\",\n", + " \"Use the search tool to ask the user where they are, then look up the weather there\",\n", + " )\n", + " ]\n", + " },\n", " config,\n", - " stream_mode=\"values\"\n", + " stream_mode=\"values\",\n", "):\n", " event[\"messages\"][-1].pretty_print()" ] @@ -565,11 +572,7 @@ } ], "source": [ - "for event in app.stream(\n", - " Command(resume=\"san francisco\"), \n", - " config, \n", - " stream_mode=\"values\"\n", - "):\n", + "for event in app.stream(Command(resume=\"san francisco\"), config, stream_mode=\"values\"):\n", " event[\"messages\"][-1].pretty_print()" ] } diff --git a/docs/docs/how-tos/multi-agent-multi-turn-convo.ipynb b/docs/docs/how-tos/multi-agent-multi-turn-convo.ipynb index 98cc4ba64..2e6db94d4 100644 --- a/docs/docs/how-tos/multi-agent-multi-turn-convo.ipynb +++ b/docs/docs/how-tos/multi-agent-multi-turn-convo.ipynb @@ -161,12 +161,13 @@ " response = model.with_structured_output(Response).invoke(messages)\n", " goto = response[\"goto\"]\n", " if goto == \"finish\":\n", - " # When the agent is done, we should go to the \n", + " # When the agent is done, we should go to the human node.\n", " goto = \"human\"\n", "\n", " # Handoff to another agent or halt\n", " ai_msg = {\"role\": \"ai\", \"content\": response[\"response\"], \"name\": name}\n", " return Command(goto=goto, update={\"messages\": [ai_msg]})\n", + "\n", " return agent_node\n", "\n", "\n", @@ -204,37 +205,44 @@ " ),\n", ")\n", "\n", - "def human_node(state: MessagesState) -> Command[Literal[\"hotel_advisor\", \"sightseeing_advisor\", \"travel_advisor\", \"human\"]]:\n", + "\n", + "def human_node(\n", + " state: MessagesState,\n", + ") -> Command[\n", + " Literal[\"hotel_advisor\", \"sightseeing_advisor\", \"travel_advisor\", \"human\"]\n", + "]:\n", " \"\"\"A node for collecting user 
input.\"\"\"\n", " user_input = interrupt(value=\"Ready for user input.\")\n", "\n", " active_agent = None\n", "\n", " # This will look up the active agent.\n", - " for message in state['messages'][::-1]:\n", + " for message in state[\"messages\"][::-1]:\n", " if message.name:\n", " active_agent = message.name\n", " break\n", " else:\n", - " raise AssertionError(f'Could not determine the active agent.')\n", + " raise AssertionError(\"Could not determine the active agent.\")\n", - " \n", + "\n", " return Command(\n", " update={\n", - " \"messages\": [{\n", - " \"role\": \"human\",\n", - " \"content\": user_input,\n", - " }]\n", + " \"messages\": [\n", + " {\n", + " \"role\": \"human\",\n", + " \"content\": user_input,\n", + " }\n", + " ]\n", " },\n", " goto=active_agent,\n", " )\n", - " \n", + "\n", "\n", "builder = StateGraph(MessagesState)\n", "builder.add_node(\"travel_advisor\", travel_advisor)\n", "builder.add_node(\"sightseeing_advisor\", sightseeing_advisor)\n", "builder.add_node(\"hotel_advisor\", hotel_advisor)\n", "\n", - "# This adds a node to collet human input, which will route \n", + "# This adds a node to collect human input, which will route\n", "# back to the active agent.\n", "builder.add_node(\"human\", human_node)\n", "\n", @@ -314,40 +322,39 @@ "source": [ "import uuid\n", "\n", - "thread_config = {\n", - " \"configurable\": {\n", - " \"thread_id\": uuid.uuid4()\n", - " }\n", - "}\n", + "thread_config = {\"configurable\": {\"thread_id\": uuid.uuid4()}}\n", "\n", "inputs = [\n", " # 1st round of conversation,\n", - " {\"messages\": [{\n", - " \"role\": \"user\", \n", - " \"content\": \"i wanna go somewhere warm in the caribbean\"\n", - " }]},\n", + " {\n", + " \"messages\": [\n", + " {\"role\": \"user\", \"content\": \"i wanna go somewhere warm in the caribbean\"}\n", + " ]\n", + " },\n", " # Since we're using `interrupt`, we'll need to resume using the Command primitive.\n", " # 2nd round of conversation,\n", - " Command(resume=\"could you 
recommend a nice hotel in one of the areas and tell me which area it is.\"),\n", + " Command(\n", + " resume=\"could you recommend a nice hotel in one of the areas and tell me which area it is.\"\n", + " ),\n", " # 3rd round of conversation,\n", " Command(resume=\"could you recommend something to do near the hotel?\"),\n", "]\n", "\n", "for idx, user_input in enumerate(inputs):\n", " print()\n", - " print(f'--- Conversation Turn {idx + 1} ---')\n", + " print(f\"--- Conversation Turn {idx + 1} ---\")\n", " print()\n", " print(f\"User: {user_input}\")\n", " print()\n", " for update in graph.stream(\n", " user_input,\n", " config=thread_config,\n", - " stream_mode='updates',\n", + " stream_mode=\"updates\",\n", " ):\n", " for node_id, value in update.items():\n", - " if isinstance(value, dict) and value.get('messages', []):\n", - " last_message = value['messages'][-1]\n", - " if last_message['role'] != \"ai\":\n", + " if isinstance(value, dict) and value.get(\"messages\", []):\n", + " last_message = value[\"messages\"][-1]\n", + " if last_message[\"role\"] != \"ai\":\n", " continue\n", " print(f\"{last_message['name']}: {last_message['content']}\")" ]
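The active-agent lookup that `human_node` performs in the notebook above can be exercised in isolation. The sketch below uses plain dicts in place of LangChain message objects, and `find_active_agent` is a hypothetical helper name; the newest-first scan and the `name`-attribute convention mirror the loop in `human_node`:

```python
def find_active_agent(messages: list[dict]) -> str:
    """Return the name of the last agent that authored a message.

    Agent-authored messages carry a "name" key; human messages do not,
    mirroring how `human_node` scans `state["messages"]` in reverse.
    """
    for message in reversed(messages):
        if message.get("name"):
            return message["name"]
    raise AssertionError("Could not determine the active agent.")


history = [
    {"role": "human", "content": "i wanna go somewhere warm in the caribbean"},
    {"role": "ai", "content": "Consider Aruba.", "name": "travel_advisor"},
    {"role": "ai", "content": "Try a beachfront hotel.", "name": "hotel_advisor"},
    {"role": "human", "content": "could you recommend something to do near the hotel?"},
]
print(find_active_agent(history))  # hotel_advisor
```

Because the human message arrives last, the lookup skips it and routes back to `hotel_advisor`, the most recent agent to speak.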