# Agent with Human Assistance: Adding a Human in the Loop
Our agent is getting smarter. It can use tools and remember past conversations. But what if it gets stuck, or what if we need to supervise its work before it takes an important action? Real-world applications often require a "human in the loop" – a way for us to guide, correct, or approve the agent's behavior.
Let's teach our agent to pause and ask for our help. This is a crucial step towards building reliable and collaborative AI systems.
## The Power of a Pause: Hardcoded Interrupts
LangGraph has a built-in mechanism for pausing the execution of a graph: interrupts. The simplest way to use them is to tell the graph to pause at a predetermined point in its flow.
What is an Interrupt?
An interrupt is a directive you give to the compiled graph to pause its execution either before or after a specific node runs. When the graph is interrupted, it stops and returns control to you. You can then inspect its current state, modify it if needed, and decide when to resume the process.
This feature is perfect for human oversight. We can let the agent do its automated work (like searching with a tool) and then interrupt it just before it formulates the final answer, giving us a chance to review the information it found.
## Visualizing the New Flow
With a hardcoded interrupt, our agent's process will look a little different. After the tool runs, the graph will pause and wait for our signal to continue.
```mermaid
graph LR
    A[Start] --> B(Chatbot Node: Generates Tool Call);
    B --> C{tools_condition};
    C --> D[Tool Node: Executes Google Search];
    D --> s;
    subgraph s [Human in the Loop]
        direction TB
        E((PAUSE: Human Intervention))
        E -- User Approves --> F(Chatbot Node: Generates Final Answer);
    end
    s --> G[End];
    C --> G;
```
## Implementing a Hardcoded Interrupt
We want our agent to do the following:
- Receive our question.
- Decide to use the Google Search tool.
- Execute the search and get the results.
- Pause and wait for our approval.
- Once approved, use the search results to generate the final answer.
To achieve this, we only need to make a small but powerful change to how we compile our graph. We'll tell it to `interrupt_before` running the `chatbot` node.

Why interrupt before the `chatbot` node? Because the `chatbot` node is responsible for two things: calling tools and generating the final response. By interrupting before this node runs a second time, we catch the agent right after it has come back from the `ToolNode` with fresh data, but just before it synthesizes that data into a final answer.
Here's the complete code. The core logic of our nodes remains untouched. The changes are all in the final section where we compile the graph and interact with it.
Full Code
## Running the Code: A Guided Tour
When you run this script, the interaction will be different. It won't complete in one go.
- **Initial Run & Tool Call:** The script starts, the `chatbot` node runs once and decides to call the `google_search` tool. The `ToolNode` executes the search.
- **The Interrupt:** The graph is about to run the `chatbot` node a second time (to process the tool's output). Because we added `interrupt_before=["chatbot"]`, execution stops here.
- **Inspection:** Your program now has control. Our code prints out the current state of the graph, and you will see the `ToolMessage` containing the search results. This is your chance to review the data the agent is about to use.
- **Human Approval:** The script waits for you to press Enter. This simulates a human approving the process to continue.
- **Resuming Execution:** When you press Enter, we call `graph.invoke(None, config)`. The `None` input tells LangGraph to simply resume from its paused state. The `chatbot` node finally runs, processes the tool result, and generates the final answer.
Here's what the output will look like:
## Advanced HIL: Interrupt and Command
The `interrupt_before` method is powerful, but it's static. The graph always pauses at the same point. What if we want the agent to decide for itself when it needs help?
We can achieve this by giving the agent a special "human review" tool. The agent can then call this tool whenever it's uncertain. Our application will see this special tool call, Interrupt the flow, and ask the user for a Command on how to proceed.
Interrupt (Agent-Initiated)
Instead of the system forcing a pause, the agent itself requests one. It does this by calling a tool specifically designed for human intervention. The tool call might contain a question like, "I've found three documents, which one is most relevant?"
Command (User-Provided)
This is the user's response to the agent's interruption. It's not just a simple "continue" anymore. The user can provide a specific directive, like "Use the second document," or "Ignore the documents and search for something else." This command is then fed back into the agent's state, directly influencing its next action.
This pattern makes the collaboration feel much more natural and intelligent.
### How would this work in code?
While we won't build a full example here, let's look at the key changes you'd need to make.
- **Create a Human Review Tool:** First, you'd define a new tool. Crucially, its function body is empty because the application code will intercept it. Its description is what guides the LLM to use it.
- **Update the Agent's Tools:** You would add this new tool to the list available to the LLM.
- **Create an "Interrupt and Command" Loop:** Your application loop becomes more sophisticated. Instead of just invoking the graph once, you'd use a `for` or `while` loop with `graph.stream`. Inside the loop, you check if the agent tried to use the `human_review` tool.
A More Powerful Collaboration
By mastering both hardcoded interrupts and the more dynamic "Interrupt and Command" pattern, you've moved from building a simple automation to designing a true collaborative partner. You can now build agents that know when to act alone and, more importantly, know when to ask for help.