Feat: claude 3 tool calling #70
Conversation
cc @baskaryan
I upgraded
@3coins @baskaryan can you please have a look, run the workflows, and let me know what is needed to merge? TY
Need to add a test case for the agent executor and resolve:

```
KeyError Traceback (most recent call last)
File ~/opt/anaconda3/envs/stores_langchain/lib/python3.10/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
File ~/opt/anaconda3/envs/stores_langchain/lib/python3.10/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
KeyError: 'AIMessageChunk'
```
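For reference, a minimal sketch of the kind of agent executor test being requested here; the model ID, tool, and assertion are assumptions on my part, not part of this PR:

```python
# Sketch only: exercises ChatBedrock tool calling through AgentExecutor.
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_aws import ChatBedrock


@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It is sunny in {city}."


def test_agent_executor_tool_calling():
    llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")  # assumed model ID
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ])
    agent = create_tool_calling_agent(llm, [get_weather], prompt)
    executor = AgentExecutor(agent=agent, tools=[get_weather])
    result = executor.invoke({"input": "What is the weather in Paris?"})
    assert "sunny" in result["output"].lower()
```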
LGTM modulo @baskaryan's comments + the agent issue above.
Implemented standard tests in #72 and ran them on this branch. Can confirm this fixes most of the integration tests (and passes all tests associated with tool calling).
Good catch. Looks like this was fixed in the legacy version but not updated here: langchain-ai/langchain#20801
Fixed it. TY for the pointer
FYI - I was unable to add a test for
@3coins @ccurme @baskaryan checking in |
@laithalsaadoon With this merged, I am seeing tool calls as expected despite my issue stating otherwise--I must have messed up the pip install when trying your branch initially. I'm still seeing a problem with the ordering of human/assistant messages when using a multi-agent system, as I assume tool calls are being tied to assistant responses. Do you have any suggestions to account for these scenarios?
I'll try to repro in the next couple of days. My main motivation for getting this PR in was for multi-agent systems :)
Thank you--FWIW, the lineup of messages upon entering the LLM run at the next step in the graph (as seen from _generate) is
@ssg-kstewart I can't speak for other LLM providers and if/how they accept AIMessages without alternating roles, but the change below worked for the multi-agent flow:

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage


def agent_node(state, agent, name):
    # Claude requires alternating human/assistant turns, so demote a trailing
    # AIMessage from the previous agent to a HumanMessage before invoking.
    last_message = state["messages"][-1]
    if isinstance(last_message, AIMessage):
        state["messages"][-1] = HumanMessage(content=last_message.content)
    result = agent.invoke(state)
    # We convert the agent output into a format that is suitable to append
    # to the global state.
    if isinstance(result, ToolMessage):
        pass
    else:
        result = AIMessage(**result.dict(exclude={"type", "name"}), name=name)
    return {
        "messages": [result],
        # Since we have a strict workflow, we can track the sender
        # so we know who to pass to next.
        "sender": name,
    }
```
@laithalsaadoon Thanks, this gets me moving. A funny consequence of this workaround is that the LLM gets into a loop of self-compliments, thinking that the Human response is, of course, actual user input.
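One possible refinement (my own assumption, not something proposed in this thread): label the demoted message with the sending agent's name so the model is less likely to mistake it for real user input. A minimal sketch:

```python
# Hypothetical tweak to the workaround above: tag the converted message with
# the originating agent's name instead of passing it through unlabeled.
from langchain_core.messages import AIMessage, HumanMessage


def demote_last_ai_message(state: dict, sender: str) -> None:
    last_message = state["messages"][-1]
    if isinstance(last_message, AIMessage):
        state["messages"][-1] = HumanMessage(
            content=f"[{sender}] {last_message.content}",
            name=sender,  # messages support an optional speaker name
        )
```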
Hi, I noticed something in langchain-aws/libs/aws/langchain_aws/chat_models/bedrock.py (lines 786 to 789 at 767fa5a).
This has caused some confusion for me, so I have submitted a small PR here: #126.
fixes: #65

Checks whether model_id is a claude-3 model, and if so uses invoke_model with the tools included in the request input body.
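For context, a rough sketch of the behaviour the description refers to, calling the Bedrock runtime API directly; the function name and body fields are illustrative assumptions, not the PR's actual code:

```python
import json

import boto3

client = boto3.client("bedrock-runtime")


def invoke_with_optional_tools(model_id: str, messages: list, tools: list) -> dict:
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": messages,
    }
    # Only include the Anthropic "tools" field for Claude 3 models.
    if "claude-3" in model_id:
        body["tools"] = tools
    response = client.invoke_model(modelId=model_id, body=json.dumps(body))
    return json.loads(response["body"].read())
```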