
Feat: claude 3 tool calling #70

Merged
19 commits merged into langchain-ai:main on Jun 12, 2024

Conversation

laithalsaadoon
Contributor

fixes: #65

Checks whether model_id is a Claude 3 model, and if so calls invoke_model with the tools properly included in the request body.
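For readers following along, the dispatch described above might look roughly like this. This is a minimal sketch: the function name `build_invoke_body`, the default values, and the exact body fields are illustrative, not the PR's actual implementation.

```python
import json


def build_invoke_body(model_id, messages, tools=None):
    """Assemble an invoke_model request body; tool definitions are only
    attached for Claude 3 models (illustrative field names)."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": messages,
    }
    # Tools are only supported by Claude 3, so gate on the model id.
    if tools and "claude-3" in model_id:
        body["tools"] = tools
    return json.dumps(body)
```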

@laithalsaadoon laithalsaadoon marked this pull request as ready for review June 10, 2024 20:35
@3coins
Collaborator

3coins commented Jun 10, 2024

cc @baskaryan

@laithalsaadoon
Contributor Author

I upgraded poetry.lock to fix the lint errors, but maybe we want that as a separate PR? Just FYI.

@laithalsaadoon
Contributor Author

@3coins @baskaryan can you please have a look, run the workflows and let me know what is needed to merge? TY

@laithalsaadoon
Contributor Author

Need to add a test case for agent executor and resolve:


```
KeyError                                  Traceback (most recent call last)
Cell In[14], line 3
      1 # Create an agent executor by passing in the agent and tools
      2 agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
----> 3 agent_executor.invoke({"input": "*********"})

File ~/opt/anaconda3/envs/stores_langchain/lib/python3.10/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
    164 except BaseException as e:
    165     run_manager.on_chain_error(e)
--> 166     raise e
    167 run_manager.on_chain_end(outputs)
    169 if include_run_info:

File ~/opt/anaconda3/envs/stores_langchain/lib/python3.10/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
    153 try:
    154     self._validate_inputs(inputs)
    155 outputs = (
--> 156     self._call(inputs, run_manager=run_manager)
    157     if new_arg_supported
    158     else self._call(inputs)
    159 )
    161 final_outputs: Dict[str, Any] = self.prep_outputs(
    162     inputs, outputs, return_only_outputs
    163 )
...
    267 content: Union[str, List]
    269 if not isinstance(message.content, str):
    270     # parse as dict

KeyError: 'AIMessageChunk'
```

Contributor

@ccurme ccurme left a comment


LGTM modulo @baskaryan's comments + the agent issue above.

Implemented standard tests in #72 and ran them on this branch. Can confirm this fixes most of the integration tests (and passes all tests associated with tool calling).

libs/aws/langchain_aws/chat_models/bedrock.py Outdated Show resolved Hide resolved
@ccurme
Contributor

ccurme commented Jun 11, 2024

> Need to add a test case for agent executor and resolve:
> KeyError: 'AIMessageChunk' (full traceback above)

Good catch. Looks like this was fixed in the legacy version but not updated here: langchain-ai/langchain#20801
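For context, one illustrative guess at this failure mode (not the actual langchain-aws code): a role lookup keyed by the message class name that predates the streaming AIMessageChunk type, so streamed chunks raise KeyError. Adding the chunk type, mapped to the same role as AIMessage, is the kind of fix referenced above.

```python
# Hypothetical sketch of the bug: a role table missing the streaming
# chunk type. The table name and entries are illustrative.
_MESSAGE_ROLE = {
    "HumanMessage": "user",
    "AIMessage": "assistant",
    "AIMessageChunk": "assistant",  # streamed chunks share the assistant role
    "ToolMessage": "user",
}


def role_for(message):
    # Look up the Anthropic role by the message's class name; a missing
    # entry here reproduces the KeyError from the traceback.
    return _MESSAGE_ROLE[type(message).__name__]
```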

@laithalsaadoon
Contributor Author

> > Need to add a test case for agent executor and resolve:
> > KeyError: 'AIMessageChunk' (full traceback above)
>
> Good catch. Looks like this was fixed in the legacy version but not updated here: langchain-ai/langchain#20801

Fixed it. TY for the pointer

@laithalsaadoon
Contributor Author

FYI - I was unable to add a test for AgentExecutor. It seems we don't import langchain in this repo, I'm sure for "reasons" to which I am ignorant 🙂, but I tested it on a local build.

@laithalsaadoon
Contributor Author

@3coins @ccurme @baskaryan checking in

@baskaryan baskaryan merged commit 99409d8 into langchain-ai:main Jun 12, 2024
12 checks passed
@ssg-kstewart

@laithalsaadoon With this merged, I am seeing tool calls as expected despite my issue stating otherwise--I must have messed up the pip install when trying your branch initially.

I'm still seeing a problem with ordering of human/assistant messages when using a multi-agent system as I assume that tool calls are being tied to assistant responses. Do you have any suggestions to account for these scenarios?

```
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Your API request included an `assistant` message in the final position, which would pre-fill the `assistant` response. When using tools, pre-filling the `assistant` response is not supported.
```

@laithalsaadoon
Contributor Author

> @laithalsaadoon With this merged, I am seeing tool calls as expected despite my issue stating otherwise--I must have messed up the pip install when trying your branch initially.
>
> I'm still seeing a problem with ordering of human/assistant messages when using a multi-agent system as I assume that tool calls are being tied to assistant responses. Do you have any suggestions to account for these scenarios?
>
> ```
> ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Your API request included an `assistant` message in the final position, which would pre-fill the `assistant` response. When using tools, pre-filling the `assistant` response is not supported.
> ```

I'll try to repro in the next couple of days. My main motivation for getting this PR in was for multi-agent systems :)

@ssg-kstewart

Thank you--FWIW the lineup of messages upon entering the LLM run at the next step in the graph (as seen from `_generate`) is `[<class 'langchain_core.messages.system.SystemMessage'>, <class 'langchain_core.messages.human.HumanMessage'>, <class 'langchain_core.messages.ai.AIMessage'>, <class 'langchain_core.messages.tool.ToolMessage'>, <class 'langchain_core.messages.ai.AIMessage'>]`

@laithalsaadoon
Contributor Author

laithalsaadoon commented Jun 24, 2024

> Thank you--FWIW the lineup of messages upon entering the LLM run at the next step in the graph (as seen from `_generate`) is `[<class 'langchain_core.messages.system.SystemMessage'>, <class 'langchain_core.messages.human.HumanMessage'>, <class 'langchain_core.messages.ai.AIMessage'>, <class 'langchain_core.messages.tool.ToolMessage'>, <class 'langchain_core.messages.ai.AIMessage'>]`

@ssg-kstewart I can't speak for other LLM providers or whether/how they accept consecutive AIMessages without alternating, but the change below worked for the multi-agent flow:

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage


def agent_node(state, agent, name):
    # Claude rejects a trailing `assistant` turn when tools are in the
    # request, so re-type a trailing AIMessage from the previous agent
    # as human input before invoking.
    last_message = state["messages"][-1]
    if isinstance(last_message, AIMessage):
        state["messages"][-1] = HumanMessage(content=last_message.content)
    result = agent.invoke(state)
    # We convert the agent output into a format that is suitable to append to the global state
    if isinstance(result, ToolMessage):
        pass
    else:
        result = AIMessage(**result.dict(exclude={"type", "name"}), name=name)
    return {
        "messages": [result],
        # Since we have a strict workflow, we can
        # track the sender so we know who to pass to next.
        "sender": name,
    }
```
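The key step is the re-typing of a trailing assistant message. A standalone version of just that step, using stand-in dataclasses so it runs without langchain installed, could look like this (`normalize_last_message` is an illustrative name, not part of the PR):

```python
from dataclasses import dataclass


# Stand-ins for the langchain_core message classes, so this sketch is
# self-contained.
@dataclass
class AIMessage:
    content: str


@dataclass
class HumanMessage:
    content: str


def normalize_last_message(messages):
    """Re-type a trailing assistant message as user input so the next
    Bedrock call does not end on an `assistant` turn, which the API
    rejects when tools are present."""
    if messages and isinstance(messages[-1], AIMessage):
        return messages[:-1] + [HumanMessage(content=messages[-1].content)]
    return messages
```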

@ssg-kstewart

@laithalsaadoon Thanks, this gets me moving. A funny consequence of this workaround is that the LLM gets into a loop of self-compliments, thinking that the Human message is, of course, actual user input.

@andymli

andymli commented Jul 30, 2024

Hi, I noticed that ValueError is not correctly raised when a non-claude-3 model is called here:

```python
if "claude-3" not in self._get_model():
    ValueError(
        f"Structured output is not supported for model {self._get_model()}"
    )
```

This has caused some confusion for me so I have submitted a small PR here: #126.
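For illustration, the corrected check simply adds `raise`. This sketch wraps it as a free function with a stand-in `model_id` parameter rather than the actual method on the chat model class:

```python
def check_structured_output_support(model_id):
    # The quoted snippet constructs the ValueError but never raises it;
    # adding `raise` makes the unsupported-model case actually fail.
    if "claude-3" not in model_id:
        raise ValueError(
            f"Structured output is not supported for model {model_id}"
        )
```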

Successfully merging this pull request may close these issues.

Official Anthropic function calling support?
6 participants