community: Implement bind_tools for ChatTongyi #20725
Conversation
```py
AIMessageChunk(
    content=content,
    additional_kwargs=additional_kwargs,
    tool_calls=tool_calls,
```
Are these tool calls fully formed in a streaming context? If not, you can specify `tool_call_chunks` on `AIMessageChunk` instead, with `args` a (partial JSON) string. See example here:

```py
tool_call_chunks=tool_call_chunks,
```
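For reference, a minimal sketch of that approach, assuming an OpenAI-style streaming payload (`raw_chunk` and its field values are made up for illustration; `ToolCallChunk` comes from `langchain_core.messages`):

```py
from langchain_core.messages import AIMessageChunk, ToolCallChunk

# Hypothetical raw streaming payload; `arguments` is an incomplete JSON string.
raw_chunk = {"name": "multiply", "arguments": '{"first_int": 5, "sec', "index": 0}

chunk = AIMessageChunk(
    content="",
    tool_call_chunks=[
        ToolCallChunk(
            name=raw_chunk["name"],
            args=raw_chunk["arguments"],  # partial JSON string, not a dict
            id=None,
            index=raw_chunk["index"],  # lets chunks be merged by position
        )
    ],
)
```

When streamed chunks are concatenated with `+`, chunks sharing an `index` are merged, so the partial JSON strings accumulate into complete tool calls on the final message.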
Thanks a lot, I've made some updates addressing this.
```py
return (
    AIMessage(
        content=message_chunk.content,
        tool_calls=message_chunk.additional_kwargs["tool_calls"],
```
This is confusing, because in `convert_dict_to_message` we are parsing tool calls (e.g., in the call to `parse_tool_call`), but here we are passing them in even though they are already parsed.

At this point, is the following true?

```py
if message_chunk.additional_kwargs["tool_calls"]:
    item = message_chunk.additional_kwargs["tool_calls"][0]
    assert isinstance(item["args"], dict)  # not a string
```
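For illustration, here is how `parse_tool_call` behaves on an OpenAI-style raw tool call (the values are made up):

```py
from langchain_core.output_parsers.openai_tools import parse_tool_call

# Raw tool call as it arrives from the API: `arguments` is a JSON string.
raw = {
    "id": "call_abc123",
    "function": {
        "name": "multiply",
        "arguments": '{"first_int": 5, "second_int": 42}',
    },
}

parsed = parse_tool_call(raw, return_id=True)
assert isinstance(parsed["args"], dict)  # parsing turned the string into a dict
```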
Sorry for the delayed response, having a really tough time with my job 😫

This was a mistake; I've rewritten the method, referring to `langchain_openai`'s base module.

I also did some refactoring. The main challenge of supporting tools for Tongyi is that Tongyi does not support using tools with `incremental_output`. That's why I ended up writing a `subtract_client_response` method:
```py
else:
    delta_resp = self.subtract_client_response(resp, prev_resp)
    prev_resp = resp
    yield check_response(delta_resp)
```
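For readers following the thread, a minimal sketch of the subtraction idea, under the simplifying assumption that each streamed response only carries cumulative text (the real `subtract_client_response` also has to reconcile tool calls, token usage, and other fields; `subtract_text` and the response shape are hypothetical):

```py
def subtract_text(resp: dict, prev_resp: dict | None) -> dict:
    """Return a copy of ``resp`` whose text keeps only the newly added suffix."""
    # Hypothetical response shape: {"output": {"text": "..."}}
    prev_text = "" if prev_resp is None else prev_resp["output"]["text"]
    delta = {**resp, "output": {**resp["output"]}}
    delta["output"]["text"] = resp["output"]["text"][len(prev_text):]
    return delta


prev = {"output": {"text": "Hello"}}
cur = {"output": {"text": "Hello world"}}
assert subtract_text(cur, prev)["output"]["text"] == " world"
```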
@cheese-git would this represent a breaking change for streaming generally?
No, it would not.
Before this PR, `incremental_output` is always `True` when streaming (langchain/libs/community/langchain_community/chat_models/tongyi.py, lines 416 to 417 in 64c4722):

```py
if params.get("stream"):
    params["incremental_output"] = True
```
To support tools when streaming, I updated the condition to this:
```py
# According to the Tongyi official docs,
# `incremental_output` with `tools` is not supported yet
if params.get("stream") and not params.get("tools"):
    params["incremental_output"] = True
```
So "streaming without incremental_output
being True
" is a new circumstance.
I use ChatTongyi to build an agent.
## Description

Implement `bind_tools` in ChatTongyi. Usage example:

```py
from langchain_core.tools import tool
from langchain_community.chat_models.tongyi import ChatTongyi


@tool
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int


llm = ChatTongyi(model="qwen-turbo")
llm_with_tools = llm.bind_tools([multiply])
msg = llm_with_tools.invoke("What's 5 times forty two")
print(msg)
```

Streaming is also supported.

## Dependencies

No dependency is required for this change.

---

Co-authored-by: Bagatur <[email protected]>
Co-authored-by: Chester Curme <[email protected]>