Support structured output in ChatDatabricks #28
Conversation
Signed-off-by: B-Step62 <[email protected]>
@harupy @BenWilson2 Could you review this change, which mostly copies what OAI does? I wanted to close the FR before releasing the next version by the end of this week.
    )
else:
    output_parser = JsonOutputKeyToolsParser(
        key_name=tool_name, first_tool_only=True
    )
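To make the `first_tool_only=True` semantics under discussion concrete, here is a simplified, hypothetical re-implementation of what that flag does conceptually: return the arguments of the first tool call whose name matches `key_name` and ignore any later calls. The real parser lives in `langchain_core`; this sketch only illustrates the behavior being questioned below.

```python
def parse_first_tool(tool_calls: list[dict], key_name: str):
    """Return the args of the first tool call named `key_name`, or None.

    Simplified stand-in for JsonOutputKeyToolsParser(first_tool_only=True);
    any second matching call is silently dropped.
    """
    for call in tool_calls:
        if call.get("name") == key_name:
            return call.get("args")
    return None


calls = [
    {"name": "extract_person", "args": {"name": "Ada", "age": 36}},
    {"name": "extract_person", "args": {"name": "Alan", "age": 41}},
]
print(parse_first_tool(calls, "extract_person"))  # only the first match
```

Note how the second `extract_person` call is dropped, which is exactly the "are we inadvertently losing tool requests?" concern raised in this thread.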
In the OAI implementation, they're doing the same thing?
IIRC their tool requests are sequential, but it will be good to double check.
I think the rationale is that it is guaranteed that only one tool matches the tool_name we pass here.
SGTM! Let's merge :)
overall LGTM - let's verify that the tool response requests from OpenAI are always provided as sequential iterations (I BELIEVE that is how their APIs work, but let's be sure we're not inadvertently dropping multiple tool requests; for context, Anthropic adheres to a similar response structure but their ToolRequestFormat is a collection instead of sequential calls)
Reading through the code closely, the more direct reason is that the LLM can only return a single tool call for structured output, because of how we (and OAI/Anthropic) set up the request. It's good to understand the rationale behind it (rather than copy-pasting without thinking). Great callout, Ben!
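The point above can be sketched with a hypothetical request builder. When a schema is bound as a tool for structured output, implementations typically force the model to call that specific tool, which is why exactly one matching call comes back and why `first_tool_only=True` is safe. The payload shape and helper name below are illustrative, not the exact FMAPI/OpenAI wire format.

```python
def build_structured_output_request(schema_name: str, schema: dict) -> dict:
    """Illustrative request body that forces a single named tool call."""
    return {
        "tools": [
            {
                "type": "function",
                "function": {"name": schema_name, "parameters": schema},
            }
        ],
        # Forcing the named tool is what makes first_tool_only safe: the
        # model must respond with a call to exactly this function.
        "tool_choice": {"type": "function", "function": {"name": schema_name}},
    }


req = build_structured_output_request(
    "extract_person",
    {"type": "object", "properties": {"name": {"type": "string"}}},
)
print(req["tool_choice"])
```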
Resolve #19
Implement the with_structured_output() method in ChatDatabricks to support a structured output feature compatible with OpenAI. The code mostly copies the ChatOpenAI implementation with slight adjustments, e.g. FMAPI doesn't support the parallel_tool_calls parameter.
Test
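For context on the flow this PR implements, here is a hedged end-to-end sketch using a stub model in place of ChatDatabricks so it is self-contained. The function and stub names are hypothetical; the real method binds the schema as a tool on the model and pipes the response through a key-based tool-call parser, as shown in the diff above.

```python
def stub_model_invoke(prompt: str) -> dict:
    """Stand-in for a chat-model call that returns a forced tool call."""
    return {
        "tool_calls": [{"name": "Person", "args": {"name": "Grace", "age": 45}}]
    }


def with_structured_output_sketch(invoke, key_name: str):
    """Compose model invocation with first-tool-only extraction."""

    def run(prompt: str):
        response = invoke(prompt)
        for call in response["tool_calls"]:
            if call["name"] == key_name:
                return call["args"]  # first_tool_only=True semantics
        return None

    return run


extract = with_structured_output_sketch(stub_model_invoke, "Person")
result = extract("Grace Hopper joined the Navy in 1943...")
print(result)
```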