The provided code snippet works fine if the LLM (claude-3-5-sonnet) decides to use only one function, but fails if the LLM calls two functions in parallel.
The same snippet also works fine with the gpt-4o model for parallel function calls.
```python
import json
from langchain_core.tools import tool
from langchain_core.utils.function_calling import convert_to_openai_tool
from openai import AzureOpenAI

query = "What is 3 * 12 and 6 * 2?"  # parallel FC use-case <- failing
# query = "What is 3 * 12?"  # a regular FC <- works fine

@tool
def add(a: int, b: int) -> int:
    """Add two integers.

    Args:
        a: First integer
        b: Second integer
    """
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers.

    Args:
        a: First integer
        b: Second integer
    """
    return a * b

tools = [add, multiply]
available_functions = {t.name: t.func for t in tools}
tools_openai_schema = [convert_to_openai_tool(t) for t in tools]

messages = [{"role": "user", "content": query}]

client = AzureOpenAI(
    api_key="XXX",
    api_version="2023-03-15-preview",
    azure_endpoint="XXX"
)
model_name = "anthropic.claude-v3-5-sonnet"

completion = client.chat.completions.create(
    model=model_name,
    messages=messages,
    tools=tools_openai_schema
)

response_message = completion.choices[0].message
tool_calls = response_message.tool_calls
assert tool_calls
response_message.content = None  # instead of empty string (raises error) convert that to None
messages.append(response_message.to_dict())

# Parallel function calling
# https://platform.openai.com/docs/guides/function-calling/configuring-parallel-function-calling
for tool_call in tool_calls:
    function_name = tool_call.function.name
    function_to_call = available_functions[function_name]
    function_args = json.loads(tool_call.function.arguments)
    function_response = function_to_call(**function_args)
    message_to_append = {
        "tool_call_id": tool_call.id,
        "role": "tool",
        "name": function_name,
        "content": str(function_response),
    }
    messages.append(message_to_append)  # extend conversation with function response, as in docs

completion = client.chat.completions.create(
    model=model_name,
    messages=messages,
    tools=tools_openai_schema
)
print(completion.choices[0].message.content)
```
I'm getting this error:

```
Error code: 400 - {'error': {'message': 'Error code: 400 - {\'message\': "messages.2: the following `tool_use` ids were not found in `tool_result` blocks: {\'...\'}"}', 'type': 'invalid_request_error', 'code': '400'}}
```
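For context, a minimal sketch of what the error is complaining about (this is my reading of Anthropic's Messages API format, not taken from the adapter's code): Claude expects every `tool_use` id emitted in the assistant turn to be answered by a matching `tool_result` block, and all of those blocks grouped into a single following user message, whereas the OpenAI format appends one separate `"tool"` message per call. The ids (`call_1`, `call_2`) are placeholders.

```python
# OpenAI-style follow-up: one "tool" message per tool call (what the
# snippet above appends).
openai_style = [
    {"role": "tool", "tool_call_id": "call_1", "name": "multiply", "content": "36"},
    {"role": "tool", "tool_call_id": "call_2", "name": "multiply", "content": "12"},
]

# Anthropic-native follow-up: all tool_result blocks grouped in ONE
# user message, each echoing its tool_use id.
anthropic_style = {
    "role": "user",
    "content": [
        {"type": "tool_result", "tool_use_id": "call_1", "content": "36"},
        {"type": "tool_result", "tool_use_id": "call_2", "content": "12"},
    ],
}

def unanswered_tool_use_ids(tool_use_ids, tool_results):
    """Return the tool_use ids that lack a matching tool_result block --
    the set the 400 error reports as 'not found'."""
    answered = {r["tool_use_id"] for r in tool_results}
    return set(tool_use_ids) - answered
```

If the adapter translates each OpenAI-style `"tool"` message into its own user turn, only the first `tool_use` id gets answered in `messages.2`, which would produce exactly this 400.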
It seems I tested an outdated version of the adapter; the issue was resolved some time ago.
If Sonnet v2 does not use functions in parallel and instead calls them sequentially, one by one, but you would like them issued in parallel (in a single LLM call), just add something like "Use function calls in parallel where possible" to the system message.
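A minimal sketch of that nudge (the exact system-prompt wording is an assumption; adjust it for your deployment):

```python
# Hypothetical helper: prepend a system message that steers the model
# toward batching its tool calls. The prompt text is a suggestion, not
# an official API parameter.
def with_parallel_hint(user_query: str) -> list:
    """Build a messages list that asks the model to call tools in parallel."""
    return [
        {"role": "system",
         "content": "Use function calls in parallel where possible."},
        {"role": "user", "content": user_query},
    ]

messages = with_parallel_hint("What is 3 * 12 and 6 * 2?")
```

The rest of the snippet stays the same; only the initial `messages` list changes.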