support for llama-3.3 #47

Open
lenadankin opened this issue Dec 8, 2024 · 0 comments

lenadankin commented Dec 8, 2024

Hi,

I tried the new meta-llama/llama-3-3-70b-instruct model with langchain-ibm and saw some odd behaviour when using tool binding: even though I set tool_choice='auto', the model tries to use a tool even in an obvious case where no tool should be called.

    def multiply(a: int, b: int) -> int:
        """Multiply a and b.

        Args:
            a: first int
            b: second int
        """
        return a * b

    tool_calling_model = get_chat_model()  # llama-3-3 ChatWatsonx instance
    llm_with_tools = tool_calling_model.bind_tools([multiply], tool_choice="auto")

    result = llm_with_tools.invoke("Hello")
    print(result)

The reply from the model is:

    content='The provided functions are insufficient for me to answer the question.' additional_kwargs={} response_metadata={'token_usage': ....}
It would be odd for the model to even consider a completely unrelated tool for such a simple prompt, so maybe there's an issue with how tool_choice is handled.
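
For what it's worth, a minimal sanity check along these lines (reusing the get_chat_model() helper and the multiply tool from the snippet above, so the setup is identical) might help narrow it down: the bare model, and the model with the tool bound but no tool_choice set, should both answer a plain greeting normally. If they do, that would point at the tool_choice="auto" path specifically.

    # Sanity-check sketch, reusing get_chat_model() and multiply from above.
    base_model = get_chat_model()

    # 1) The bare model (no tools bound) should answer a greeting normally.
    print(base_model.invoke("Hello").content)

    # 2) Binding the tool without specifying tool_choice should leave the
    #    model free to answer directly instead of reaching for the tool.
    llm_default = base_model.bind_tools([multiply])
    print(llm_default.invoke("Hello").content)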

My code uses the latest versions of langchain-ibm and ibm_watsonx_ai:
langchain-ibm==0.3.5
ibm_watsonx_ai==1.1.25

thanks!
