ChatOpenAI: bind_tools not callable after with_structured_output #28848

p3nnst8r opened this issue Dec 20, 2024 · 1 comment
Labels: investigate (Flagged for investigation.) · Ɑ: models (Related to LLMs or chat model modules)

Comments

p3nnst8r commented Dec 20, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from langchain.tools import StructuredTool

class ResponseModel(BaseModel):
    a_value: str = Field(description="This doesn't matter much")

def a_func(val: int):
    return True

a_tool = StructuredTool.from_function(
    func=a_func,
    name="A func",
    description="A function you will need",
)

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
structured_llm = llm.with_structured_output(ResponseModel)
llm_with_tools = structured_llm.bind_tools([a_tool])  # <-- not available: raises AttributeError

Error Message and Stack Trace (if applicable)

'RunnableSequence' object has no attribute 'bind_tools'

Description

I am attempting to retrieve structured output in JSON format (to pass via an API to a frontend), and I also need to call out to tools. I cannot figure out how to combine the two, or there is an issue with the code to do so.

System Info

System Information

OS: Darwin
OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64
Python Version: 3.13.1 (main, Dec 3 2024, 17:59:52) [Clang 16.0.0 (clang-1600.0.26.4)]

Package Information

langchain_core: 0.3.28
langchain: 0.3.13
langchain_community: 0.3.13
langsmith: 0.2.4
langchain_experimental: 0.3.4
langchain_openai: 0.2.14
langchain_text_splitters: 0.3.4

Optional packages not installed

langserve

Other Dependencies

aiohttp: 3.10.10
async-timeout: Installed. No version info available.
dataclasses-json: 0.6.7
httpx: 0.27.2
httpx-sse: 0.4.0
jsonpatch: 1.33
langsmith-pyo3: Installed. No version info available.
numpy: 1.26.4
openai: 1.58.1
orjson: 3.10.9
packaging: 24.1
pydantic: 2.9.2
pydantic-settings: 2.6.0
PyYAML: 6.0.2
requests: 2.32.3
requests-toolbelt: 1.0.0
SQLAlchemy: 2.0.36
tenacity: 9.0.0
tiktoken: 0.8.0
typing-extensions: 4.12.2

keenborder786 (Contributor) commented:

@p3nnst8r I'm not sure what you are trying to achieve, but essentially with_structured_output returns a RunnableSequence composed of two Runnables:
RunnableBinding (the ChatOpenAI instance with the given schema bound as an additional tool) -> OutputParser.

Calling bind_tools on that RunnableSequence is what causes the error, because a RunnableSequence has no bind_tools method.
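
For illustration, a minimal sketch that inspects the sequence (assumes OPENAI_API_KEY is set; the exact step and parser class names may vary across langchain-core versions):

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class ResponseModel(BaseModel):
    a_value: str = Field(description="This doesn't matter much")

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
structured_llm = llm.with_structured_output(ResponseModel)

print(type(structured_llm).__name__)                    # RunnableSequence
print([type(step).__name__ for step in structured_llm.steps])
# e.g. ['RunnableBinding', 'PydanticToolsParser']
print(type(structured_llm.steps[0].bound).__name__)     # ChatOpenAI -- the model the schema was bound to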

This is not recommended, but if you want to use additional tools in the same RunnableSequence, you can do the following:

structured_llm.steps[0] = structured_llm.steps[0].bound.bind_tools([a_tool, ResponseModel])

However, I still don't understand why you want to bind additional tools here, since with_structured_output only exists to make the LLM return its result in a specific format. The recommended approach is to instantiate a separate LLM runnable with the desired tools.
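
A minimal sketch of that recommendation (assuming the same ResponseModel and a_func as above, OPENAI_API_KEY in the environment, and a tool name without spaces, which the OpenAI API requires): bind the tools to one ChatOpenAI runnable for the tool-calling turn, and use a separate with_structured_output runnable to format the final answer.

from langchain_core.tools import StructuredTool
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class ResponseModel(BaseModel):
    a_value: str = Field(description="This doesn't matter much")

def a_func(val: int) -> bool:
    return True

a_tool = StructuredTool.from_function(
    func=a_func,
    name="a_func",  # OpenAI rejects tool names containing spaces
    description="A function you will need",
)

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# One runnable handles the tool-calling turn...
llm_with_tools = llm.bind_tools([a_tool])

# ...and a separate runnable formats the final answer into the schema.
structured_llm = llm.with_structured_output(ResponseModel)

ai_msg = llm_with_tools.invoke("Call a_func with val=3.")
tool_results = [a_tool.invoke(call["args"]) for call in ai_msg.tool_calls]

final = structured_llm.invoke(f"Report the tool results {tool_results} in a_value.")
print(final)  # ResponseModel(a_value='...')

In a real agent loop you would feed the tool results back as ToolMessages before asking for the structured answer; keeping the two runnables separate simply avoids clobbering the schema binding that with_structured_output sets up.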
