Description
The user should be able to specify a set of tools, or even supply custom tools not provided in our codebase. Currently, due to the nature of the namespaces involved, the "contact human agent" tool does all of its productive work in the LangChain callback manager. This tightly couples the agents with their tools, and it is this coupling that needs to be broken.
Possible Solution
One option is to deviate from the LangChain/Pydantic style of defining agent function definitions, but I'd like to explore alternatives that still let us take advantage of Pydantic's serialization/deserialization and LangChain's clean integration with Pydantic.
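One possible shape for the decoupling, sketched with stdlib dataclasses standing in for Pydantic models: the tool declares only the fields the model fills in, and a thin dispatcher injects session metadata before calling the tool. All names here (`ContactHumanArgs`, `ToolDispatcher`, the field names beyond `session_id`/`client_id`) are hypothetical, not taken from the codebase.

```python
# Hypothetical sketch: tools define only their input fields; session
# metadata (session_id, client_id) is attached by a dispatcher, so the
# agent, the tool schema, and the side effects stay decoupled.
from dataclasses import dataclass, asdict
from typing import Any, Callable, Dict


@dataclass
class ContactHumanArgs:
    # Fields the model fills from the conversation; in the real project
    # this would be a Pydantic BaseModel so LangChain can (de)serialize it.
    reason: str
    urgency: str = "normal"


@dataclass
class ToolContext:
    # Metadata the agent never sees; supplied by the caller, not the LLM.
    session_id: str
    client_id: str


class ToolDispatcher:
    """Routes tool calls and injects context, instead of hiding the work
    in a callback manager."""

    def __init__(self, context: ToolContext):
        self.context = context
        self._tools: Dict[str, Callable[[dict], Any]] = {}

    def register(self, name: str, fn: Callable[[dict], Any]) -> None:
        self._tools[name] = fn

    def invoke(self, name: str, args: Any) -> Any:
        # Merge the LLM-supplied arguments with the ambient metadata;
        # the tool receives both but defines neither.
        payload = {**asdict(args), **asdict(self.context)}
        return self._tools[name](payload)


def contact_human_agent(payload: dict) -> dict:
    # In the real project this would populate an ORM object; here we
    # just return the enriched payload.
    return payload
```

With this shape, user-supplied custom tools only need to register a Pydantic-style args model and a callable; the dispatcher owns the metadata, so no tool has to be "aware" of `session_id` or `client_id`.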
Additional context
Near the top of gpt_agent.py you can see a LangChain callback manager that contains all of the logic for the "call a human agent" tool. It lives there because we don't want the agent to be "aware" of any metadata like session_id or client_id; it should simply be able to invoke a tool with information from the conversation.
To work around this, more complicated objects are passed along in the callback manager, and the tool function only serves to define the fields of an incoming object (which is then used to populate an ORM object with metadata).
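For readers unfamiliar with the codebase, the current pattern described above looks roughly like this. This is a hedged illustration, not the actual gpt_agent.py code: the class and field names beyond `session_id`/`client_id` are invented, and a plain class stands in for the LangChain callback manager.

```python
# Sketch of the current (coupled) arrangement: the tool function only
# declares an input shape, while a callback handler closes over the
# session metadata and performs all the productive work.
class CallAHumanCallback:
    def __init__(self, session_id: str, client_id: str):
        # Metadata is captured here so the agent never has to know it.
        self.session_id = session_id
        self.client_id = client_id
        self.saved = []  # stands in for ORM persistence

    def on_tool_end(self, tool_name: str, tool_output: dict) -> None:
        # All side effects happen in the callback: the tool's output is
        # combined with the metadata to populate a record (an ORM object
        # in the real codebase).
        if tool_name == "call_a_human_agent":
            record = {
                "session_id": self.session_id,
                "client_id": self.client_id,
                **tool_output,
            }
            self.saved.append(record)


def call_a_human_agent(reason: str) -> dict:
    # The tool itself only defines the incoming fields and has no side
    # effects; that asymmetry is the coupling the issue wants to break.
    return {"reason": reason}
```

Because the tool is inert without its matching callback, neither piece can be swapped out independently, which is what blocks user-defined custom tools.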