Replies: 2 comments 7 replies
-
I am interested in helping with the project, but I only learned about it recently. I started studying LLMs a few weeks ago.
-
That's actually not the case. The JSON agent does not put the entire JSON/OpenAPI spec into the LLM prompt; that would be expensive and slow, as you pointed out. Instead, the agent iteratively sifts through the OpenAPI spec to find the relevant information: it retrieves the keys at a given level, then extracts the right values, repeating this step by step.
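To make the "retrieve keys, then extract values" loop concrete, here is a minimal self-contained sketch of the idea (the toy spec and the two helper functions are illustrative stand-ins, not LangChain's actual tool implementations): the agent only ever sees the keys at the current path or the value at one path, never the whole document.

```python
# Toy stand-in for an OpenAPI spec loaded as a Python dict.
spec = {
    "paths": {
        "/pets": {"get": {"summary": "List all pets"}},
        "/pets/{id}": {"get": {"summary": "Get a pet by id"}},
    }
}

def list_keys(data, path):
    """Tool 1: show only the keys at `path` (a list of dict keys), not the values."""
    node = data
    for key in path:
        node = node[key]
    return list(node) if isinstance(node, dict) else []

def get_value(data, path):
    """Tool 2: read the single value at `path` once the agent has drilled down."""
    node = data
    for key in path:
        node = node[key]
    return node

# A real agent would let the LLM pick each step; here the walk is hard-coded.
print(list_keys(spec, []))                                     # ['paths']
print(list_keys(spec, ["paths"]))                              # ['/pets', '/pets/{id}']
print(get_value(spec, ["paths", "/pets", "get", "summary"]))   # List all pets
```

Each tool call returns a small slice of the spec, so the prompt only ever contains the handful of keys or the one value needed for the next decision.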
-
@agola11 In this part of the code https://github.com/hwchase17/langchain/blob/910da8518f561199d6f3edf0503e4117fdfa9ed6/langchain/agents/agent_toolkits/openapi/toolkit.py#L49-L53 you are loading the entire JSON API spec and putting it into the prompt.
Doing that with OpenAI as the LLM is expensive. What do you think about using embeddings to filter down to just the endpoints related to the user's question?
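A rough sketch of that suggestion: embed each endpoint's description, embed the user's question, and keep only the closest endpoints for the prompt. A real version would call a proper embedding model (e.g. an embeddings API); the bag-of-words vectors here are just a runnable stand-in, and the endpoint list is made up for illustration.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a word-count vector. Swap in a real embedding model in practice."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical endpoint descriptions pulled from a spec.
endpoints = {
    "GET /pets": "List all pets in the store",
    "POST /pets": "Create a new pet",
    "GET /orders": "List all orders placed by customers",
}

def top_endpoints(question, k=1):
    """Rank endpoints by similarity to the question; only the top k go into the prompt."""
    q = embed(question)
    ranked = sorted(endpoints, key=lambda e: cosine(q, embed(endpoints[e])), reverse=True)
    return ranked[:k]

print(top_endpoints("show me all the pets"))  # ['GET /pets']
```

Only the selected endpoints' spec fragments would then be included in the prompt, which keeps token usage roughly constant as the spec grows.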