-
Reason: Streamline the use of structured output in LangChain to make it more consistent with other models.
-
Hey! I believe this is already implemented on the base class.
-
Feature request
We would like to implement Ollama.with_structured_output() to directly support Ollama models capable of producing structured output. For LLMs that do not support structured output, the method should raise an error indicating that structured output is not supported.
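A minimal, dependency-free sketch of the behaviour being requested. The FakeLLM class, the supports_structured_output flag, and the standalone with_structured_output() helper below are illustrative stand-ins, not LangChain's actual base-class implementation:

```python
import json
from dataclasses import dataclass


@dataclass
class Joke:
    """Target schema the model output should be parsed into."""
    setup: str
    punchline: str


class FakeLLM:
    """Stand-in for a chat model; real code would call an Ollama model."""
    supports_structured_output = True  # illustrative capability flag

    def invoke(self, prompt: str) -> str:
        # A structured-output-capable model returns valid JSON.
        return (
            '{"setup": "Why did the llama cross the road?", '
            '"punchline": "To prove it was no chicken."}'
        )


def with_structured_output(llm, schema):
    """Wrap `llm` so its raw JSON output is parsed into `schema`.

    Models without structured-output support raise immediately,
    mirroring the error behaviour described in the feature request.
    """
    if not getattr(llm, "supports_structured_output", False):
        raise NotImplementedError(
            f"{type(llm).__name__} does not support structured output"
        )

    def run(prompt: str):
        return schema(**json.loads(llm.invoke(prompt)))

    return run


structured = with_structured_output(FakeLLM(), Joke)
joke = structured("Tell me a joke")
```

In a real integration the wrapper would also inject the schema into the request (e.g. via a JSON-mode or format option) rather than assuming the model already emits matching JSON; this sketch only shows the parse-or-raise contract.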
Motivation
In LangChain, the OpenAI class has a with_structured_output() method that allows supported models to return structured outputs such as JSON. However, this functionality is currently missing from the Ollama class.
Proposal (If applicable)
If the issue is accepted, my team and I would like to work on it.