Replies: 2 comments 4 replies
-
Here is what I would do:

```python
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import PromptTemplate

template = """Please provide a JSON response in the following format to the question provided, mark the json as ```json:
{format_instructions}
---
Question:
.....
"""

parser = JsonOutputParser(pydantic_object=VehicleListing)
prompt = PromptTemplate(
    template=template,
    input_variables=["page_content", "images"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
chain = prompt | self.chat
```

So we are asking the model to format the results, but not restricting its full response:

```python
from langchain_core.utils.json import parse_json_markdown

output_from_claude = chain.invoke({"foo": "goo"})
message_json = parse_json_markdown(output_from_claude.content)
message = VehicleListing.parse_obj(message_json)
```

Now I have the parsed message and the full output from the LLM.
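The extraction step that `parse_json_markdown` performs can be sketched without LangChain. This is a minimal, illustrative stand-in (not the library's actual implementation): pull the ```json fenced block out of the raw completion and parse it.

```python
import json
import re


def parse_json_markdown_sketch(text: str) -> dict:
    """Extract and parse a ```json fenced block from an LLM reply.

    Hypothetical stand-in for LangChain's parse_json_markdown helper:
    if no fenced block is found, fall back to parsing the whole text.
    """
    match = re.search(r"```json\s*(.*?)```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)


# Example reply where the model chats before emitting the JSON block:
reply = 'Sure! Here is the listing:\n```json\n{"make": "Toyota", "year": 2020}\n```'
listing = parse_json_markdown_sketch(reply)
print(listing["make"])  # Toyota
```

This is why the approach tolerates extra prose around the JSON: only the fenced block is handed to the Pydantic model for validation.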
1 reply
-
Good idea, thanks! I'll do it that way. It'd be a nicer interface if all responses returned a Message object as standard.
3 replies
-
Hi! Hopefully a simple question. If I have a model defined like so:

and I want to see the `response_metadata` when I invoke it, how can I do that? Normally it's on the message object, but since `Plan` is a custom model it only contains the fields I specified.
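For what it's worth, one pattern in recent LangChain versions is `model.with_structured_output(Plan, include_raw=True)`, which returns both the raw message (carrying `response_metadata`) and the parsed object instead of the parsed object alone. A dependency-free sketch of that result shape (all names here are illustrative assumptions, not the question's actual code):

```python
from dataclasses import dataclass, field


@dataclass
class Plan:
    # Stand-in for the custom structured-output model from the question.
    steps: list


@dataclass
class StructuredResult:
    # Mirrors the idea of include_raw=True: keep the parsed model
    # alongside the raw message's metadata instead of discarding it.
    parsed: Plan
    response_metadata: dict = field(default_factory=dict)


# Simulating what an include_raw-style invocation would hand back:
result = StructuredResult(
    parsed=Plan(steps=["outline", "draft"]),
    response_metadata={"model_name": "claude-3-sonnet", "stop_reason": "end_turn"},
)
print(result.parsed.steps)                      # ['outline', 'draft']
print(result.response_metadata["stop_reason"])  # end_turn
```

The design point is simply that parsing into a custom model and keeping the provider metadata are separate concerns, so the return value needs room for both.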