# Add `low_level` public API #1599

## Conversation
**PR Change Summary**

Added a new public API for low-level functionality in the `pydantic_ai` module, with initial documentation provided.
I think the high level API should be using the low level API code path to make sure things stay in sync.
In the agent graph I see this:
```python
model_request_parameters = ctx.deps.model.customize_request_parameters(model_request_parameters)
model_response, request_usage = await ctx.deps.model.request(
    ctx.state.message_history, model_settings, model_request_parameters
)
```

That `customize_request_parameters` step seems important and missing from the low-level code.
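For concreteness, here's a minimal sketch of what keeping the two paths in sync could look like. The `model_request` name is taken from this PR's docs; the body is an assumption, not the PR's actual implementation:

```python
from __future__ import annotations

from pydantic_ai.messages import ModelMessage
from pydantic_ai.models import Model, ModelRequestParameters
from pydantic_ai.settings import ModelSettings


# Hypothetical body, not the PR's code: the low-level helper running the same
# customize_request_parameters step the agent graph performs before each request.
async def model_request(
    model: Model,
    messages: list[ModelMessage],
    model_settings: ModelSettings | None = None,
    model_request_parameters: ModelRequestParameters | None = None,
):
    model_request_parameters = model_request_parameters or ModelRequestParameters()
    # mirror the agent-graph code path so provider-specific tweaks apply here too
    model_request_parameters = model.customize_request_parameters(model_request_parameters)
    return await model.request(messages, model_settings, model_request_parameters)
```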
I don't agree, the amount of code here is tiny, and there's some (minimal) savings by caching results, but I guess we can do that later if you like.

Oh sorry, I'll look.
```python
    [ModelRequest.user_text_prompt('What is 123 / 456?')],
    model_request_parameters=ModelRequestParameters(
        function_tools=[
            ToolDefinition(
                name=Divide.__name__.lower(),
                description=Divide.__doc__ or '',
                parameters_json_schema=Divide.model_json_schema(),
            )
```
I think we should aim to make this more ergonomic in the not too distant future, e.g.

```python
model_response = await model_request(
    'openai:gpt-4.1-nano',
    'What is 123 / 456?',
    function_tools=[Divide],
)
```
I don't think the current API prevents having this in the future, but it's worth keeping in mind.
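A rough sketch of how such a wrapper could sit on top of the current API, reusing the `ToolDefinition` construction from the example above (`simple_model_request` is a hypothetical name, and accepting pydantic models directly in `function_tools` is exactly the ergonomic change being proposed, not something the PR implements):

```python
from __future__ import annotations

from collections.abc import Sequence

from pydantic import BaseModel
from pydantic_ai.low_level import model_request  # module added in this PR
from pydantic_ai.messages import ModelRequest
from pydantic_ai.models import ModelRequestParameters
from pydantic_ai.tools import ToolDefinition


# Hypothetical convenience wrapper: takes a plain prompt string and pydantic
# models, and builds the messages and ToolDefinitions that model_request needs.
async def simple_model_request(model: str, prompt: str, function_tools: Sequence[type[BaseModel]] = ()):
    return await model_request(
        model,
        [ModelRequest.user_text_prompt(prompt)],
        model_request_parameters=ModelRequestParameters(
            function_tools=[
                ToolDefinition(
                    name=tool.__name__.lower(),
                    description=tool.__doc__ or '',
                    parameters_json_schema=tool.model_json_schema(),
                )
                for tool in function_tools
            ],
        ),
    )
```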
```python
                parameters_json_schema=Divide.model_json_schema(),
            )
        ],
        allow_text_output=True,  # Allow model to either use tools or respond directly
```
Does this help with the example?
""" | ||
``` | ||
|
||
_(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)_ |
Personally I find this sentence contradictory
For most application use cases, the higher-level [`Agent`][pydantic_ai.Agent] API provides a more convenient interface with additional features such as built-in tool execution, structured output parsing, and more.
## OpenTelemetry Instrumentation |
Should this heading mention logfire?
As with [agents][pydantic_ai.Agent] you can enable OpenTelemetry/logfire instrumentation with just a few extra lines |
Why this link to the agents page? Would the agents guide make more sense than the API page?
Are we no longer capitalising logfire?
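For reference, a hedged sketch of the "few extra lines" under discussion. `logfire.configure()` is logfire's standard setup call; the `instrument_pydantic_ai()` hook and the `model_request_sync` import are assumptions on my part, not confirmed details of this PR:

```python
import logfire
from pydantic_ai.low_level import model_request_sync  # sync variant, name assumed
from pydantic_ai.messages import ModelRequest

logfire.configure()  # standard logfire setup
logfire.instrument_pydantic_ai()  # assumed hook for instrumenting these requests

model_response = model_request_sync(
    'openai:gpt-4.1-nano',
    [ModelRequest.user_text_prompt('What is the capital of France?')],
)
print(model_response.parts[0].content)
#> Paris
```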
_(This example is complete, it can be run "as is")_
See [Debugging and Monitoring](logfire.md) for more details. |
Suggested change:

```diff
-See [Debugging and Monitoring](logfire.md) for more details.
+See [Debugging and Monitoring](logfire.md) for more details, including how to instrument with plain OpenTelemetry without logfire.
```
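In support of that suggestion, a minimal sketch of the plain-OpenTelemetry side, using only standard `opentelemetry-sdk` APIs; how the low-level request then picks up the global tracer provider is left to the PR's instrumentation wiring:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Standard OTel SDK setup: spans are printed to stdout rather than sent to logfire.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
```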
```python
#> Paris
```
_(This example is complete, it can be run "as is")_ |
How about a screenshot?
```python
@dataclasses.dataclass
class LowLevelModelResponse(messages.ModelResponse):
    """Subclass of [`ModelResponse`][pydantic_ai.messages.ModelResponse] that includes usage information."""
```
I'm still not sure why the usage isn't included in `ModelResponse`. Is it because people might misinterpret it to mean the usage of the whole agent run? What if it was called `request_usage`? People might actually want to access that info via `agent_run.all_messages()`.
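To illustrate that access pattern, a small sketch assuming `usage` (or `request_usage`) lived directly on `ModelResponse`; the field doesn't exist there today, which is the point of this thread:

```python
from pydantic_ai import Agent
from pydantic_ai.messages import ModelResponse

agent = Agent('openai:gpt-4.1-nano')
result = agent.run_sync('What is 123 / 456?')

# Hypothetical: if per-request usage were a ModelResponse field, every response
# in the run's history would carry the usage of its own request.
for message in result.all_messages():
    if isinstance(message, ModelResponse):
        print(message.usage)
```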
Otherwise, does this name need to be tied to the module? What about `ModelResponseWithUsage`?
Yes, we can probably change it everywhere. Now that I see it's the usage for just that request, I think it makes more sense to just add it to `ModelResponse`.
```python
class LowLevelModelResponse(messages.ModelResponse):
    """Subclass of [`ModelResponse`][pydantic_ai.messages.ModelResponse] that includes usage information."""

    # the default_factor is required here to passify dataclasses since `ModelResponse` has fields with defaults :-(
```
Suggested change:

```diff
-# the default_factor is required here to passify dataclasses since `ModelResponse` has fields with defaults :-(
+# the default_factor is required here to pacify dataclasses since `ModelResponse` has fields with defaults :-(
```
"""Subclass of [`ModelResponse`][pydantic_ai.messages.ModelResponse] that includes usage information.""" | ||
|
||
# the default_factor is required here to passify dataclasses since `ModelResponse` has fields with defaults :-( | ||
usage: usage.Usage = dataclasses.field(default_factory=usage.Usage) |
Suggested change:

```diff
-usage: usage.Usage = dataclasses.field(default_factory=usage.Usage)
+usage: usage.Usage = dataclasses.field(kw_only=True)
```

Would this work?
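For what it's worth, plain `dataclasses` semantics suggest it would, as long as the minimum supported Python is 3.10+ (where `kw_only` was added): a keyword-only field is moved after the inherited default fields in the generated `__init__`, so it can stay required without needing a default. A self-contained sketch of just the dataclass mechanics (the names here are illustrative, not the PR's):

```python
import dataclasses


@dataclasses.dataclass
class Base:
    kind: str = 'base'  # inherited field that already has a default


@dataclasses.dataclass
class Child(Base):
    # Without kw_only=True this would raise:
    #   TypeError: non-default argument 'value' follows default argument
    # With it (Python 3.10+), 'value' stays required but becomes keyword-only.
    value: int = dataclasses.field(kw_only=True)


print(Child(value=1))
#> Child(kind='base', value=1)
```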
TODO: