Add low_level public API #1599

Open · wants to merge 6 commits into main

Conversation

@samuelcolvin (Member) commented Apr 25, 2025

TODO:

  • fix tests
  • tracing
  • docs

hyperlint-ai bot (Contributor) commented Apr 25, 2025

PR Change Summary

Added a new public API for low-level functionality in the pydantic_ai module, with initial documentation provided.

  • Introduced the low_level public API in the pydantic_ai module
  • Added initial documentation for the new API
  • Enhanced the overall API structure

Added Files

  • docs/api/low_level.md

How can I customize these reviews?

Check out the Hyperlint AI Reviewer docs for more information on how to customize the review.

If you just want to ignore it on this PR, you can add the hyperlint-ignore label to the PR. Future changes won't trigger a Hyperlint review.

Note on link checks specifically: we only check the first 30 links in a file and cache the results for several hours (so, for instance, a page you just added may be flagged). Our recommendation is to add the hyperlint-ignore label to the PR to skip the link check for this PR.

github-actions bot commented Apr 25, 2025

Docs Preview

commit: d5047e2
Preview URL: https://802d74fe-pydantic-ai-previews.pydantic.workers.dev

@DouweM DouweM marked this pull request as draft April 30, 2025 21:46
@samuelcolvin samuelcolvin marked this pull request as ready for review May 1, 2025 05:01
@alexmojaki (Contributor) left a comment

I think the high level API should be using the low level API code path to make sure things stay in sync.

In the agent graph I see this:

```python
model_request_parameters = ctx.deps.model.customize_request_parameters(model_request_parameters)
model_response, request_usage = await ctx.deps.model.request(
    ctx.state.message_history, model_settings, model_request_parameters
)
```

That `customize_request_parameters` seems important and missing from the low-level code.
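As a minimal sketch of the point (using hypothetical stub classes rather than pydantic_ai's real types), the low-level helper could apply the same hook before issuing the request:

```python
import asyncio
import dataclasses


@dataclasses.dataclass
class ModelRequestParameters:
    """Stub stand-in for pydantic_ai's ModelRequestParameters."""

    function_tools: list = dataclasses.field(default_factory=list)
    allow_text_output: bool = True


class StubModel:
    """Stub stand-in for a pydantic_ai Model implementation."""

    def customize_request_parameters(self, params):
        # a real model might rewrite tool schemas or strip unsupported options
        params.allow_text_output = True
        return params

    async def request(self, messages, model_settings, params):
        return f"response to {messages[-1]!r}", {"requests": 1}


async def model_request(model, messages, model_settings=None, model_request_parameters=None):
    params = model_request_parameters or ModelRequestParameters()
    # apply the customization hook first, mirroring the agent graph
    params = model.customize_request_parameters(params)
    return await model.request(messages, model_settings, params)


response, request_usage = asyncio.run(model_request(StubModel(), ["hello"]))
print(response)  # response to 'hello'
```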

@samuelcolvin (Member, Author)

> I think the high level API should be using the low level API code path to make sure things stay in sync.
>
> In the agent graph I see this:
>
> ```python
> model_request_parameters = ctx.deps.model.customize_request_parameters(model_request_parameters)
> model_response, request_usage = await ctx.deps.model.request(
>     ctx.state.message_history, model_settings, model_request_parameters
> )
> ```
>
> That customize_request_parameters seems important and missing from the low level code.

I don't agree: the amount of code here is tiny, and the savings from caching results are minimal, but I guess we can do that later if you like.

@alexmojaki (Contributor)

`customize_request_parameters` isn't important?

@samuelcolvin (Member, Author)

oh sorry, I'll look.

Comment on lines +65 to +72

```python
[ModelRequest.user_text_prompt('What is 123 / 456?')],
model_request_parameters=ModelRequestParameters(
    function_tools=[
        ToolDefinition(
            name=Divide.__name__.lower(),
            description=Divide.__doc__ or '',
            parameters_json_schema=Divide.model_json_schema(),
        )
```
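For reference, the snippet above derives the tool definition from a pydantic model class; a hypothetical `Divide` model consistent with that usage might look like this (a sketch, not code from the PR):

```python
from pydantic import BaseModel


class Divide(BaseModel):
    """Divide two numbers."""

    numerator: float
    denominator: float


# the quoted snippet builds the tool definition from the class itself:
name = Divide.__name__.lower()        # 'divide'
description = Divide.__doc__ or ''    # 'Divide two numbers.'
schema = Divide.model_json_schema()   # JSON schema dict describing both fields
print(name, sorted(schema['properties']))
```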
Contributor:

I think we should aim to make this more ergonomic in the not too distant future, e.g.

```python
model_response = await model_request(
    'openai:gpt-4.1-nano',
    'What is 123 / 456?',
    function_tools=[Divide],
)
```

I don't think the current API prevents having this in the future, but it's worth keeping in mind.

```python
            parameters_json_schema=Divide.model_json_schema(),
        )
    ],
    allow_text_output=True,  # Allow model to either use tools or respond directly
```
Contributor:

Does this help with the example?

"""
```

_(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)_
Contributor:

Personally I find this sentence contradictory


> For most application use cases, the higher-level [`Agent`][pydantic_ai.Agent] API provides a more convenient interface with additional features such as built-in tool execution, structured output parsing, and more.

> ## OpenTelemetry Instrumentation
Contributor:

Should this heading mention logfire?


> ## OpenTelemetry Instrumentation
>
> As with [agents][pydantic_ai.Agent] you can enable OpenTelemetry/logfire instrumentation with just a few extra lines
Contributor:

Why this link to the agents page? Would the agents guide make more sense than the API page?

Contributor:

Are we no longer capitalising logfire?


> _(This example is complete, it can be run "as is")_
>
> See [Debugging and Monitoring](logfire.md) for more details.
Contributor:

Suggested change:

```diff
- See [Debugging and Monitoring](logfire.md) for more details.
+ See [Debugging and Monitoring](logfire.md) for more details, including how to instrument with plain OpenTelemetry without logfire.
```

> ```python
> #> Paris
> ```
>
> _(This example is complete, it can be run "as is")_
Contributor:

How about a screenshot?


```python
@dataclasses.dataclass
class LowLevelModelResponse(messages.ModelResponse):
    """Subclass of [`ModelResponse`][pydantic_ai.messages.ModelResponse] that includes usage information."""
```
Contributor:

I'm still not sure why the usage isn't included in `ModelResponse`. Is it because people might misinterpret it to mean the usage of the whole agent run? What if it was called `request_usage`? People might actually want to access that info via `agent_run.all_messages()`.

Contributor:

Otherwise, does this name need to be tied to the module? What about `ModelResponseWithUsage`?

Member Author:

Yeah, we can probably change it everywhere. Now that I see it's the usage for just that request, I think it makes more sense to just add it to `ModelResponse`.

```python
class LowLevelModelResponse(messages.ModelResponse):
    """Subclass of [`ModelResponse`][pydantic_ai.messages.ModelResponse] that includes usage information."""

    # the default_factor is required here to passify dataclasses since `ModelResponse` has fields with defaults :-(
```
Contributor:

Suggested change:

```diff
- # the default_factor is required here to passify dataclasses since `ModelResponse` has fields with defaults :-(
+ # the default_factor is required here to pacify dataclasses since `ModelResponse` has fields with defaults :-(
```

```python
    """Subclass of [`ModelResponse`][pydantic_ai.messages.ModelResponse] that includes usage information."""

    # the default_factor is required here to passify dataclasses since `ModelResponse` has fields with defaults :-(
    usage: usage.Usage = dataclasses.field(default_factory=usage.Usage)
```
Contributor:

Suggested change:

```diff
- usage: usage.Usage = dataclasses.field(default_factory=usage.Usage)
+ usage: usage.Usage = dataclasses.field(kw_only=True)
```

Would this work?
