How do we implement tools that "do something" #83
Comments
Also, this needs no extra code in PydanticAI, it's just a pattern to document. 😸
Would the `data.run` output automatically be added to the messages, such that the LLM knows about the outcome of the tool call?
The "tool call" message that is parsed to become The response value from You might also want to continue using those messages in another agent, inside One thing I think we've realised over the last few days is that my mental model of our You might think of them more like an |
In general this looks great for most cases. I would also want the ability to pass tools like you do now with retrievers; it's up to me to decide whether I'm concerned about allowing the LLM to call something that has side effects. Maybe an option that builds on this proposal would be to have a `ToolBaseModel` that has an implementation of `run`, and I pass a subclass to the `Agent` constructor (or attach it with a decorator) and can use the validator as the gatekeeper: if I got the result and it validates, then I'm OK for it to run and for the result to go into the next LLM call that is triggered automatically. And if I want to use it manually, I pass it as the `result_type` instead, like above.
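A rough sketch of that `ToolBaseModel` idea; none of this is an existing PydanticAI API, and the `DeleteFile` subclass is purely illustrative:

```python
from typing import Any

from pydantic import BaseModel, model_validator


class ToolBaseModel(BaseModel):
    """Hypothetical base class: subclasses describe an action and how to execute it."""

    def run(self) -> Any:
        raise NotImplementedError


class DeleteFile(ToolBaseModel):
    path: str

    @model_validator(mode='after')
    def gatekeeper(self) -> 'DeleteFile':
        # Validation is the gatekeeper: if this fails, run() is never reached.
        if self.path in {'/', '/etc', '/home'}:
            raise ValueError('refusing to touch a protected path')
        return self

    def run(self) -> str:
        print(f'deleting {self.path!r}')
        return f'{self.path} deleted'


# In the proposal, DeleteFile would either be attached to the Agent so that a
# validated result is run and fed into the next LLM call automatically, or
# passed as result_type for the manual flow described above.
```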
Currently `retriever`s are tools that are expected to be benign, i.e. have no side effects, so you don't care whether the model chooses to call them or not. Technically there's nothing to stop you from having retrievers with side effects, and you could even look in the message history to see if they were called.
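For comparison, a benign retriever might look like the sketch below, using `@agent.tool_plain` (the later name for what this issue calls a retriever); the weather function is made up:

```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')


# A benign tool: no side effects, so it doesn't matter whether or how often
# the model decides to call it.
@agent.tool_plain
def get_weather(city: str) -> str:
    """Return the current weather for a city (stubbed lookup)."""
    return f'It is sunny in {city}.'


result = agent.run_sync('What is the weather like in Paris?')
# The model may or may not call get_weather; either way nothing irreversible happened.
```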
But what is our recommended way of using tools that do something?
I would suggest something like this:
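A sketch along those lines, with a hypothetical `SendEmail` model: the LLM only produces a validated `result_type`, and the caller decides if and when to invoke its `run` method (`result_type` and `result.data` are the names referenced elsewhere in this thread):

```python
from pydantic import BaseModel

from pydantic_ai import Agent


class SendEmail(BaseModel):
    """The action we want the model to prepare, but not execute."""

    to: str
    subject: str
    body: str

    def run(self) -> str:
        # The side effect lives here, and only we decide when it happens.
        print(f'sending email to {self.to!r}: {self.subject}')
        return 'email sent'


agent = Agent('openai:gpt-4o', result_type=SendEmail)

result = agent.run_sync('Email Jane and ask her to move our meeting to Friday.')

# Validation has already happened; nothing has been executed yet.
email = result.data
outcome = email.run()  # the side effect is triggered explicitly, in our code
```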
This looks like a bit more logic, but it has some nice advantages:
- you call `run` explicitly, without it being hidden in the "magic" of PydanticAI