Set connecttimeout with user option (Fixes #318) #321
base: main
Conversation
Looks right to me.
Many of the snapshots need to be updated with this change, unfortunately.
Co-authored-by: Aaron Jacobs <[email protected]>
@atheriel what are you checking for in the tests when you print the request object? Maybe it would be better to test it with
I was largely testing whether the body and headers look right. Very open to suggestions on a better way to do that.
@atheriel maybe with
It wouldn't be hard to fix those problems in httr2.
I'm not convinced that this is the correct solution because it means that (e.g.)
I just wanted to add a user perspective here: a flexible timeout setting is very important when using local models (as served, for instance, by llama.cpp's llama-server).

The reason is that even with Metal execution on an M1 Max, larger models such as the new gemma-3-27b need more than the default 60-second {httr2} timeout to generate a complete response when tool use via {ragnar} is specified. As far as I could find out, tool use does not work with a streaming response, so I have to use

So I (and probably other local LLM users) would certainly appreciate some sort of configurable option here, since unfortunately neither {httr2} nor {curl} allows setting a global timeout option in the user's R session.
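For illustration, a configurable timeout could be exposed through an R user option. This is only a sketch of the idea being requested, not the package's actual API; the option name `ellmer.timeout_s` and the 60-second default are assumptions.

```r
# Hypothetical helper (name and option are assumptions, not an existing API):
# read a user-settable timeout from options(), falling back to a default.
request_timeout <- function(default = 60) {
  opt <- getOption("ellmer.timeout_s", default = default)
  # Guard against nonsensical values before handing them to curl.
  stopifnot(is.numeric(opt), length(opt) == 1, opt > 0)
  opt
}

# A user running a slow local model could then raise the limit once
# per session, e.g. options(ellmer.timeout_s = 600), and the package
# would apply it per request via httr2::req_timeout(req, request_timeout()).
```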
@schelhorn I get that, but I'm not convinced that this is the right timeout to set — even if it takes llama-server a long time to generate the complete response, it should still connect quickly.
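The distinction here maps onto two separate libcurl limits, both reachable from httr2: `connecttimeout` (CURLOPT_CONNECTTIMEOUT) bounds only establishing the connection, while `req_timeout()` (CURLOPT_TIMEOUT) bounds the whole exchange. A sketch, assuming a local server URL:

```r
library(httr2)

req <- request("http://localhost:8080/v1/chat/completions")

# Connect timeout: how long to wait for the connection to be established.
# A local llama-server connects almost instantly, so raising this does
# not help with slow generation.
req_connect <- req_options(req, connecttimeout = 5)

# Total timeout: how long the entire request/response may take.
# This is the limit a slow local model actually hits.
req_total <- req_timeout(req, seconds = 600)
```

A slow-but-reachable server fails on the total timeout, not the connect timeout, which is why raising only `connecttimeout` would not address the local-model case.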
The suggestion in #318 (comment) by @hadley solves #318, and my reported query (including all the others I have that failed) succeeded.
I've: