Add support for request timeout #521
base: master
Conversation
Wire up a timer to perform the appropriate type of aborting and resource cleanup for each client. Handles canceling the timer when the request completes.
cc @lrhn @jakemac53 I'd like to get some initial feedback on the approach and whether this is worth doing. I think this is worth doing because timeouts are a common case for wanting to abort requests, and they are easier to get wrong than to get right. The naive approach of

Please let me know what you think of this initial version. If we think it looks like a good approach I can start work on tests; I'm not sure how easy it is going to be to cover all the behavior here well.
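To make the shape concrete, here is a minimal sketch of the wiring described above (the send and abort callbacks are stand-ins; the actual diff does client-specific cleanup rather than taking a generic callback):

import 'dart:async';

/// Minimal sketch: start a timer when a timeout is given, let it trigger
/// whatever abort/cleanup is appropriate for the client, and cancel it once
/// the request completes (successfully or not).
Future<T> withTimeout<T>(
    Future<T> Function() send, Duration? timeout, void Function() abort) {
  if (timeout == null) return send();
  final timer = Timer(timeout, abort);
  return send().whenComplete(timer.cancel);
}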
The api lgtm
lib/src/io_client.dart
Outdated
var stream = request.finalize();

Timer? timer;
late void Function() onTimeout;
Can make this nullable instead of late, and do onTimeout?.call(); or onTimeout!() in the timer callback. That could potentially be cheaper than a late field. (Unless the compiler is being competent about late non-nullable fields and using null as a placeholder anyway.)
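Abbreviated sketch of the two variants for the lines quoted above (the duration and abort action are stand-ins, not code from this diff):

import 'dart:async';

void demo(Duration duration, void Function() abort) {
  // As in the diff: late, so a read before assignment throws at run time,
  // with no marker at the use site.
  late void Function() onTimeoutLate;
  Timer(duration, () => onTimeoutLate());
  onTimeoutLate = abort;

  // As suggested here: nullable, so the "might not be assigned yet"
  // assumption is visible where the callback is used.
  void Function()? onTimeoutNullable;
  Timer(duration, () => onTimeoutNullable?.call()); // or onTimeoutNullable!()
  onTimeoutNullable = abort;
}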
I think I have a slight preference for late as it matches the intended semantics more closely IMO. Outside of performance concerns, do you have a preference one way or the other?
I consider late an unsafe and dynamic feature at the same level as, well, dynamic. I try to avoid it as much as possible, just as I do dynamic. Late variables have their uses, where they make things easier, but they never make them safer.

Using late (and dynamic) allows statically undetectable run-time errors with no visible marker at the use position. I prefer to use something nullable and ! when possible because it is more verbose and therefore highlights the assumption at the read site (by adding !).

This is a fairly fundamental library, and I think we should always choose safety (and performance) over brevity.
Using late here, I believe, is more about choosing something that matches the underlying semantics/implementation than about brevity. I don't feel super strongly either way, but this is always assigned, and I don't agree it is any safer to use ! everywhere - we would not be adding any null checks either way.
Hmm, actually on re-reading it I think this is only assigned if timeout is non-null, in which case making it nullable does match the semantics better, so I retract my previous comment and agree it should be nullable.
I wasn't taking enough advantage of flow analysis. Turns out we don't need late or ?. If I switch a few lines around then definite assignment analysis takes over.
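Roughly what the reordering looks like (a generic sketch rather than the exact diff; contentTimeout is the parameter added in this PR, the abort action is a stand-in):

import 'dart:async';

Timer? startTimer(Duration? contentTimeout, void Function() abort) {
  void Function() onTimeout; // neither late nor nullable
  Timer? timer;
  if (contentTimeout != null) {
    // Assigned before the closure that reads it is created, so definite
    // assignment analysis is satisfied without late or ?.
    onTimeout = abort;
    timer = Timer(contentTimeout, () => onTimeout());
  }
  return timer; // the caller cancels this once the request completes
}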
One more question about the API design. Some background: there are differences between behaviors with the

The immediate question is, should the Stream emit the Timeout or not. If it doesn't, the timeout would apply only to receiving the headers and you'd need to cancel a stream subscription to get a full timeout.
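For reference, the two behaviors roughly correspond to the timeouts that already exist in dart:async (this is not the API proposed here; the durations and setup are arbitrary):

import 'dart:async';
import 'package:http/http.dart' as http;

Future<void> demo(http.Client client, http.BaseRequest request) async {
  // Headers-only timeout: the returned future errors if no response has
  // arrived within 5 seconds, but a slow body after that is unbounded.
  final response =
      await client.send(request).timeout(const Duration(seconds: 5));

  // Body timeout: Stream.timeout errors if the gap between chunks exceeds
  // 5 seconds (note: per event, not a total budget for the whole body).
  await for (final chunk
      in response.stream.timeout(const Duration(seconds: 5))) {
    print(chunk.length);
  }
}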
It sounds like we have two APIs, one future based and one stream based. It's hard to say what a "timeout" means if the author cannot specify it. Do they need the entire result before that time (when do I need to move on, with or without a response?), or do they just need some evidence that the remote end exists and is willing to answer before that time (how long do I wait if the remote server is down?). I can see both use cases as valid.

Is there any way to cancel the timeout? So, set a timeout of 5 seconds, which will then kill the connection after that time (and put an exception on the stream), but if I get a header and a plausible start of the response before that time, then I can cancel the timeout (or start a new timer). That would make it easier for the client to control the timeout behavior without having to know up front whether they're waiting for a non-remote-end timeout or a large response.
I can see the use cases for both variants of timeout. One thing I'm not sure of is whether most authors would want to time out the connection (which they can do with a
I agree that both are probably valid; it's hard to say whether both are worth the API space. After some discussion I think we are leaning towards the timeout applying to the Stream as well. It's slightly less general, but also a friendlier API for the case it solves, and my hunch is that it is the more common one.

What we can do, though, is hold on to a little room for the other one. If we name the argument

Currently I'm leaning towards using the name
Not that I can think of without a difficult-to-understand API.
After some further discussion we think there is more risk in getting this API wrong than there is in delaying it until after the dust settles on NNBD, when we can take more time to design it alongside general request aborting and decide about the rollout plan from there. We may end up with a slower alternate-import type migration for request aborting, and so we could use that same approach to deliver a timeout feature alongside abort with a gradual migration. We should continue to use this discussion to inform our future design, but I don't think I'll hold the null-safe publish on it.
One thing I'm noticing about this as I look at overrides of
API LGTM.
/// If [contentTimeout] is not null the request will be aborted if it takes
/// longer than the given duration to receive the entire response. If the
/// timeout occurs before any reply is received from the server the returned
/// future will as an error with a [TimeoutException]. If the timout occurs
Missing a "will complete as" here?
I'd usually write "the returned future will complete with a [TimeoutException] error."
/// longer than the given duration to receive the entire response. If the
/// timeout occurs before any reply is received from the server the returned
/// future will as an error with a [TimeoutException]. If the timout occurs
/// after the reply has been started but before the entire body has been read
Comma before "but"? (And then probably also after "read").
Maybe "has been started" -> "has started".
Comma before "but"? (And then probably also after "read").
Don't think so; the '... but ...' is a single description of a condition, whereas a comma would suggest the two parts are separate sentences.
I read "but before the entire body has been read" as parenthetical.
If it's not a parenthetical clause, it deserves to be equally prominent to the main clause, and I'd change but
to and
.
I'd still have a comma after read
in either case.
@@ -117,11 +118,18 @@ abstract class BaseRequest {
/// the request is complete. If you're planning on making multiple requests to
/// the same server, you should use a single [Client] for all of those
/// requests.
Future<StreamedResponse> send() async {
///
/// If [contentTimeout] is not null the request will be aborted if it takes
I'd usually say "If [contentTimeout] is provided, the ....". I know you can technically provide a null argument, but users will understand it anyway. After all, you talk about the duration afterwards, and null is not a duration.
///
/// If [contentTimeout] is not null the request will be aborted if it takes
/// longer than the given duration to receive the entire response. If the
/// timeout occurs before any reply is received from the server the returned
Comma after "server"?
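For context, the call-site usage the doc comment above describes would look roughly like this (a sketch against the API proposed in this PR, which may still change per the naming discussion; the 30-second duration is arbitrary):

import 'dart:async';
import 'package:http/http.dart' as http;

Future<void> fetch(Uri url) async {
  final request = http.Request('GET', url);
  try {
    // Proposed in this PR: abort unless the entire response, headers and
    // body, has been received within 30 seconds.
    final response =
        await request.send(contentTimeout: const Duration(seconds: 30));
    final body = await response.stream.bytesToString();
    print(body.length);
  } on TimeoutException {
    // Per the doc comment: a timeout before any reply surfaces here; a
    // timeout mid-body surfaces on the response stream instead.
    print('request timed out');
  }
}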
This PR contains important features, has gone through extensive review, and should be merged by now.