Prevent duplicate resume continuation on unary request #340
Conversation
Signed-off-by: Eddie Seay <[email protected]>
@rebello95 I am going to keep this in Draft until I give the reporters from #333 the opportunity to integrate this branch and validate it in some way. My means of reproducing the issue is so intermittent that it's honestly hard for me to tell whether my solution worked or whether the problem is simply not occurring because the reproduction steps are unsuccessful.
Thank you for this fix! Looks good overall, just a couple questions
```swift
let hasResumed = Locked(false)
self.cancelable = self.sendUnary { response in
    guard !hasResumed.value else {
        return assertionFailure("Attempting to resume continuation twice")
```
What's your thought process for keeping this assertion in place? I assume it will trigger in the cases described by the PR description, which could be disruptive for local development
My reasoning was mostly that I felt like failing silently didn't really seem appropriate either.
In the interest of time, the change here specifically addresses the continuation resuming twice, which was resulting in the worst possible experience in both debug and release: a crash. It does not necessarily address the root cause of why `sendUnary`'s closure is getting triggered a second time.
It felt like keeping an assertion there, such that the debugger pauses when it happens during development, would provide a means to re-raise the issue and prompt deeper analysis of the root cause at a later date if we see it happening regularly.
Up to you how you'd like to proceed though. I can just as easily `os_log` something, create an issue and put it in a comment, etc.
> It felt like keeping an assertion there, such that the debugger pauses when it happens during development, would provide a means to re-raise the issue and prompt deeper analysis of the root cause at a later date if we see it happening regularly.
>
> Up to you how you'd like to proceed though. I can just as easily `os_log` something, create an issue and put it in a comment, etc.
I get where you're coming from. That said, if I'm understanding correctly, this is an issue in the underlying library/dependency, not in how the caller is using the API. If a user hits an assertion in debug, I think they'd assume they were doing something wrong, when in practice they don't need to do anything. Since they end up getting a non-successful response once anyway, I think adding an assertion will introduce confusion/distraction for the end user. IMO keeping an `os_log` here seems sufficient, unless you feel strongly.
Works for me! Would you also like us to make an Issue just to track the underlying problem for later investigation?
I think an inline comment is probably sufficient :)
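For reference, a rough sketch of where this landed. This is illustrative only, not the merged change: `ContinuationGuard` is a hypothetical name, and an `NSLock`-protected flag stands in for Connect's internal `Locked` wrapper. The guard logs via `os_log` and carries an inline comment in place of a tracking issue, per the discussion above.

```swift
import Foundation
import os.log

/// Sketch only: resumes a continuation at most once, logging (rather than
/// asserting) on a duplicate attempt so local development isn't interrupted
/// by a debugger stop.
final class ContinuationGuard<T> {
    private let lock = NSLock()
    private var hasResumed = false

    func resume(_ continuation: CheckedContinuation<T, Never>, returning value: T) {
        lock.lock()
        let isDuplicate = hasResumed
        hasResumed = true
        lock.unlock()

        guard !isDuplicate else {
            // Known race in the transport: a timeout and a late server error
            // can each fire the unary callback (see #333). Log and return
            // instead of crashing on a second `resume`.
            os_log("Attempting to resume continuation twice", type: .error)
            return
        }
        continuation.resume(returning: value)
    }
}
```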
Signed-off-by: Eddie Seay <[email protected]>
Force-pushed from fe70ca7 to b6b2ddd
Signed-off-by: Eddie Seay <[email protected]>
Thanks!
Issue #333 reported an issue where the continuation in the `UnaryAsyncWrapper` was getting resumed twice, which results in a fatal error. Additional details are available in the issue, but the gist is: when the client is configured to use the gRPC protocol (and therefore uses swift-nio for networking instead of URLSession) and has a short timeout interval, and the server responds with an error at around the same time that the timeout occurs, the callback that resumes the continuation may fire twice.

I was able to replicate this issue by adjusting the Eliza app's `ProtocolClient` to have a timeout of 0.25s and pointing Eliza at a different Connect server (my own team's), such that requests to the Eliza RPCs would result in 404 errors.

This change simply introduces a thread-safe `Bool` to track whether the continuation has been resumed; if the callback attempts to fire twice, it will hit an `assertionFailure` and return instead of calling `resume` again and causing a fatal error.

Closes #333
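For readers unfamiliar with the failure mode, here is a minimal, self-contained sketch of the pattern described above. It is not the PR's actual code: `Locked` is Connect's internal wrapper, so a hypothetical `OnceFlag` type built on `NSLock` stands in for it, and `performRequest` is a made-up transport whose completion handler deliberately fires twice (as a timeout racing a late server error would).

```swift
import Foundation

/// Hypothetical stand-in for Connect's `Locked(false)`: a flag that can be
/// "fired" exactly once across threads.
final class OnceFlag {
    private let lock = NSLock()
    private var fired = false

    /// Returns true the first time it is called, false every time after.
    func tryFire() -> Bool {
        lock.lock()
        defer { lock.unlock() }
        if fired { return false }
        fired = true
        return true
    }
}

/// Made-up transport illustrating the bug: the completion handler fires
/// twice, e.g. once for a timeout and once for a late server error.
func performRequest(completion: @escaping (Result<String, Error>) -> Void) {
    completion(.failure(URLError(.timedOut)))
    completion(.failure(URLError(.badServerResponse)))
}

func send() async throws -> String {
    try await withCheckedThrowingContinuation { continuation in
        let hasResumed = OnceFlag()
        performRequest { result in
            // Resume at most once; a second callback asserts and returns
            // instead of crashing with "continuation resumed twice".
            guard hasResumed.tryFire() else {
                assertionFailure("Attempting to resume continuation twice")
                return
            }
            continuation.resume(with: result)
        }
    }
}
```

Calling `try await send()` would throw the first error (the timeout); the second callback trips the assertion in a debug build and is silently dropped in release, mirroring the behavior this PR describes rather than crashing on a double resume.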