Limit rare race condition in topo.Server
#17165
Conversation
Signed-off-by: Florent Poinsard <[email protected]>
Codecov Report

Attention: Patch coverage is

```
@@           Coverage Diff            @@
##             main   #17165      +/-   ##
==========================================
+ Coverage   67.32%   67.38%   +0.06%
==========================================
  Files        1569     1569
  Lines      252552   252681     +129
==========================================
+ Hits       170032   170280     +248
+ Misses      82520    82401     -119
```
Thanks!
Nice work! It does seem as though the last commit has started breaking one of the zk2 topo tests.
@mattlord and I have seen this test flake in the past. I will merge this PR, as the failure observed on the ZK2 test does not seem to be caused by it.
Signed-off-by: Renan Rangel <[email protected]>
Description

This PR was motivated by what first looked like a flaky test: `TestRunFailsToStartTabletManager` was failing very often due to either a segfault or a race condition, both on the `topo.Server`. It turns out that this test actually exposes a race condition that can only happen when the server is being closed while another goroutine is accessing the `topo.Server`'s connections.

Below are the two traces that were observed when running `TestRunFailsToStartTabletManager`:

- Segmentation fault
- Race condition

I have been through the code and found at least 3 other code paths that can expose a similar race condition.
Proposed Solutions

I have explored several options to fix this issue:

1. Add a mutex to `topo.Server`, locking it whenever we need to close it or use the connections. I dropped this idea as it would impact performance negatively, especially for our gRPC servers, which use `topo.Server` a lot and need fast concurrent operations. gRPC is not even affected by such a panic/segfault: when closing the server we first wait for all gRPC connections to close.
2. When closing the `topo.Server`, also close the topo connection and set it to `nil`. However, our implementations of the topo connections don't check whether the connection is `nil` before using it. For this reason, and for a potential performance impact, I gave up on this idea.
3. Set the `topo.Server` fields to `nil`. This was given up for the same reason (on the topo connection implementations) explained in idea number 2.
4. Before using the connection, set a boolean to `true` or increment a counter; the `Close` method would only proceed once the boolean is set back to `false` or the counter reaches 0. This would imply adding some sort of timeout to the `Close` method to force-close after a certain time even if not all usages of the connection are complete. This method would be a bit more invasive and require a decent amount of changes. Given that the impact of this issue is very small and only occurs when the binary was going to stop anyway, I opted for option 5 instead.
5. Cancel a context in the `Close` method of the `topo.Server`: checking that context before accessing the topo connections should avoid most of our issues. I think this is a good first approach; if we observe more panics/segfaults in the future, we could implement option 4.

This PR implements option 5, making sure we check the context every time we need to access the topo connection. It is not a 100% safe solution, as some race conditions may still happen, but: 1. they are already pretty unlikely, and 2. this already safeguards against a lot of potential race conditions.