fix: use asyncio.Lock over Event #1095
base: main
Conversation
We might need to carefully audit all the reads and writes to self._current and self._next. I'd expect to see us holding the lock whenever we read or write those values. WDYT?
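A minimal sketch of the pattern the comment suggests, using hypothetical names (Cache, set_pair, get_current are illustrative, not the connector's actual API): every read and write of the shared attributes happens while holding the same asyncio.Lock.

```python
import asyncio
from typing import Optional


class Cache:
    """Illustrative stand-in for the connector's refresh cache."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()
        self._current: Optional[str] = None
        self._next: Optional[str] = None

    async def set_pair(self, current: str, next_: str) -> None:
        # write both values under the lock so they stay consistent
        async with self._lock:
            self._current = current
            self._next = next_

    async def get_current(self) -> Optional[str]:
        # read under the same lock, never unguarded
        async with self._lock:
            return self._current


async def main() -> Optional[str]:
    cache = Cache()
    await cache.set_pair("cert-1", "cert-2")
    return await cache.get_current()


print(asyncio.run(main()))  # -> cert-1
```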
@@ -138,7 +138,7 @@ async def force_refresh(self) -> None:
         Forces a new refresh attempt immediately to be used for future connection attempts.
         """
         # if next refresh is not already in progress, cancel it and schedule new one immediately
-        if not self._refresh_in_progress.is_set():
+        if not self._lock.locked():
             self._next.cancel()
             self._next = self._schedule_refresh(0)
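For reference, asyncio.Lock.locked() reports whether the lock is currently held without blocking, which is the semantics the new check in the diff relies on. A minimal, self-contained demonstration:

```python
import asyncio


async def main() -> list:
    lock = asyncio.Lock()
    states = [lock.locked()]          # False: nothing holds the lock yet
    async with lock:
        states.append(lock.locked())  # True: held inside "async with"
    states.append(lock.locked())      # False again after release
    return states


print(asyncio.run(main()))  # -> [False, True, False]
```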
The Go and Java connectors use the mutex to protect access and modification of self._next and self._current, to avoid the race conditions that occur when there are concurrent requests to refresh from multiple threads. The Python connector probably needs to do the same. See the Go RefreshAheadCache instantiation and scheduleRefresh().
Still have a couple more spots to acquire the lock; the tricky one is in cloud-sql-python-connector/google/cloud/sql/connector/instance.py, lines 97 to 99 in f8de0f1.
We are currently improperly using asyncio.Event in our code. Event is meant to be used with the wait coroutine, which holds a coroutine from running until set() is called. We don't use wait anywhere in our code, so the Event is not having the intended effect. It is much easier to just use asyncio.Lock, which can be used as a context manager to acquire and release the mutex lock.
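A small sketch of why the Lock approach works (RefreshState, refresh, and main are hypothetical names for illustration, not the connector's actual code): concurrent coroutines serialize on the lock via "async with", so a read-modify-write across an await point cannot be interleaved and lose updates.

```python
import asyncio


class RefreshState:
    """Illustrative shared state guarded by an asyncio.Lock."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()
        self._value = 0

    async def refresh(self) -> None:
        # "async with" acquires the lock and releases it even on error
        async with self._lock:
            current = self._value
            await asyncio.sleep(0)  # simulate awaiting a refresh RPC
            self._value = current + 1


async def main() -> int:
    state = RefreshState()
    # ten concurrent refreshes serialize on the lock, so no update is lost
    await asyncio.gather(*(state.refresh() for _ in range(10)))
    return state._value


print(asyncio.run(main()))  # -> 10
```

Without the lock, every task would read 0 before any task wrote, and the final value would be 1 instead of 10.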