Deadlock when a process tries to open and lock on the same file twice #315
If we use …

```python
from threading import Barrier, Condition, Thread


def thread_fds():
    from pathlib import Path
    return {int(p.stem) for p in Path('/proc/thread-self/fd').iterdir()}


barrier = Barrier(3)
i, j = 0, 0
cv = Condition()
parent_fds = thread_fds()


def foo(tid):
    global i, j
    with cv:
        cv.wait_for(lambda: i == tid)  # print initial fds in order
        print(tid, sorted(thread_fds() - parent_fds))
        i += 1  # wake up next thread
        cv.notify_all()
    barrier.wait()  # open file after all threads print initial fds
    with open('/dev/null'):
        barrier.wait()  # print new fds after all threads open file
        with cv:
            cv.wait_for(lambda: j == tid)  # print new fds in order
            print(tid, sorted(thread_fds() - parent_fds))
            j += 1  # wake up next thread
            cv.notify_all()
        barrier.wait()  # close file after all threads print new fds


def main():
    for tid in range(3):
        Thread(target=foo, args=(tid,)).start()


main()
```

Output on my machine:
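The claim the script above demonstrates, that all threads of a process see one shared file descriptor table, can also be checked with a much smaller sketch (Linux-only, since it reads `/proc/thread-self/fd`; all names here are illustrative, not part of filelock):

```python
import os
import tempfile
import threading

# Open a file on the main thread; the fd lands in the process-wide table.
fd = os.open(tempfile.mkstemp()[1], os.O_RDONLY)
seen = {}


def worker() -> None:
    # /proc/thread-self/fd lists this thread's descriptors; because the
    # table is shared, the fd opened by the main thread shows up here too.
    seen["visible"] = str(fd) in os.listdir("/proc/thread-self/fd")


t = threading.Thread(target=worker)
t.start()
t.join()
print(seen["visible"])  # True on Linux: one fd table per process
```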
TL;DR: Use …

The core problem is the granularity of the re-entrant control. When I tried building a wheel by myself, I suddenly found that …
Using … However, there are some minor problems:
It seems all threads in one process share the same file descriptor table, and … So there is not a simple method to tell whether the current thread has already opened a file. That being said, I think we can at least emit a warning when we detect that the lock file has already been opened in the current process. This is a necessary condition for this kind of deadlock, although not a sufficient one, so if it does cause a deadlock, users will find the root cause quickly.
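A sketch of such a warning check (Linux-only; the helper name is hypothetical, not filelock API): compare the device/inode pair of every `/proc/self/fd` entry against the lock file, which keeps the match correct across symlinks and renames.

```python
import os
import tempfile


def path_open_in_process(path: str) -> bool:
    """Best-effort check: is `path` already open via some fd of this process?

    Linux-only sketch; compares (st_dev, st_ino) so symlinks/renames still match.
    """
    try:
        target = os.stat(path)
    except FileNotFoundError:
        return False
    for name in os.listdir("/proc/self/fd"):
        try:
            st = os.stat(f"/proc/self/fd/{name}")  # stat() follows the fd symlink
        except OSError:
            continue  # fd was closed between listdir() and stat()
        if (st.st_dev, st.st_ino) == (target.st_dev, target.st_ino):
            return True
    return False


fd, path = tempfile.mkstemp()  # mkstemp leaves its fd open
print(path_open_in_process(path))  # True while that fd is open
os.close(fd)
print(path_open_in_process(path))  # False once nothing holds it
```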
I read this stackexchange question (actually, I was inspired by the comment from Dave Reikher) before writing code to verify that file descriptors are shared across the threads of a process (as we learned in an OS course), but it seems this is not in the POSIX spec. Anyway, I agree that "there is not a simple method to tell if the current thread has already opened a file" and that we can do something when "the lock file has been opened in the current process". Since checking all file descriptors might be time consuming (if the process has opened many files), this behavior can't be enabled by default in my opinion. Maybe there can be some DEBUG mode? Or use something like the following (this is very similar to …):

```python
from dataclasses import dataclass
from threading import local


@dataclass
class FileLockContext:
    """A dataclass which holds the context for a ``BaseFileLock`` object."""

    # The context is held in a separate class to allow optional use of thread-local
    # storage via the ThreadLocalFileContext class.

    #: The path to the lock file.
    lock_file: str

    #: The default timeout value.
    timeout: float

    #: The mode for the lock files.
    mode: int

    #: The file descriptor for the *_lock_file* as returned by os.open(); not None when the lock is held.
    lock_file_fd: int | None = None

    #: The lock counter is used for implementing the nested locking mechanism.
    lock_counter: int = 0  # Incremented on acquire; the lock is only released when this drops to 0.


class ThreadLocalFileContext(FileLockContext, local):
    """A thread-local version of the ``FileLockContext`` class."""


class ContextStore:
    def __init__(self) -> None:
        self.data: dict[tuple[str, float, int, bool], FileLockContext] = {}  # maybe use WeakValueDictionary


default_context_store = ContextStore()


class FileLock:
    def __init__(
        self,
        lock_file: str,
        timeout: float = -1,
        mode: int = 0o644,
        thread_local: bool = True,
        *,
        is_singleton: bool = False,
        context_store: ContextStore | None = None,
    ) -> None:
        if context_store is None:
            context_store = default_context_store
        args = (lock_file, timeout, mode)
        self._context = context_store.data.setdefault(
            (*args, thread_local),
            (ThreadLocalFileContext if thread_local else FileLockContext)(*args),
        )
```
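The sharing mechanism in the sketch above comes down to `dict.setdefault` keyed by the constructor arguments: every instance created with the same key receives the same mutable context object, so a re-entrance counter stored in it is shared process-wide. In miniature (all names hypothetical):

```python
from dataclasses import dataclass


@dataclass
class Ctx:
    lock_file: str
    lock_counter: int = 0  # shared re-entrance counter


store: dict[str, Ctx] = {}


def get_context(lock_file: str) -> Ctx:
    # setdefault stores the freshly built Ctx only when the key is absent;
    # otherwise the existing object is returned (note the default argument
    # is still constructed eagerly on every call).
    return store.setdefault(lock_file, Ctx(lock_file))


a = get_context("demo.lock")
b = get_context("demo.lock")
a.lock_counter += 1
print(a is b, b.lock_counter)  # True 1
```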
Minimal reproducible example:
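One way to sketch the underlying behavior with `fcntl.flock` directly (not the author's original snippet; Linux semantics, with `LOCK_NB` so the demo fails fast instead of actually hanging): a second `open()` of the same file creates a new open file description, and the same process then waits on a lock it already holds.

```python
import fcntl
import os
import tempfile

path = tempfile.mkstemp()[1]

fd1 = os.open(path, os.O_RDWR)
fcntl.flock(fd1, fcntl.LOCK_EX)  # first lock holder acquires

fd2 = os.open(path, os.O_RDWR)  # second open -> new open file description
try:
    # Without LOCK_NB this call would block forever: the deadlock.
    fcntl.flock(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    print("acquired")
except BlockingIOError:
    print("would deadlock")  # the process is blocked by its own lock
```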
This is dangerous: even when we create just one filelock, we don't know whether some other code has already created a filelock for the same file in this process and we are currently inside it.
A possible solution would be to check for an existing `fd` on the file in the `UnixFileLock._acquire` function. On Linux this is easy, because a Python `fd` is just an integer, and all `fd`s can be inspected under `/proc/{process_id}/fd/`.

cc @gaborbernat
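A sketch of that check (hypothetical helper, Linux-only; not filelock API): readlink each entry of `/proc/self/fd` and collect the descriptors whose target resolves to the lock file. A real implementation in `UnixFileLock._acquire` could run this before `os.open()` and warn on a non-empty result.

```python
import os
import tempfile


def fds_open_for(path: str) -> list[int]:
    """Return this process's fds whose /proc/self/fd symlink points at `path`.

    Linux-only sketch of the check proposed above.
    """
    real = os.path.realpath(path)
    fds = []
    for name in os.listdir("/proc/self/fd"):
        try:
            if os.path.realpath(os.readlink(f"/proc/self/fd/{name}")) == real:
                fds.append(int(name))
        except OSError:
            continue  # fd vanished, or the entry is a pipe/socket, not a path
    return fds


fd, path = tempfile.mkstemp()
print(fd in fds_open_for(path))  # True: mkstemp's fd is still open
```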