* Bug: lockf returns false-positive EDEADLK in multiprocess multithreaded environment
@ 2022-01-10 10:46 Ivan Zuboff
[not found] ` <CAL-cVeiHF3+1bq9+RLsdZU-kzfMNYxD0CJBGVeKOrrEpBAyt4Q@mail.gmail.com>
0 siblings, 1 reply; 5+ messages in thread
From: Ivan Zuboff @ 2022-01-10 10:46 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: Type: text/plain, Size: 2266 bytes --]
As an application-level developer, I found counter-intuitive behavior
in the lockf function provided by glibc and the Linux kernel that is
likely a bug.
In glibc, the lockf function is implemented on top of the fcntl system call:
https://github.com/lattera/glibc/blob/master/io/lockf.c
The man page says that lockf can sometimes detect deadlock:
http://manpages.ubuntu.com/manpages/xenial/man3/lockf.3.html
The same is documented for fcntl(F_SETLKW), on top of which lockf is implemented:
http://manpages.ubuntu.com/manpages/hirsute/en/man3/fcntl.3posix.html
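For reference, a blocking lockf(fd, F_LOCK, len) call corresponds
roughly to the following fcntl request (a simplified sketch based on
the POSIX description of lockf, not the literal glibc source):

#include <fcntl.h>
#include <unistd.h>

/* Simplified sketch of lockf(fd, F_LOCK, len): request an exclusive,
 * blocking record lock starting at the current file offset. */
static int lockf_sketch(int fd, off_t len)
{
    struct flock fl = {
        .l_type   = F_WRLCK,   /* exclusive (write) lock */
        .l_whence = SEEK_CUR,  /* relative to the current file offset */
        .l_start  = 0,
        .l_len    = len,       /* 0 would mean "to end of file" */
    };
    /* F_SETLKW blocks; this is the path that can return EDEADLK. */
    return fcntl(fd, F_SETLKW, &fl);
}

As far as I can tell, the F_TLOCK and F_ULOCK cases differ only in the
fcntl command (F_SETLK) and the lock type (F_UNLCK for unlocking).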
The deadlock detection algorithm in the Linux kernel
(https://github.com/torvalds/linux/blob/master/fs/locks.c) seems buggy
because it can easily give false positives. Suppose we have two
processes A and B; process A has threads 1 and 2, and process B has
threads 3 and 4. When these processes execute concurrently, the
following sequence of actions is possible:
1. process A, thread 1 takes lock I
2. process B, thread 3 takes lock II
3. process A, thread 2 tries to take lock II and starts to wait
4. process B, thread 4 tries to take lock I; the kernel detects a
deadlock and lockf returns EDEADLK
Steps to reproduce this scenario (see attached file):
1. gcc -o edeadlk ./edeadlk.c -lpthread
2. Launch "./edeadlk a b" in the first terminal window.
3. Launch "./edeadlk a b" in the second terminal window.
What I expected to happen: both instances of the program keep running steadily.
What happened instead:
Assertion failed: (lockf(fd, 1, 1)) != -1 file: ./edeadlk.c, line:25,
errno:35 . Error:: Resource deadlock avoided
Aborted (core dumped)
Admittedly, this behavior is "right" in a narrow sense. lockf file
locks belong to the process, so at the process level it looks as if a
deadlock is about to happen: process A holds lock I and waits for lock
II, while process B holds lock II and is about to wait for lock I.
However, the algorithm in the kernel doesn't take threads into
account. In fact, no deadlock will happen here as long as the thread
scheduler eventually gives control to one of the threads holding a
lock.
I think the deadlock detection algorithm is problematic because it is
overly pessimistic, which in turn causes spurious lockf errors in
applications. I had to patch my application to use flock instead,
because flock doesn't have this overly pessimistic behavior.
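For what it's worth, the flock-based replacement boils down to
something like this (a minimal sketch, not the actual code from my
application):

#include <sys/file.h>

/* Minimal sketch: whole-file exclusive lock via flock instead of lockf.
 * flock locks belong to the open file description and the kernel does
 * no deadlock detection for them, so this scenario cannot fail with
 * EDEADLK. */
static int lock_file(int fd)
{
    return flock(fd, LOCK_EX);   /* blocks until the lock is granted */
}

static int unlock_file(int fd)
{
    return flock(fd, LOCK_UN);
}

Keep in mind that flock locks the whole file rather than a byte range,
and threads sharing one file descriptor also share the lock, so they
still need their own synchronization.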
[-- Attachment #2: edeadlk.c --]
[-- Type: application/octet-stream, Size: 1051 bytes --]
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <errno.h>
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

/* Report the failed assertion together with errno, then abort. */
#define DIE(x)\
{\
    fprintf(stderr, "Assertion failed: " #x " file: %s, line:%d, errno:%d ", __FILE__, __LINE__, errno); \
    perror(". Error:");\
    fflush(stdout);\
    abort();\
}
#define ASS(x) if (!(x)) DIE(x)
#define ASS1(x) ASS((x) != -1)   /* for calls that return -1 on error */
#define ASS0(x) ASS((x) == 0)    /* for calls that return 0 on success */

/* Endlessly take and release a one-byte POSIX record lock on fd. */
void *deadlocker(void *arg)
{
    int fd = (int)(ptrdiff_t)arg;

    for (;;) {
        ASS1( lockf(fd, F_LOCK, 1) );
        ASS1( lockf(fd, F_ULOCK, 1) );
    }
    return NULL;
}

int main(int argc, char *argv[])
{
    int fd1, fd2;

    ASS( argc >= 3 );
    ASS1( fd1 = creat(argv[1], 0660) );
    ASS1( fd2 = creat(argv[2], 0660) );

    /* Each thread loops on one of the two files.  With two instances of
     * this program running, a process can hold the lock on one file (in
     * one thread) while waiting for the other file (in its other thread),
     * which the kernel's per-process deadlock check reports as EDEADLK. */
    void *thrv;
    pthread_t thr1, thr2;
    ASS0( pthread_create(&thr1, NULL, deadlocker, (void *)(ptrdiff_t)fd2) );
    ASS0( pthread_create(&thr2, NULL, deadlocker, (void *)(ptrdiff_t)fd1) );
    ASS0( pthread_join(thr1, &thrv) );
    ASS0( pthread_join(thr2, &thrv) );
    return 0;
}
* Re: Fwd: Bug: lockf returns false-positive EDEADLK in multiprocess multithreaded environment
[not found] ` <CAL-cVeiHF3+1bq9+RLsdZU-kzfMNYxD0CJBGVeKOrrEpBAyt4Q@mail.gmail.com>
@ 2022-01-31 11:21 ` Jeff Layton
2022-01-31 12:06 ` Ivan Zuboff
0 siblings, 1 reply; 5+ messages in thread
From: Jeff Layton @ 2022-01-31 11:21 UTC (permalink / raw)
To: Ivan Zuboff; +Cc: linux-fsdevel
On Mon, 2022-01-31 at 12:37 +0300, Ivan Zuboff wrote:
> Hello, Jeff!
>
> Several weeks ago I mailed linux-fsdevel about some weird behavior
> I've found. To me, it looks like a bug. Unfortunately, I've got no
> response, so I decided to forward this message to you directly.
>
> Sorry for the interruption and for my bad English -- it's not my
> native language.
>
> Hope to hear your opinion on this!
>
> Best regards,
> Ivan
>
Sorry I missed your message. Re-cc'ing linux-fsdevel, so others can join
in on the discussion:
> ---------- Forwarded message ---------
> From: Ivan Zuboff <anotherdiskmag@gmail.com>
> Date: Mon, Jan 10, 2022 at 1:46 PM
> Subject: Bug: lockf returns false-positive EDEADLK in multiprocess
> multithreaded environment
> To: <linux-fsdevel@vger.kernel.org>
>
> [...]
>
The POSIX locking API predates the concept of threading, and so it was
written with some unfortunate concepts around processes. Because you're
doing all of your lock acquisition from different threads, obviously
nothing should deadlock, but all of the locks are owned by the process
so the deadlock detection algorithm can't tell that.
If you have need to do something like this, then you may want to
consider using OFD locks, which were designed to allow proper file
locking in threaded programs. Here's an older article that predates the
name, but it gives a good overview:
https://lwn.net/Articles/586904/
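Roughly, taking a blocking OFD lock looks like this (an illustrative,
untested sketch; it needs _GNU_SOURCE on glibc, and l_pid must be zero
for the F_OFD_* commands):

#define _GNU_SOURCE            /* for F_OFD_SETLKW */
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch: take a blocking, exclusive OFD lock on a byte range.  OFD
 * locks conflict per open file description, so a thread that wants an
 * independent lock should open() the file itself rather than share a
 * descriptor with other threads. */
static int ofd_lock(int fd, off_t start, off_t len)
{
    struct flock fl = {
        .l_type   = F_WRLCK,
        .l_whence = SEEK_SET,
        .l_start  = start,
        .l_len    = len,
        .l_pid    = 0,         /* required to be 0 for OFD locks */
    };
    return fcntl(fd, F_OFD_SETLKW, &fl);
}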
--
Jeff Layton <jlayton@kernel.org>
* Re: Fwd: Bug: lockf returns false-positive EDEADLK in multiprocess multithreaded environment
2022-01-31 11:21 ` Fwd: " Jeff Layton
@ 2022-01-31 12:06 ` Ivan Zuboff
2022-01-31 12:45 ` Jeff Layton
0 siblings, 1 reply; 5+ messages in thread
From: Ivan Zuboff @ 2022-01-31 12:06 UTC (permalink / raw)
To: Jeff Layton; +Cc: linux-fsdevel
On Mon, Jan 31, 2022 at 2:21 PM Jeff Layton <jlayton@kernel.org> wrote:
>
> [...]
>
> The POSIX locking API predates the concept of threading, and so it was
> written with some unfortunate concepts around processes. Because you're
> doing all of your lock acquisition from different threads, obviously
> nothing should deadlock, but all of the locks are owned by the process
> so the deadlock detection algorithm can't tell that.
>
> If you have need to do something like this, then you may want to
> consider using OFD locks, which were designed to allow proper file
> locking in threaded programs. Here's an older article that predates the
> name, but it gives a good overview:
>
> https://lwn.net/Articles/586904/
>
> --
> Jeff Layton <jlayton@kernel.org>
Thank you very much for your reply.
Yes, I considered both OFD locks and flock for my specific task, and
flock seemed the more reasonable solution because of its portability
(which is valuable in my case). So my specific problem is indeed
solved; I just wanted to warn kernel developers about this kind of
unexpected behavior deep under the hood. If the algorithm in locks.c
can't detect deadlocks without such false positives, maybe it
shouldn't try to detect them at all? I have no strong stance on this
question; I just wanted to inform the people who may care about it and
might want to do something about it.
At least there will now be messages in the mailing list archives
explaining the situation for people who run into the same problem --
not bad in itself!
Best regards,
Ivan
* Re: Fwd: Bug: lockf returns false-positive EDEADLK in multiprocess multithreaded environment
2022-01-31 12:06 ` Ivan Zuboff
@ 2022-01-31 12:45 ` Jeff Layton
2022-01-31 13:10 ` Ivan Zuboff
0 siblings, 1 reply; 5+ messages in thread
From: Jeff Layton @ 2022-01-31 12:45 UTC (permalink / raw)
To: Ivan Zuboff; +Cc: linux-fsdevel
On Mon, 2022-01-31 at 15:06 +0300, Ivan Zuboff wrote:
> [...]
>
> Thank you very much for your reply.
>
> Yes, I considered both OFD locks and flock for my specific task, and
> flock seemed the more reasonable solution because of its portability
> (which is valuable in my case). So my specific problem is indeed
> solved; I just wanted to warn kernel developers about this kind of
> unexpected behavior deep under the hood. If the algorithm in locks.c
> can't detect deadlocks without such false positives, maybe it
> shouldn't try to detect them at all?
>
Heh, I once made this argument as well, but it does work in some
traditional cases so we decided to keep it around. It is onerous to
track though.
OFD and flock locks specifically do not do any sort of deadlock
detection (thank goodness).
> I have no strong stance on this question; I just wanted to inform the
> people who may care about it and might want to do something about it.
>
> At least there will now be messages in the mailing list archives
> explaining the situation for people who run into the same problem --
> not bad in itself!
>
I think the moral of the story is that you don't really want to use
classic POSIX locks in anything that involves locking between different
threads, as their design just doesn't mesh well with that model.
We did try to convey that in the fcntl manpage in the lead-in to the
OFD locks section:

  * The threads in a process share locks. In other words, a
    multithreaded program can't use record locking to ensure that
    threads don't simultaneously access the same region of a file.

Maybe we need to revise that to be clearer? Or possibly add something
that points out that this can also manifest as false positives in
deadlock detection?
--
Jeff Layton <jlayton@kernel.org>
* Re: Fwd: Bug: lockf returns false-positive EDEADLK in multiprocess multithreaded environment
2022-01-31 12:45 ` Jeff Layton
@ 2022-01-31 13:10 ` Ivan Zuboff
0 siblings, 0 replies; 5+ messages in thread
From: Ivan Zuboff @ 2022-01-31 13:10 UTC (permalink / raw)
To: Jeff Layton; +Cc: linux-fsdevel
On Mon, Jan 31, 2022 at 3:45 PM Jeff Layton <jlayton@kernel.org> wrote:
>
> [...]
>
> I think the moral of the story is that you don't really want to use
> classic POSIX locks in anything that involves locking between different
> threads, as their design just doesn't mesh well with that model.
>
> We did try to convey that in the fcntl manpage in the lead-in to the
> OFD locks section:
>
>   * The threads in a process share locks. In other words, a
>     multithreaded program can't use record locking to ensure that
>     threads don't simultaneously access the same region of a file.
>
> Maybe we need to revise that to be clearer? Or possibly add something
> that points out that this can also manifest as false positives in
> deadlock detection?
>
> --
> Jeff Layton <jlayton@kernel.org>
Yes, I see the moral of this story the same way you do.
I read that passage in the fcntl manpage earlier, when I was thinking
about the best way to solve my problem. In my opinion, it describes a
different problem: it says that record locking will not help with
races between threads, because record locking is done at the process
level. Okay, I thought, I have a mutex for that purpose. But in my
humble opinion, it would be great if the manpage said that record
locks not only "don't protect threads from each other" but also "don't
pair well with threading at all, because they can interfere with
threading in counter-intuitive ways". So yes, I fully support the idea
of editing the manpage. But I personally have zero experience with
manpage documentation, so I'd prefer that someone else did it, if
possible.
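Just to illustrate the pattern I had in mind (a rough sketch, not code
from my application, and with error handling omitted): a mutex keeps
the threads of one process out of each other's way, and the file lock
keeps the processes apart:

#include <pthread.h>
#include <sys/file.h>

static pthread_mutex_t intra_process_lock = PTHREAD_MUTEX_INITIALIZER;

/* Sketch: threads of this process serialize on the mutex; processes
 * serialize on the file lock.  The file lock is never relied on to
 * arbitrate between threads of the same process. */
static void with_file_locked(int fd, void (*critical_section)(void))
{
    pthread_mutex_lock(&intra_process_lock);
    flock(fd, LOCK_EX);        /* against other processes */
    critical_section();
    flock(fd, LOCK_UN);
    pthread_mutex_unlock(&intra_process_lock);
}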
Best regards,
Ivan