Linux NFS development
* Parallel shared to exclusive flock conversion blocks forever on single NFS client
@ 2025-03-12 21:57 Tycho Kirchner
  2025-03-12 22:57 ` Trond Myklebust
  0 siblings, 1 reply; 3+ messages in thread
From: Tycho Kirchner @ 2025-03-12 21:57 UTC (permalink / raw)
  To: linux-nfs

Dear NFS kernel developers,
In `man 2 flock` it is documented that an existing lock can be
converted to a new lock mode. However, multiple processes on the
*same* client converting their LOCK_SH to LOCK_EX quickly results in
a deadlock of the client processes. This can already be reproduced on
a single physical machine, for instance with the NFS server running
in a VM and the host machine connecting to it as a client.

Steps to reproduce:
- Set up a virtual machine with VirtualBox and install an NFS server
- Create an /etc/exports entry: /home/VMUSER/nfs  10.0.2.2(rw,async)
- Create a NAT firewall rule forwarding NFS port 2049 to the VM
- Mount the export on the host, chdir into it and create an empty file:
   $ sudo mount -t nfs 127.0.0.1:/home/VMUSER/nfs  /somedir
   $ cd /somedir
   $ touch foo
- Execute the attached ~/locktest.py in parallel on the client:
   $ for i in {1..10}; do ~/locktest.py foo & done; wait
- Wait half a minute. The command does not terminate. Ever.
- Abort execution with Ctrl+C and kill leftovers: pkill -f locktest.py

Notes:
- According to my tests, the hang occurs quickly once three or more
client processes run concurrently.
- Placing a `fcntl.flock(a, fcntl.LOCK_UN)` before the fcntl.LOCK_EX
call is enough to avoid the deadlock entirely (see the sketch after
these notes).
- OR'ing `| fcntl.LOCK_NB` quickly results in endless »BlockingIOError«
exceptions with no client process making any progress. See also the
attached ~/locktest_NB.py.
- Multiple distributions, kernel versions and combinations were tested,
e.g. NFS client kernel 6.6.67 on Debian 12, kernel 6.12.17-amd64 on
Debian testing, and kernel 6.4.0-150600.23.38-default on openSUSE
Leap 15.6. The error was always and quickly reproducible.
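
For illustration, the workaround variant differs from the attached
locktest.py only in the conversion step (a sketch, not one of the
attached scripts):

   # 'a' is the opened file, as in locktest.py
   fcntl.flock(a, fcntl.LOCK_SH)
   fcntl.flock(a, fcntl.LOCK_UN)  # drop the shared lock explicitly ...
   fcntl.flock(a, fcntl.LOCK_EX)  # ... then acquire the exclusive lock fresh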

Kind regards
Tycho

###___ ~/locktest.py ___###

#!/usr/bin/env python3
import fcntl
import sys
import time

a = open(sys.argv[1], 'r+')
fcntl.flock(a, fcntl.LOCK_SH)  # acquire a shared lock ...
fcntl.flock(a, fcntl.LOCK_EX)  # ... then convert it to an exclusive lock
time.sleep(1)                  # hold the lock briefly before exiting

___________________________


###___ ~/locktest_NB.py ___###

#!/usr/bin/env python3
import fcntl
import sys
import time

def lock_nb(lockfile, l_mode):
    # Retry the non-blocking flock() for up to 20 seconds.
    for i in range(20):
        try:
            fcntl.flock(lockfile.fileno(), l_mode | fcntl.LOCK_NB)
            return
        except BlockingIOError:
            time.sleep(1)
    print("gave up waiting for lock...", file=sys.stderr)

a = open(sys.argv[1], 'r+')
lock_nb(a, fcntl.LOCK_SH)  # acquire a shared lock ...
lock_nb(a, fcntl.LOCK_EX)  # ... then try to convert it to an exclusive lock
time.sleep(1)




^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: Parallel shared to exclusive flock conversion blocks forever on single NFS client
  2025-03-12 21:57 Parallel shared to exclusive flock conversion blocks forever on single NFS client Tycho Kirchner
@ 2025-03-12 22:57 ` Trond Myklebust
  2025-03-13  9:37   ` Tycho Kirchner
  0 siblings, 1 reply; 3+ messages in thread
From: Trond Myklebust @ 2025-03-12 22:57 UTC (permalink / raw)
  To: linux-nfs@vger.kernel.org, tychokirchner@mail.de

On Wed, 2025-03-12 at 22:57 +0100, Tycho Kirchner wrote:
> Dear NFS kernel developers,
> In `man 2 flock` it is documented that an existing lock can be
> converted to a new lock mode. However, multiple processes on the
> *same* client converting their LOCK_SH to LOCK_EX quickly results in
> a deadlock of the client processes. This can already be reproduced on
> a single physical machine, for instance with the NFS server running
> in a VM and the host machine connecting to it as a client.
> 
> Steps to reproduce:
> - Set up a virtual machine with VirtualBox and install an NFS server
> - Create an /etc/exports entry: /home/VMUSER/nfs  10.0.2.2(rw,async)
> - Create a NAT firewall rule forwarding NFS port 2049 to the VM
> - Mount the export on the host, chdir into it and create an empty file:
>    $ sudo mount -t nfs 127.0.0.1:/home/VMUSER/nfs  /somedir
>    $ cd /somedir
>    $ touch foo
> - Execute the attached ~/locktest.py in parallel on the client:
>    $ for i in {1..10}; do ~/locktest.py foo & done; wait
> - Wait half a minute. The command does not terminate. Ever.
> - Abort execution with Ctrl+C and kill leftovers: pkill -f
> locktest.py
> 
> Notes:
> - According to my tests, the hang occurs quickly once three or more
> client processes run concurrently.
> - Placing a `fcntl.flock(a, fcntl.LOCK_UN)` before the fcntl.LOCK_EX
> call is enough to avoid the deadlock entirely.
> - OR'ing `| fcntl.LOCK_NB` quickly results in endless »BlockingIOError«
> exceptions with no client process making any progress. See also the
> attached ~/locktest_NB.py.
> - Multiple distributions, kernel versions and combinations were tested,
> e.g. NFS client kernel 6.6.67 on Debian 12, kernel 6.12.17-amd64 on
> Debian testing, and kernel 6.4.0-150600.23.38-default on openSUSE
> Leap 15.6. The error was always and quickly reproducible.
> 

The same manpage also states:

       Converting a lock (shared to exclusive, or vice versa) is not guaranteed
       to be atomic: the existing lock is first removed, and then a new lock is
       established.  Between these two steps, a pending lock request by another
       process may be granted, with  the  result  that  the  conversion  either
       blocks,  or  fails  if LOCK_NB was specified.  (This is the original BSD
       behavior, and occurs on many other implementations.)

so there is no harm in adding the LOCK_UN because you cannot expect
atomicity.
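
In other words, as far as the documented semantics are concerned, the
conversion in locktest.py may be treated as if it were written like
this (just an illustration of the manpage text above, not of any
particular implementation):

   # 'a' is the file object opened in locktest.py
   fcntl.flock(a, fcntl.LOCK_SH)
   # the conversion below behaves like an unlock followed by a fresh
   # lock request, so another process may be granted the lock (or the
   # conversion may block, or fail with LOCK_NB) in between
   fcntl.flock(a, fcntl.LOCK_EX)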

Cheers
  Trond
-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com



^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: Parallel shared to exclusive flock conversion blocks forever on single NFS client
  2025-03-12 22:57 ` Trond Myklebust
@ 2025-03-13  9:37   ` Tycho Kirchner
  0 siblings, 0 replies; 3+ messages in thread
From: Tycho Kirchner @ 2025-03-13  9:37 UTC (permalink / raw)
  To: Trond Myklebust, linux-nfs@vger.kernel.org



On 12.03.25 23:57, Trond Myklebust wrote:
> On Wed, 2025-03-12 at 22:57 +0100, Tycho Kirchner wrote:
>> Dear NFS kernel developers,
>> In `man 2 flock` it is documented that an existing lock can be
>> converted to a new lock mode. However, multiple processes on the
>> *same* client converting their LOCK_SH to LOCK_EX quickly results in
>> a deadlock of the client processes. This can already be reproduced on
>> a single physical machine, for instance with the NFS server running
>> in a VM and the host machine connecting to it as a client.
>>
>> Steps to reproduce:
>> - Set up a virtual machine with VirtualBox and install an NFS server
>> - Create an /etc/exports entry: /home/VMUSER/nfs  10.0.2.2(rw,async)
>> - Create a NAT firewall rule forwarding NFS port 2049 to the VM
>> - Mount the export on the host, chdir into it and create an empty file:
>>     $ sudo mount -t nfs 127.0.0.1:/home/VMUSER/nfs  /somedir
>>     $ cd /somedir
>>     $ touch foo
>> - Execute the attached ~/locktest.py in parallel on the client:
>>     $ for i in {1..10}; do ~/locktest.py foo & done; wait
>> - Wait half a minute. The command does not terminate. Ever.
>> - Abort execution with Ctrl+C and kill leftovers: pkill -f
>> locktest.py
>>
>> Notes:
>> - According to my tests, the hang occurs quickly once three or more
>> client processes run concurrently.
>> - Placing a `fcntl.flock(a, fcntl.LOCK_UN)` before the fcntl.LOCK_EX
>> call is enough to avoid the deadlock entirely.
>> - OR'ing `| fcntl.LOCK_NB` quickly results in endless »BlockingIOError«
>> exceptions with no client process making any progress. See also the
>> attached ~/locktest_NB.py.
>> - Multiple distributions, kernel versions and combinations were tested,
>> e.g. NFS client kernel 6.6.67 on Debian 12, kernel 6.12.17-amd64 on
>> Debian testing, and kernel 6.4.0-150600.23.38-default on openSUSE
>> Leap 15.6. The error was always and quickly reproducible.
>>
> 
> The same manpage also states:
> 
>         Converting a lock (shared to exclusive, or vice versa) is not guaranteed
>         to be atomic: the existing lock is first removed, and then a new lock is
>         established.  Between these two steps, a pending lock request by another
>         process may be granted, with  the  result  that  the  conversion  either
>         blocks,  or  fails  if LOCK_NB was specified.  (This is the original BSD
>         behavior, and occurs on many other implementations.)
> 
> so there is no harm in adding the LOCK_UN because you cannot expect
> atomicity.

Thanks for the response, Trond. I also read this part of the manpage,
but fail to understand why it would justify a deadlock scenario with
the commands I described. On the contrary, in my understanding, the
lack of atomicity actually makes it feasible for an implementation to
avoid the deadlock. Here's how:

Process A         Process B         Comment
LOCK_SH granted   _not_started_
…                 LOCK_SH granted
LOCK_EX blocking  …                 A removes SH-lock and waits for B
…                 LOCK_EX granted   granted since A removed SH-lock
…                 LOCK_UN
LOCK_EX granted


However, I think the NFS implementation incorrectly does _not_ remove
the initial shared lock of A. As a result, the processes deadlock in
the following way:

Process A         Process B         Comment
LOCK_SH granted   _not_started_
…                 LOCK_SH granted
LOCK_EX blocking  …                 A keeps SH-lock and waits for B
…                 LOCK_EX blocking  B keeps SH-lock and waits for A
DEADLOCK          DEADLOCK


This deadlock is unnecessary, and I think the NFS implementation of
flock conversions (or fcntl.F_SETLK) should be fixed.
Thanks, Tycho

^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2025-03-13  9:37 UTC | newest]

Thread overview: 3+ messages
2025-03-12 21:57 Parallel shared to exclusive flock conversion blocks forever on single NFS client Tycho Kirchner
2025-03-12 22:57 ` Trond Myklebust
2025-03-13  9:37   ` Tycho Kirchner
