From mboxrd@z Thu Jan 1 00:00:00 1970
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
From: NeilBrown
To: "Tj", Jeff Layton
Cc: 1128861@bugs.debian.org, linux-nfs@vger.kernel.org, "Olga Kornievskaia", stable@vger.kernel.org
Subject: Re: Regression: Missing check in nfsd_permission() causes -ENOLCK No locks available
Date: Thu, 05 Mar 2026 10:03:21 +1100
Message-id: <177266540127.7472.3460090956713656639@noble.neil.brown.name>
Reply-To: NeilBrown

On Tue, 24 Feb 2026, Tj wrote:
> Upstream commit 4cc9b9f2bf4dfe13fe573 "nfsd: refine and rename
> NFSD_MAY_LOCK" and stable v6.12.54 commit 18744bc56b0ec (re)move
> checks from fs/nfsd/vfs.c::nfsd_permission().
>
> This causes NFS clients to see:
>
> $ flock -e -w 4 /srv/NAS/test/debian-13.3.0-amd64-netinst.iso sleep 1
> flock: /srv/NAS/test/debian-13.3.0-amd64-netinst.iso: No locks available
>
> Keeping the check in nfsd_permission() whilst also copying it to
> fs/nfsd/nfsfh.c::__fh_verify() resolves the issue.
>
> This was discovered on the Debian openQA infrastructure server when
> upgrading the kernel from v6.12.48 to a later v6.12.y, where worker
> hosts (with any earlier or later kernel version) pass NFSv3-mounted ISO
> images to qemu-system-x86_64 and it reports:
>
> !!! : qemu-system-x86_64: -device
> scsi-cd,id=cd0-device,drive=cd0-overlay0,serial=cd0: Failed to get
> "consistent read" lock: No locks available
> QEMU: Is another process using the image
> [/var/lib/openqa/pool/2/20260223-1-debian-testing-amd64-netinst.iso]?
>
> A simple reproducer with the server using:
>
> # cat /etc/exports.d/test.exports
> /srv/NAS/test
> fdff::/64(fsid=0,rw,no_root_squash,sync,no_subtree_check,auth_nlm)
>
> and clients using:
>
> # mount -t nfs [fdff::2]:/srv/NAS/test /srv/NAS/test -o
> proto=tcp6,ro,fsc,soft

Linux has two quite different sorts of locks: flock and fcntl.  flock
locks the whole file, shared or exclusive.  fcntl can lock any byte
range (including the whole file), shared or exclusive.  flock and fcntl
locks don't conflict with each other.

Exclusive flock locks only require read access to the file; exclusive
fcntl locks require write access to the file.

The NLM protocol only supports one type of byte-range lock, and it is
natural to map fcntl locks onto NLM locks.  The early Linux NFS
implementation handled flock locks entirely locally, so different
clients didn't conflict.  This could be confusing, but it was widely
documented and understood.

Some years ago Linux NFS was enhanced to handle flock locks like
whole-file fcntl locks.  This means that clients holding flock locks
would conflict with each other (maybe good) but also that flock locks
and fcntl locks would now conflict (maybe bad).  You can still get the
old behaviour with "-o local_lock=flock".

So if you open a file on NFS read-only and attempt an exclusive flock,
that will be sent to the server as a full-range fcntl write lock, which
requires write access.  If the server finds you don't have write
access, you lose.
It would seem to make sense to tell qemu that the device is read-only.
Then it will hopefully only request a shared lock.  Can you try that?

Note that even before my patch, if the filesystem was exported read-only
or mounted read-only on the server, then exclusive flock locks would
fail.

I think that the current behaviour is correct; however I do understand
that it is a regression, and maybe that justifies incorrect behaviour.
Maybe Jeff, as locking maintainer, would be willing to do something like

diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
index dd0214dcb695..6c674fc51bab 100644
--- a/fs/lockd/svcsubs.c
+++ b/fs/lockd/svcsubs.c
@@ -73,6 +73,14 @@ static inline unsigned int file_hash(struct nfs_fh *f)
 
 int lock_to_openmode(struct file_lock *lock)
 {
+	/*
+	 * flock only requires READ access and to support
+	 * clients which send flock locks via NLM we
+	 * report O_RDONLY for full-file locks.
+	 */
+	if (lock->fl_start == 0 &&
+	    lock->fl_end == NLM4_OFFSET_MAX)
+		return O_RDONLY;
 	return lock_is_write(lock) ? O_WRONLY : O_RDONLY;
 }

But I wouldn't encourage him to.

NeilBrown

>
> will trigger the error as shown above:
>
> $ flock -e -w 4 /srv/NAS/test/debian-13.3.0-amd64-netinst.iso sleep 1
> flock: /srv/NAS/test/debian-13.3.0-amd64-netinst.iso: No locks available
>
> A simple test program calling fcntl() with the same arguments QEMU uses
> also fails in the same way.
>
> $ ./nfs3_range_lock_test
> /srv/NAS/test/debian-13.3.0-amd64-netinst.{iso,overlay}
> Opened base file: /srv/NAS/test/debian-13.3.0-amd64-netinst.iso
> Opened overlay file: /srv/NAS/test/debian-13.3.0-amd64-netinst.overlay
> Attempting lock at 4 on /srv/NAS/test/debian-13.3.0-amd64-netinst.iso
> fcntl(fd, F_GETLK, &fl) failed on base: No locks available
> Attempting lock at 8 on /srv/NAS/test/debian-13.3.0-amd64-netinst.overlay
> fcntl(fd, F_GETLK, &fl) failed on overlay: No locks available