From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Al Viro, Christian Brauner, Sasha Levin, linux-fsdevel@vger.kernel.org
Subject: [PATCH AUTOSEL 6.6 05/73] fast_dput(): handle underflows gracefully
Date: Mon, 22 Jan 2024 10:01:19 -0500
Message-ID: <20240122150432.992458-5-sashal@kernel.org>
In-Reply-To: <20240122150432.992458-1-sashal@kernel.org>
References: <20240122150432.992458-1-sashal@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-stable: review
X-stable-base: Linux 6.6.13

From: Al Viro

[ Upstream commit 504e08cebe1d4e1efe25f915234f646e74a364a8 ]

If refcount is less than 1, we should just warn, unlock dentry and
return true, so that the caller doesn't try to do anything else.

Taking care of that leaves the rest of "lockref_put_return() has
failed" case equivalent to "decrement refcount and rejoin the normal
slow path after the point where we grab ->d_lock".

NOTE: lockref_put_return() is strictly a fastpath thing - unlike the
rest of lockref primitives, it does not contain a fallback.  Caller
(and it looks like fast_dput() is the only legitimate one in the
entire kernel) has to do that itself.  Reasons for
lockref_put_return() failures:
	* ->d_lock held by somebody
	* refcount <= 0
	* ... or an architecture not supporting lockref use of cmpxchg -
	  sparc, anything non-SMP, config with spinlock debugging...
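As a rough sketch of what that caller-side fallback has to look like, here
is a minimal userspace C model of the pattern this patch gives fast_dput().
The names below (model_lockref, model_put_return, model_fast_put) are made
up for illustration, and a pthread mutex plus a plain int stand in for
->d_lock and ->d_lockref.count; the real code lives in lib/lockref.c and
fs/dcache.c:

/*
 * Userspace model only: nothing here is the real kernel code.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct model_lockref {
	pthread_mutex_t lock;		/* stands in for ->d_lock */
	int count;			/* stands in for ->d_lockref.count */
};

/*
 * Stand-in for lockref_put_return(): returns the new count on success,
 * or -1 when the cmpxchg-based fast path is unavailable or contended.
 * Like the real primitive, it has no fallback - the caller handles -1.
 */
static int model_put_return(struct model_lockref *ref, bool fastpath_ok)
{
	if (!fastpath_ok)
		return -1;
	return --ref->count;		/* "atomic" only in this toy model */
}

/*
 * The shape fast_dput() has after this patch: returns true if the caller
 * is done, false if it must continue on the slow path with the lock held.
 */
static bool model_fast_put(struct model_lockref *ref, bool fastpath_ok)
{
	int ret = model_put_return(ref, fastpath_ok);

	if (ret < 0) {				/* fast path refused */
		pthread_mutex_lock(&ref->lock);
		if (ref->count <= 0) {		/* underflow: warn and bail */
			fprintf(stderr, "underflow, count=%d\n", ref->count);
			pthread_mutex_unlock(&ref->lock);
			return true;
		}
		ref->count--;			/* decrement by hand ... */
		goto locked;			/* ... and rejoin slow path */
	}
	if (ret)
		return true;			/* we weren't the last ref */

	pthread_mutex_lock(&ref->lock);
locked:
	if (ref->count) {
		pthread_mutex_unlock(&ref->lock);
		return true;
	}
	return false;				/* lock held, caller finishes */
}

int main(void)
{
	struct model_lockref ref = { PTHREAD_MUTEX_INITIALIZER, 2 };

	for (int i = 0; i < 2; i++) {
		bool done = model_fast_put(&ref, false);

		printf("done=%d count=%d\n", done, ref.count);
		if (!done)			/* lock is still held here */
			pthread_mutex_unlock(&ref.lock);
	}
	return 0;
}

The only point is the shape of the failure branch: when the fast path
refuses, the caller takes the lock itself, checks for underflow, does the
decrement by hand and rejoins the slow path with the lock held.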
We could add a fallback, but it would be a clumsy API - we'd have to
distinguish between:
	(1) refcount > 1 - decremented, lock not held on return
	(2) refcount < 1 - left alone, probably no sense to hold the lock
	(3) refcount is 1, no cmpxchg - decremented, lock held on return
	(4) refcount is 1, cmpxchg supported - decremented, lock *NOT* held
	    on return.

We want to return with no lock held in case (4); that's the whole point
of that thing.  We very much do not want to have the fallback in case (3)
return without a lock, since the caller might have to retake it in that
case.  So it wouldn't be more convenient than doing the fallback in the
caller and it would be very easy to screw up, especially since the test
coverage would suck - no way to test (3) and (4) on the same kernel build.

Reviewed-by: Christian Brauner
Signed-off-by: Al Viro
Signed-off-by: Sasha Levin
---
 fs/dcache.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/fs/dcache.c b/fs/dcache.c
index 25ac74d30bff..9ae808fba517 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -787,12 +787,12 @@ static inline bool fast_dput(struct dentry *dentry)
 	 */
 	if (unlikely(ret < 0)) {
 		spin_lock(&dentry->d_lock);
-		if (dentry->d_lockref.count > 1) {
-			dentry->d_lockref.count--;
+		if (WARN_ON_ONCE(dentry->d_lockref.count <= 0)) {
 			spin_unlock(&dentry->d_lock);
 			return true;
 		}
-		return false;
+		dentry->d_lockref.count--;
+		goto locked;
 	}
 
 	/*
@@ -850,6 +850,7 @@ static inline bool fast_dput(struct dentry *dentry)
 	 * else could have killed it and marked it dead. Either way, we
 	 * don't need to do anything else.
 	 */
+locked:
 	if (dentry->d_lockref.count) {
 		spin_unlock(&dentry->d_lock);
 		return true;
-- 
2.43.0
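For reference, this is roughly how the lines touched by the two hunks read
once the patch is applied - reconstructed from the hunk context above, with
the rest of fast_dput() elided and the comments added here as annotation
rather than taken from the file:

	if (unlikely(ret < 0)) {
		spin_lock(&dentry->d_lock);
		if (WARN_ON_ONCE(dentry->d_lockref.count <= 0)) {
			/* underflow: warn, unlock, tell the caller we are done */
			spin_unlock(&dentry->d_lock);
			return true;
		}
		/* decrement by hand and rejoin the slow path under ->d_lock */
		dentry->d_lockref.count--;
		goto locked;
	}

	/* ... rest of fast_dput() unchanged ... */

locked:
	if (dentry->d_lockref.count) {
		spin_unlock(&dentry->d_lock);
		return true;
	}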