From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin 
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Al Viro , Christian Brauner , Sasha Levin ,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH AUTOSEL 5.4 04/24] fast_dput(): handle underflows gracefully
Date: Mon, 22 Jan 2024 10:16:18 -0500
Message-ID: <20240122151659.997085-4-sashal@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240122151659.997085-1-sashal@kernel.org>
References: <20240122151659.997085-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-stable-base: Linux 5.4.267
Content-Transfer-Encoding: 8bit

From: Al Viro 

[ Upstream commit 504e08cebe1d4e1efe25f915234f646e74a364a8 ]

If refcount is less than 1, we should just warn, unlock dentry and
return true, so that the caller doesn't try to do anything else.

Taking care of that leaves the rest of the "lockref_put_return() has
failed" case equivalent to "decrement refcount and rejoin the normal
slow path after the point where we grab ->d_lock".

NOTE: lockref_put_return() is strictly a fastpath thing - unlike the
rest of the lockref primitives, it does not contain a fallback.  The
caller (and it looks like fast_dput() is the only legitimate one in
the entire kernel) has to do that itself.  Reasons for
lockref_put_return() failures:
	* ->d_lock held by somebody
	* refcount <= 0
	* ...
	  or an architecture not supporting lockref use of cmpxchg -
	  sparc, anything non-SMP, config with spinlock debugging...

We could add a fallback, but it would be a clumsy API - we'd have to
distinguish between:
	(1) refcount > 1 - decremented, lock not held on return
	(2) refcount < 1 - left alone, probably no sense to hold the lock
	(3) refcount is 1, no cmpxchg - decremented, lock held on return
	(4) refcount is 1, cmpxchg supported - decremented, lock *NOT*
	    held on return.
We want to return with no lock held in case (4); that's the whole
point of that thing.  We very much do not want to have the fallback
in case (3) return without a lock, since the caller might have to
retake it in that case.  So it wouldn't be more convenient than doing
the fallback in the caller, and it would be very easy to screw up,
especially since the test coverage would suck - there is no way to
test (3) and (4) on the same kernel build.

Reviewed-by: Christian Brauner 
Signed-off-by: Al Viro 
Signed-off-by: Sasha Levin 
---
 fs/dcache.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/fs/dcache.c b/fs/dcache.c
index b2a7f1765f0b..43864a276faa 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -740,12 +740,12 @@ static inline bool fast_dput(struct dentry *dentry)
 	 */
 	if (unlikely(ret < 0)) {
 		spin_lock(&dentry->d_lock);
-		if (dentry->d_lockref.count > 1) {
-			dentry->d_lockref.count--;
+		if (WARN_ON_ONCE(dentry->d_lockref.count <= 0)) {
 			spin_unlock(&dentry->d_lock);
 			return true;
 		}
-		return false;
+		dentry->d_lockref.count--;
+		goto locked;
 	}
 
 	/*
@@ -796,6 +796,7 @@ static inline bool fast_dput(struct dentry *dentry)
	 * else could have killed it and marked it dead. Either way, we
	 * don't need to do anything else.
	 */
+locked:
 	if (dentry->d_lockref.count) {
 		spin_unlock(&dentry->d_lock);
 		return true;
-- 
2.43.0
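
For readers following along outside the kernel tree, the control flow the
patch ends up with can be modelled in plain userspace C.  The sketch below
only illustrates the caller-side fallback pattern the commit message
describes: the struct, the function names and the mutex/trylock stand-ins
are invented for this example and are not the kernel's lockref or dcache
APIs, and the "fast path" here is just a trylock rather than a lockless
cmpxchg.

/*
 * Rough userspace model of the pattern above: a fastpath-only "put"
 * with no fallback of its own, and a caller that takes the lock,
 * warns on underflow, and otherwise rejoins the locked slow path.
 * All names (obj, obj_put_return, obj_put) are illustrative only.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct obj {
	pthread_mutex_t lock;	/* stands in for ->d_lock */
	int count;		/* stands in for ->d_lockref.count */
};

/*
 * Fastpath-only decrement: returns the new count, or -1 if it cannot
 * make progress (lock held by somebody, or refcount <= 0).  Like
 * lockref_put_return(), it provides no fallback of its own.
 */
static int obj_put_return(struct obj *o)
{
	if (pthread_mutex_trylock(&o->lock) != 0)
		return -1;			/* lock held by somebody */
	if (o->count <= 0) {
		pthread_mutex_unlock(&o->lock);
		return -1;			/* refcount <= 0 */
	}
	int ret = --o->count;
	pthread_mutex_unlock(&o->lock);
	return ret;
}

/*
 * Caller-side handling, mirroring the shape of the patched fast_dput():
 * on fastpath failure, take the lock, warn and bail out on underflow,
 * otherwise decrement and fall through to the common "locked" checks.
 * Returns true if the caller is done, false if the last reference was
 * dropped.  (The real fast_dput() returns false with ->d_lock still
 * held; the unlock here is a simplification.)
 */
static bool obj_put(struct obj *o)
{
	int ret = obj_put_return(o);

	if (ret < 0) {
		pthread_mutex_lock(&o->lock);
		if (o->count <= 0) {		/* underflow: warn, do nothing else */
			fprintf(stderr, "warning: refcount underflow\n");
			pthread_mutex_unlock(&o->lock);
			return true;
		}
		o->count--;
		goto locked;
	}
	if (ret)
		return true;			/* dropped one of many references */
	pthread_mutex_lock(&o->lock);		/* last ref: recheck under the lock */
locked:
	if (o->count) {
		pthread_mutex_unlock(&o->lock);
		return true;			/* somebody re-acquired it meanwhile */
	}
	pthread_mutex_unlock(&o->lock);
	return false;				/* really the last reference */
}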