From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id CC2B93594D;
	Thu, 17 Apr 2025 18:18:39 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201
ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1744913919; cv=none;
	b=dRqa+9fb3e93KqNEXACXZ2E9HU9NGvWE2ill8NuTt/KpqS0ijrGDqBTUdpGMdPTKYH2RDu6q7r8YsFsu7q3cTXC04pSbi4N3f/oRGKNrQ5yuqW2fspOtQm7HE5FTx+IGhViSsQCGvBbiKfWGr0F0URnRp8J2frhx0b3S5ueGuMg=
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1744913919; c=relaxed/simple;
	bh=OreKoswsgRh0tRYPp78h7Q54/sxidBFbdVUbtLms3PI=;
	h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References:MIME-Version;
	b=MFOZHtaIEPrBX5UaeE7AB90lAhW19qC/eTIcWVEL93IyN4C81E5ySYJszc3etOiyBSVbWf4J0BZiazgqLV3OGvkFLBTxt4h+3Ct9R+itQ2qltECqyWt32gQ5TPprJUilFwD8H6+4L+eMsYs0GhRWs/hI2ePEdXddF/cNaeNJUNA=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org;
	dkim=pass (1024-bit key) header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b=qyvMLo0H;
	arc=none smtp.client-ip=10.30.226.201
Authentication-Results: smtp.subspace.kernel.org;
	dkim=pass (1024-bit key) header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b="qyvMLo0H"
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3EF85C4CEE4;
	Thu, 17 Apr 2025 18:18:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1744913919;
	bh=OreKoswsgRh0tRYPp78h7Q54/sxidBFbdVUbtLms3PI=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=qyvMLo0HLMPll6WNgrvZT3ufPHo5cQfSDY59c+RDFkrS5XE5JA3rz8hzJHMFbEgW9
	 462MFfw9azwzSErR/5Q8lFmSsm8R/P+QGXEeLZOcy17PAXHtYK7UvsO4PxG8JY6VZO
	 HDPBz4ZNZQ2TJS4H5DVIh71lBTQc9ol0Eor0l0w4=
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman,
	patches@lists.linux.dev,
	syzbot+16a19b06125a2963eaee@syzkaller.appspotmail.com,
	"Paul E. McKenney",
	Alexei Starovoitov,
	Andrii Nakryiko,
	Peter Zijlstra,
	Kent Overstreet,
	bpf@vger.kernel.org,
	Boqun Feng,
	Sasha Levin
Subject: [PATCH 6.13 044/414] srcu: Force synchronization for srcu_get_delay()
Date: Thu, 17 Apr 2025 19:46:42 +0200
Message-ID: <20250417175113.186374969@linuxfoundation.org>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250417175111.386381660@linuxfoundation.org>
References: <20250417175111.386381660@linuxfoundation.org>
User-Agent: quilt/0.68
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: bpf@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.13-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Paul E. McKenney

[ Upstream commit d31e31365b5b6c0cdfc74d71be87234ced564395 ]

Currently, srcu_get_delay() can be called concurrently, for example,
by a CPU that is the first to request a new grace period and the CPU
processing the current grace period.  Although concurrent access is
harmless, it unnecessarily expands the state space.  Additionally, all
calls to srcu_get_delay() are from slow paths.

This commit therefore protects all calls to srcu_get_delay() with
ssp->srcu_sup->lock, which is already held on the invocation from the
srcu_funnel_gp_start() function.  While in the area, this commit also
adds a lockdep_assert_held() to srcu_get_delay() itself.
Reported-by: syzbot+16a19b06125a2963eaee@syzkaller.appspotmail.com
Signed-off-by: Paul E. McKenney
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
Signed-off-by: Boqun Feng
Signed-off-by: Sasha Levin
---
 kernel/rcu/srcutree.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 5e2e534647946..c5419e97bd97b 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -647,6 +647,7 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp)
 	unsigned long jbase = SRCU_INTERVAL;
 	struct srcu_usage *sup = ssp->srcu_sup;
 
+	lockdep_assert_held(&ACCESS_PRIVATE(ssp->srcu_sup, lock));
 	if (srcu_gp_is_expedited(ssp))
 		jbase = 0;
 	if (rcu_seq_state(READ_ONCE(sup->srcu_gp_seq))) {
@@ -674,9 +675,13 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp)
 void cleanup_srcu_struct(struct srcu_struct *ssp)
 {
 	int cpu;
+	unsigned long delay;
 	struct srcu_usage *sup = ssp->srcu_sup;
 
-	if (WARN_ON(!srcu_get_delay(ssp)))
+	spin_lock_irq_rcu_node(ssp->srcu_sup);
+	delay = srcu_get_delay(ssp);
+	spin_unlock_irq_rcu_node(ssp->srcu_sup);
+	if (WARN_ON(!delay))
 		return; /* Just leak it! */
 	if (WARN_ON(srcu_readers_active(ssp)))
 		return; /* Just leak it! */
@@ -1102,7 +1107,9 @@ static bool try_check_zero(struct srcu_struct *ssp, int idx, int trycount)
 {
 	unsigned long curdelay;
 
+	spin_lock_irq_rcu_node(ssp->srcu_sup);
 	curdelay = !srcu_get_delay(ssp);
+	spin_unlock_irq_rcu_node(ssp->srcu_sup);
 
 	for (;;) {
 		if (srcu_readers_active_idx_check(ssp, idx))
@@ -1849,7 +1856,9 @@ static void process_srcu(struct work_struct *work)
 	ssp = sup->srcu_ssp;
 
 	srcu_advance_state(ssp);
+	spin_lock_irq_rcu_node(ssp->srcu_sup);
 	curdelay = srcu_get_delay(ssp);
+	spin_unlock_irq_rcu_node(ssp->srcu_sup);
 	if (curdelay) {
 		WRITE_ONCE(sup->reschedule_count, 0);
 	} else {
-- 
2.39.5
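
Every hunk above follows the same pattern: each slow-path caller takes
ssp->srcu_sup->lock around the srcu_get_delay() call, and srcu_get_delay()
itself asserts that the lock is held.  Below is a minimal, self-contained
userspace C sketch of that pattern, not part of the patch and not kernel
code: the names (fake_srcu, fake_get_delay, fake_lock, fake_unlock) are
invented for illustration, pthread_mutex_t stands in for the srcu_usage
->lock, and assert() stands in for lockdep_assert_held().

/* Compile with: cc -pthread sketch.c */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_srcu {
	pthread_mutex_t lock;	/* plays the role of ssp->srcu_sup->lock */
	bool lock_held;		/* poor man's lockdep: set while the lock is owned */
	unsigned long gp_seq;	/* shared grace-period state read under the lock */
};

static void fake_lock(struct fake_srcu *ssp)
{
	pthread_mutex_lock(&ssp->lock);
	ssp->lock_held = true;
}

static void fake_unlock(struct fake_srcu *ssp)
{
	ssp->lock_held = false;
	pthread_mutex_unlock(&ssp->lock);
}

/* Analogue of srcu_get_delay(): the caller must hold the lock. */
static unsigned long fake_get_delay(struct fake_srcu *ssp)
{
	assert(ssp->lock_held);	/* analogue of lockdep_assert_held() */
	return (ssp->gp_seq & 1) ? 0 : 10;
}

int main(void)
{
	struct fake_srcu s = { .lock = PTHREAD_MUTEX_INITIALIZER };
	unsigned long delay;

	/* Slow-path caller: take the lock only around the state read. */
	fake_lock(&s);
	delay = fake_get_delay(&s);
	fake_unlock(&s);

	printf("delay = %lu\n", delay);
	return 0;
}

Calling fake_get_delay() without first calling fake_lock() trips the
assertion, which is the same property the added lockdep_assert_held()
enforces in srcu_get_delay() on lockdep-enabled kernels.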