Date: Sat, 21 Mar 2026 12:45:27 -0700
From: Boqun Feng
To: "Paul E. McKenney"
Cc: Joel Fernandes, Kumar Kartikeya Dwivedi, Sebastian Andrzej Siewior,
 frederic@kernel.org, neeraj.iitr10@gmail.com, urezki@gmail.com,
 boqun.feng@gmail.com, rcu@vger.kernel.org, Tejun Heo, bpf@vger.kernel.org,
 Alexei Starovoitov, Daniel Borkmann, John Fastabend, Andrea Righi, Zqiang
Subject: Re: [PATCH] rcu: Use an intermediate irq_work to start process_srcu()
References: <2d9e7e42-8667-4880-9708-b81a82443809@nvidia.com>
 <20260320181400.15909-1-boqun@kernel.org>
 <492ba226-79c7-4345-b691-eb775082b799@paulmck-laptop>
 <609b5df1-aa06-46a9-8e93-0bf9eb8b7738@paulmck-laptop>
In-Reply-To: <609b5df1-aa06-46a9-8e93-0bf9eb8b7738@paulmck-laptop>
X-Mailing-List: bpf@vger.kernel.org

On Sat, Mar 21, 2026 at 12:31:04PM -0700, Paul E. McKenney wrote:
> On Sat, Mar 21, 2026 at 11:06:59AM -0700, Boqun Feng wrote:
> > On Sat, Mar 21, 2026 at 10:41:47AM -0700, Paul E. McKenney wrote:
> > [...]
> > > > > +	raw_spin_lock_rcu_node(ssp->srcu_sup);
> > > > > +	delay = srcu_get_delay(ssp);
> > > > > +	raw_spin_unlock_rcu_node(ssp->srcu_sup);
> > > > >
> > > >
> > > > It was fixed differently in v2:
> > > >
> > > > https://lore.kernel.org/rcu/20260320222916.19987-1-boqun@kernel.org/
> > > >
> > > > I used _irqsave/_irqrestore just in case. Given it's an urgent fix,
> > > > overly careful code is probably fine ;-)
> > > >
> > > > Thanks for the testing and feedback.
> > >
> > > OK, I will try that one, thank you!
> > >
> > > FYI, with my change on your earlier version, SRCU-T got deadlocks
> > > between the pi-lock and the workqueue pool lock. Which might or might
> > > not be particularly urgent.
> > >
> >
> > I just checked my run yesterday, I also hit it. It's probably what
> > Zqiang has found:
> >
> > https://lore.kernel.org/rcu/4c23c66f86a2aff8f2d7b759f9dd257b82147a17@linux.dev/
> >
> > We have a queue_work_on() in srcu_schedule_cbs_sdp(), so
> >
> > 	srcu_torture_deferred_free():
> > 	  raw_spin_lock_irqsave(->pi_lock, ...);
> > 	  call_srcu():
> > 	    if (snp == snp_leaf && snp_seq != s) {
> > 	      srcu_schedule_cbs_sdp(sdp, do_norm ? SRCU_INTERVAL : 0):
> > 	        if (!delay)
> > 	          queue_work_on(...)
> >
> > I was about to reply to Zqiang; fixing that could be a tough design
> > decision, since it's a per-srcu_data work ;-) NR_CPUS x irq_work
> > incoming.
>
> Just to be clear, SRCU-T is Tiny SRCU rather than Tree SRCU. So perhaps
> lower priority, though perhaps not lower irritation. ;-)
>

I see, there is a schedule_work() in srcutiny's srcu_gp_start_if_needed(),
but it couldn't cause a deadlock on UP since the locks are (almost) no-ops.
Maybe we can make rcutorture only test it on SMP? (A rough sketch of the
lock nesting in the call chain above follows at the end of this mail.)

Regards,
Boqun

> Thanx, Paul
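
---

A minimal sketch of the lock nesting discussed above, assuming a caller that
holds a raw spinlock across call_srcu(). The dummy_obj, dummy_srcu,
dummy_free_cb and dummy_deferred_free names are made up for illustration;
call_srcu(), raw_spin_lock_irqsave() and the queue_work_on() reached inside
srcu_schedule_cbs_sdp() are the real kernel APIs involved:

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/srcu.h>
#include <linux/workqueue.h>

struct dummy_obj {
	raw_spinlock_t lock;	/* stands in for the ->pi_lock above (lock A) */
	struct rcu_head rh;
};

DEFINE_STATIC_SRCU(dummy_srcu);

static void dummy_free_cb(struct rcu_head *rhp)
{
	kfree(container_of(rhp, struct dummy_obj, rh));
}

static void dummy_deferred_free(struct dummy_obj *obj)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&obj->lock, flags);
	/*
	 * With lock A held, call_srcu() may reach
	 * srcu_schedule_cbs_sdp() -> queue_work_on(), which acquires the
	 * workqueue pool lock (lock B), establishing the ordering A -> B.
	 * The workqueue side wakes workers (taking a task's pi_lock) while
	 * holding the pool lock, i.e. B -> pi_lock, so when A is the
	 * pi_lock the two orderings close the reported deadlock cycle.
	 */
	call_srcu(&dummy_srcu, &obj->rh, dummy_free_cb);
	raw_spin_unlock_irqrestore(&obj->lock, flags);
}

On UP, Tiny SRCU's lock operations compile down to (almost) nothing, which
is why the same pattern does not deadlock there, matching the suggestion
above to restrict that rcutorture scenario to SMP.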