Date: Sun, 22 Mar 2026 09:16:59 -0700
From: Boqun Feng
To: "Paul E. McKenney"
Cc: Joel Fernandes, Kumar Kartikeya Dwivedi, Sebastian Andrzej Siewior,
	frederic@kernel.org, neeraj.iitr10@gmail.com, urezki@gmail.com,
	boqun.feng@gmail.com, rcu@vger.kernel.org, Tejun Heo, bpf@vger.kernel.org,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend, Andrea Righi, Zqiang
Subject: Re: [PATCH] rcu: Use an intermediate irq_work to start process_srcu()
Message-ID:
References: <20260320181400.15909-1-boqun@kernel.org>
	<492ba226-79c7-4345-b691-eb775082b799@paulmck-laptop>
	<609b5df1-aa06-46a9-8e93-0bf9eb8b7738@paulmck-laptop>
	<4d2b07a9-e3fd-4a95-8924-0839bdfc28b3@paulmck-laptop>
	<3486c7a2-73e6-46f2-a030-c9349ce964dd@paulmck-laptop>
X-Mailing-List: bpf@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <3486c7a2-73e6-46f2-a030-c9349ce964dd@paulmck-laptop>

On Sun, Mar 22, 2026 at 03:09:19AM -0700, Paul E. McKenney wrote:
> On Sat, Mar 21, 2026 at 01:08:59PM -0700, Boqun Feng wrote:
> > On Sat, Mar 21, 2026 at 01:07:45PM -0700, Paul E. McKenney wrote:
> > > On Sat, Mar 21, 2026 at 12:45:27PM -0700, Boqun Feng wrote:
> > > > On Sat, Mar 21, 2026 at 12:31:04PM -0700, Paul E. McKenney wrote:
> > > > > On Sat, Mar 21, 2026 at 11:06:59AM -0700, Boqun Feng wrote:
> > > > > > On Sat, Mar 21, 2026 at 10:41:47AM -0700, Paul E. McKenney wrote:
> > > > > > [...]
> > > > > > > > > +	raw_spin_lock_rcu_node(ssp->srcu_sup);
> > > > > > > > > +	delay = srcu_get_delay(ssp);
> > > > > > > > > +	raw_spin_unlock_rcu_node(ssp->srcu_sup);
> > > > > > > >
> > > > > > > > It was fixed differently in v2:
> > > > > > > >
> > > > > > > > https://lore.kernel.org/rcu/20260320222916.19987-1-boqun@kernel.org/
> > > > > > > >
> > > > > > > > I used _irqsave/_irqrestore just in case. Given it's an urgent fix,
> > > > > > > > overly careful code is probably fine ;-)
> > > > > > > >
> > > > > > > > Thanks for the testing and feedback.
> > > > > > >
> > > > > > > OK, I will try that one, thank you!
> > > > > > >
> > > > > > > FYI, with my change on your earlier version, SRCU-T got deadlocks between
> > > > > > > the pi-lock and the workqueue pool lock.  Which might or might not be
> > > > > > > particularly urgent.
> > > > > > >
> > > > > >
> > > > > > I just checked my run yesterday, I also hit it. It's probably what
> > > > > > Zqiang has found:
> > > > > >
> > > > > > https://lore.kernel.org/rcu/4c23c66f86a2aff8f2d7b759f9dd257b82147a17@linux.dev/
> > > > > >
> > > > > > We have a queue_work_on() in srcu_schedule_cbs_sdp(), so
> > > > > >
> > > > > > srcu_torture_deferred_free():
> > > > > >   raw_spin_lock_irqsave(->pi_lock, ...);
> > > > > >   call_srcu():
> > > > > >     if (snp == snp_leaf && snp_seq != s) {
> > > > > >       srcu_schedule_cbs_sdp(sdp, do_norm ? SRCU_INTERVAL : 0):
> > > > > >         if (!delay)
> > > > > >           queue_work_on(...)
> > > > > >
> > > > > > I was about to reply to Zqiang; fixing that could be a tough design
> > > > > > decision, since it's a per-srcu_data work ;-) NR_CPUS x irq_work
> > > > > > incoming.
> > > > >
> > > > > Just to be clear, SRCU-T is Tiny SRCU rather than Tree SRCU.  So perhaps
> > > > > lower priority, though perhaps not lower irritation.  ;-)
> > > >
> > > > I see, there is a schedule_work() in srcutiny's
> > > > srcu_gp_start_if_needed(), but it couldn't cause a deadlock on UP since
> > > > locks are (almost) no-ops. Maybe we can make rcutorture only test it on
> > > > SMP?
> > >
> > > Like this, you mean?  I will give it a shot tomorrow.
> >
> > Yes, thanks!
>
> OK, the previous patch did fine on short rcutorture testing aside from
> the !SMP lockdep splat, so I have started the test without pi_lock.
>
> Longer term, shouldn't lockdep take into account the fact that on !SMP,
> the disabling of preemption (or interrupts or ...) is essentially the same
> as acquiring a global lock?  This means that only one task at a time can
> be acquiring a raw spinlock on !SMP, so that the order of acquisition
> of raw spinlocks on !SMP is irrelevant?  (Aside from self-deadlocking
> double acquisitions, perhaps.)  In other words, shouldn't lockdep leave
> raw spinlocks out of lockdep's cycle-detection data structure?
>

Lockdep doesn't know whether a code path is UP-only, so it applies the
general locking rules when checking, similar to how lockdep still detects
PREEMPT_RT locking issues on a !PREEMPT_RT kernel. Maybe we could add a
separate Kconfig option that narrows lockdep's detection to UP-only rules
when UP=y.

Regards,
Boqun

> 							Thanx, Paul