From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 3 Nov 2025 12:51:48 +0000
From: Will Deacon
To: "Paul E. McKenney"
Cc: rcu@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@meta.com,
	rostedt@goodmis.org, Catalin Marinas, Mark Rutland, Mathieu Desnoyers,
	Sebastian Andrzej Siewior, linux-arm-kernel@lists.infradead.org,
	bpf@vger.kernel.org
Subject: Re: [PATCH 17/19] srcu: Optimize SRCU-fast-updown for arm64
Message-ID:
References: <082fb8ba-91b8-448e-a472-195eb7b282fd@paulmck-laptop>
 <20251102214436.3905633-17-paulmck@kernel.org>
In-Reply-To: <20251102214436.3905633-17-paulmck@kernel.org>

Hi Paul,

On Sun, Nov 02, 2025 at 01:44:34PM -0800, Paul E. McKenney wrote:
> Some arm64 platforms have slow per-CPU atomic operations, for example,
> the Neoverse V2.  This commit therefore moves SRCU-fast from per-CPU
> atomic operations to interrupt-disabled non-read-modify-write-atomic
> atomic_read()/atomic_set() operations.  This works because
> SRCU-fast-updown is not invoked from read-side primitives, which means
> that, unlike srcu_read_unlock_fast(), it is not invoked from NMI
> handlers.  This means that srcu_read_lock_fast_updown() and
> srcu_read_unlock_fast_updown() can exclude themselves and each other
> by disabling interrupts.
>
> This reduces the overhead of calls to srcu_read_lock_fast_updown() and
> srcu_read_unlock_fast_updown() from about 100ns to about 12ns on an
> ARM Neoverse V2.  Although this is not excellent compared to about 2ns
> on x86, it sure beats 100ns.
>
> This command was used to measure the overhead:
>
> tools/testing/selftests/rcutorture/bin/kvm.sh --torture refscale --allcpus --duration 5 --configs NOPREEMPT --kconfig "CONFIG_NR_CPUS=64 CONFIG_TASKS_TRACE_RCU=y" --bootargs "refscale.loops=100000 refscale.guest_os_delay=5 refscale.nreaders=64 refscale.holdoff=30 torture.disable_onoff_at_boot refscale.scale_type=srcu-fast-updown refscale.verbose_batched=8 torture.verbose_sleep_frequency=8 torture.verbose_sleep_duration=8 refscale.nruns=100" --trust-make
>
> Signed-off-by: Paul E. McKenney
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Mark Rutland
> Cc: Mathieu Desnoyers
> Cc: Steven Rostedt
> Cc: Sebastian Andrzej Siewior
> Cc:
> Cc:
> ---
>  include/linux/srcutree.h | 56 ++++++++++++++++++++++++++++++++++++----
>  1 file changed, 51 insertions(+), 5 deletions(-)

[...]

> @@ -327,12 +355,23 @@ __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
>  static inline
>  struct srcu_ctr __percpu notrace *__srcu_read_lock_fast_updown(struct srcu_struct *ssp)
>  {
> -	struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
> +	struct srcu_ctr __percpu *scp;
>  
> -	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
> +	if (IS_ENABLED(CONFIG_ARM64) && IS_ENABLED(CONFIG_ARM64_USE_LSE_PERCPU_ATOMICS)) {
> +		unsigned long flags;
> +
> +		local_irq_save(flags);
> +		scp = __srcu_read_lock_fast_na(ssp);
> +		local_irq_restore(flags); /* Avoids leaking the critical section. */
> +		return scp;
> +	}

Do we still need to pursue this after Catalin's prefetch suggestion for
the per-cpu atomics?

https://lore.kernel.org/r/aQU7l-qMKJTx4znJ@arm.com

Although disabling/enabling interrupts on your system seems to be
significantly faster than an atomic instruction, I'm worried that it's
all very SoC-specific and on a mobile part (especially with pseudo-NMI),
the relative costs could easily be the other way around.

Will