public inbox for rcu@vger.kernel.org
* [PATCH] srcu: Optimize SRCU-fast per-CPU counter increments on arm64
@ 2026-03-26 10:26 Puranjay Mohan
  2026-03-26 10:58 ` Will Deacon
  0 siblings, 1 reply; 3+ messages in thread
From: Puranjay Mohan @ 2026-03-26 10:26 UTC (permalink / raw)
  To: Lai Jiangshan, Will Deacon, Mark Rutland, Catalin Marinas,
	Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, rcu, linux-arm-kernel, linux-kernel
  Cc: Puranjay Mohan

On architectures like arm64, this_cpu_inc() wraps the underlying atomic
instruction (ldadd) with preempt_disable/enable to prevent migration
between the per-CPU address calculation and the atomic operation.
However, SRCU does not need this protection because it sums counters
across all CPUs for grace-period detection, so operating on a "stale"
CPU's counter after migration is harmless.

This commit therefore introduces srcu_percpu_counter_inc(), which
consolidates the SRCU-fast reader counter updates into a single helper,
replacing the if/else dispatch between this_cpu_inc() and
atomic_long_inc(raw_cpu_ptr(...)) that was previously open-coded at
each call site.

On arm64, this helper uses atomic_long_fetch_add_relaxed(), which
compiles to the value-returning ldadd instruction.  This is preferred
over atomic_long_inc()'s non-value-returning stadd because ldadd is
resolved in L1 cache whereas stadd may be resolved further out in the
memory hierarchy [1].

On x86, where this_cpu_inc() compiles to a single "incl %gs:offset"
instruction with no preempt wrappers, the helper falls through to
this_cpu_inc(), so there is no change.  Architectures with
NEED_SRCU_NMI_SAFE continue to use atomic_long_inc(raw_cpu_ptr(...)),
and all remaining architectures continue to use the this_cpu_inc()
path, so neither sees any change.

refscale measurements on a 72-CPU arm64 Neoverse-V2 system show ~11%
improvement in SRCU-fast reader duration:

  Unpatched: median 9.273 ns, avg 9.319 ns (min 9.219, max 9.853)
    Patched: median 8.275 ns, avg 8.411 ns (min 8.186, max 9.183)

  Command: kvm.sh --torture refscale --duration 1 --cpus 72 \
           --configs NOPREEMPT --trust-make --bootargs \
           "refscale.scale_type=srcu-fast refscale.nreaders=72 \
           refscale.nruns=100"

[1] https://lore.kernel.org/r/e7d539ed-ced0-4b96-8ecd-048a5b803b85@paulmck-laptop

Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
---
 include/linux/srcutree.h | 51 +++++++++++++++++++++++++++-------------
 1 file changed, 35 insertions(+), 16 deletions(-)

diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index fd1a9270cb9a..4ff18de3edfd 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -286,15 +286,43 @@ static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ss
  * on architectures that support NMIs but do not supply NMI-safe
  * implementations of this_cpu_inc().
  */
+
+/*
+ * Atomically increment a per-CPU SRCU counter.
+ *
+ * On most architectures, this_cpu_inc() is optimal (e.g., on x86 it is
+ * a single "incl %gs:offset" instruction).  However, on architectures
+ * like arm64, s390, and loongarch, this_cpu_inc() wraps the underlying
+ * atomic instruction with preempt_disable/enable to prevent migration
+ * between the per-CPU address calculation and the atomic operation.
+ * SRCU does not need this protection because it sums counters across
+ * all CPUs for grace-period detection, so operating on a "stale" CPU's
+ * counter after migration is harmless.
+ *
+ * On arm64, use atomic_long_fetch_add_relaxed() which compiles to the
+ * value-returning ldadd instruction instead of atomic_long_inc()'s
+ * non-value-returning stadd, because ldadd is resolved in L1 cache
+ * whereas stadd may be resolved further out in the memory hierarchy.
+ * https://lore.kernel.org/r/e7d539ed-ced0-4b96-8ecd-048a5b803b85@paulmck-laptop
+ */
+static __always_inline void
+srcu_percpu_counter_inc(atomic_long_t __percpu *v)
+{
+#ifdef CONFIG_ARM64
+	(void)atomic_long_fetch_add_relaxed(1, raw_cpu_ptr(v));
+#elif IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE)
+	atomic_long_inc(raw_cpu_ptr(v));
+#else
+	this_cpu_inc(v->counter);
+#endif
+}
+
 static inline struct srcu_ctr __percpu notrace *__srcu_read_lock_fast(struct srcu_struct *ssp)
 	__acquires_shared(ssp)
 {
 	struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
 
-	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-		this_cpu_inc(scp->srcu_locks.counter); // Y, and implicit RCU reader.
-	else
-		atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks));  // Y, and implicit RCU reader.
+	srcu_percpu_counter_inc(&scp->srcu_locks); // Y, and implicit RCU reader.
 	barrier(); /* Avoid leaking the critical section. */
 	__acquire_shared(ssp);
 	return scp;
@@ -315,10 +343,7 @@ __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
 {
 	__release_shared(ssp);
 	barrier();  /* Avoid leaking the critical section. */
-	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-		this_cpu_inc(scp->srcu_unlocks.counter);  // Z, and implicit RCU reader.
-	else
-		atomic_long_inc(raw_cpu_ptr(&scp->srcu_unlocks));  // Z, and implicit RCU reader.
+	srcu_percpu_counter_inc(&scp->srcu_unlocks);  // Z, and implicit RCU reader.
 }
 
 /*
@@ -335,10 +360,7 @@ struct srcu_ctr __percpu notrace *__srcu_read_lock_fast_updown(struct srcu_struc
 {
 	struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
 
-	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-		this_cpu_inc(scp->srcu_locks.counter); // Y, and implicit RCU reader.
-	else
-		atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks));  // Y, and implicit RCU reader.
+	srcu_percpu_counter_inc(&scp->srcu_locks); // Y, and implicit RCU reader.
 	barrier(); /* Avoid leaking the critical section. */
 	__acquire_shared(ssp);
 	return scp;
@@ -359,10 +381,7 @@ __srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu
 {
 	__release_shared(ssp);
 	barrier();  /* Avoid leaking the critical section. */
-	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-		this_cpu_inc(scp->srcu_unlocks.counter);  // Z, and implicit RCU reader.
-	else
-		atomic_long_inc(raw_cpu_ptr(&scp->srcu_unlocks));  // Z, and implicit RCU reader.
+	srcu_percpu_counter_inc(&scp->srcu_unlocks);  // Z, and implicit RCU reader.
 }
 
 void __srcu_check_read_flavor(struct srcu_struct *ssp, int read_flavor);

base-commit: 16ad40d1089c5f212d7d87babc2376284f3bf244
-- 
2.52.0



* Re: [PATCH] srcu: Optimize SRCU-fast per-CPU counter increments on arm64
  2026-03-26 10:26 [PATCH] srcu: Optimize SRCU-fast per-CPU counter increments on arm64 Puranjay Mohan
@ 2026-03-26 10:58 ` Will Deacon
  2026-03-26 11:17   ` Puranjay Mohan
  0 siblings, 1 reply; 3+ messages in thread
From: Will Deacon @ 2026-03-26 10:58 UTC (permalink / raw)
  To: Puranjay Mohan
  Cc: Lai Jiangshan, Mark Rutland, Catalin Marinas, Paul E. McKenney,
	Josh Triplett, Steven Rostedt, Mathieu Desnoyers, rcu,
	linux-arm-kernel, linux-kernel

On Thu, Mar 26, 2026 at 03:26:07AM -0700, Puranjay Mohan wrote:
> On architectures like arm64, this_cpu_inc() wraps the underlying atomic
> instruction (ldadd) with preempt_disable/enable to prevent migration
> between the per-CPU address calculation and the atomic operation.
> However, SRCU does not need this protection because it sums counters
> across all CPUs for grace-period detection, so operating on a "stale"
> CPU's counter after migration is harmless.
> 
> This commit therefore introduces srcu_percpu_counter_inc(), which
> consolidates the SRCU-fast reader counter updates into a single helper,
> replacing the if/else dispatch between this_cpu_inc() and
> atomic_long_inc(raw_cpu_ptr(...)) that was previously open-coded at
> each call site.
> 
> On arm64, this helper uses atomic_long_fetch_add_relaxed(), which
> compiles to the value-returning ldadd instruction.  This is preferred
> over atomic_long_inc()'s non-value-returning stadd because ldadd is
> resolved in L1 cache whereas stadd may be resolved further out in the
> memory hierarchy [1].
> 
> On x86, where this_cpu_inc() compiles to a single "incl %gs:offset"
> instruction with no preempt wrappers, the helper falls through to
> this_cpu_inc(), so there is no change.  Architectures with
> NEED_SRCU_NMI_SAFE continue to use atomic_long_inc(raw_cpu_ptr(...)),
> and all remaining architectures continue to use the this_cpu_inc()
> path, so neither sees any change.
> 
> refscale measurements on a 72-CPU arm64 Neoverse-V2 system show ~11%
> improvement in SRCU-fast reader duration:
> 
>   Unpatched: median 9.273 ns, avg 9.319 ns (min 9.219, max 9.853)
>     Patched: median 8.275 ns, avg 8.411 ns (min 8.186, max 9.183)
> 
>   Command: kvm.sh --torture refscale --duration 1 --cpus 72 \
>            --configs NOPREEMPT --trust-make --bootargs \
>            "refscale.scale_type=srcu-fast refscale.nreaders=72 \
>            refscale.nruns=100"
> 
> [1] https://lore.kernel.org/r/e7d539ed-ced0-4b96-8ecd-048a5b803b85@paulmck-laptop
> 
> Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
> ---
>  include/linux/srcutree.h | 51 +++++++++++++++++++++++++++-------------
>  1 file changed, 35 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
> index fd1a9270cb9a..4ff18de3edfd 100644
> --- a/include/linux/srcutree.h
> +++ b/include/linux/srcutree.h
> @@ -286,15 +286,43 @@ static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ss
>   * on architectures that support NMIs but do not supply NMI-safe
>   * implementations of this_cpu_inc().
>   */
> +
> +/*
> + * Atomically increment a per-CPU SRCU counter.
> + *
> + * On most architectures, this_cpu_inc() is optimal (e.g., on x86 it is
> + * a single "incl %gs:offset" instruction).  However, on architectures
> + * like arm64, s390, and loongarch, this_cpu_inc() wraps the underlying
> + * atomic instruction with preempt_disable/enable to prevent migration
> + * between the per-CPU address calculation and the atomic operation.
> + * SRCU does not need this protection because it sums counters across
> + * all CPUs for grace-period detection, so operating on a "stale" CPU's
> + * counter after migration is harmless.
> + *
> + * On arm64, use atomic_long_fetch_add_relaxed() which compiles to the
> + * value-returning ldadd instruction instead of atomic_long_inc()'s
> + * non-value-returning stadd, because ldadd is resolved in L1 cache
> + * whereas stadd may be resolved further out in the memory hierarchy.
> + * https://lore.kernel.org/r/e7d539ed-ced0-4b96-8ecd-048a5b803b85@paulmck-laptop
> + */
> +static __always_inline void
> +srcu_percpu_counter_inc(atomic_long_t __percpu *v)
> +{
> +#ifdef CONFIG_ARM64
> +	(void)atomic_long_fetch_add_relaxed(1, raw_cpu_ptr(v));
> +#elif IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE)
> +	atomic_long_inc(raw_cpu_ptr(v));
> +#else
> +	this_cpu_inc(v->counter);
> +#endif
> +}

No, this is a hack. arm64 shouldn't be treated specially here.

The ldadd issue was already fixed properly in
git.kernel.org/linus/535fdfc5a2285. If you want to improve our preempt
disable/enable code or add helpers that don't require that, then patches
are welcome, but bodging random callers with arch-specific code for a
micro-benchmark is completely the wrong approach.

Will


* Re: [PATCH] srcu: Optimize SRCU-fast per-CPU counter increments on arm64
  2026-03-26 10:58 ` Will Deacon
@ 2026-03-26 11:17   ` Puranjay Mohan
  0 siblings, 0 replies; 3+ messages in thread
From: Puranjay Mohan @ 2026-03-26 11:17 UTC (permalink / raw)
  To: Will Deacon
  Cc: Lai Jiangshan, Mark Rutland, Catalin Marinas, Paul E. McKenney,
	Josh Triplett, Steven Rostedt, Mathieu Desnoyers, rcu,
	linux-arm-kernel, linux-kernel

On Thu, Mar 26, 2026 at 10:58 AM Will Deacon <will@kernel.org> wrote:
>
> On Thu, Mar 26, 2026 at 03:26:07AM -0700, Puranjay Mohan wrote:
> > On architectures like arm64, this_cpu_inc() wraps the underlying atomic
> > instruction (ldadd) with preempt_disable/enable to prevent migration
> > between the per-CPU address calculation and the atomic operation.
> > However, SRCU does not need this protection because it sums counters
> > across all CPUs for grace-period detection, so operating on a "stale"
> > CPU's counter after migration is harmless.
> >
> > This commit therefore introduces srcu_percpu_counter_inc(), which
> > consolidates the SRCU-fast reader counter updates into a single helper,
> > replacing the if/else dispatch between this_cpu_inc() and
> > atomic_long_inc(raw_cpu_ptr(...)) that was previously open-coded at
> > each call site.
> >
> > On arm64, this helper uses atomic_long_fetch_add_relaxed(), which
> > compiles to the value-returning ldadd instruction.  This is preferred
> > over atomic_long_inc()'s non-value-returning stadd because ldadd is
> > resolved in L1 cache whereas stadd may be resolved further out in the
> > memory hierarchy [1].
> >
> > On x86, where this_cpu_inc() compiles to a single "incl %gs:offset"
> > instruction with no preempt wrappers, the helper falls through to
> > this_cpu_inc(), so there is no change.  Architectures with
> > NEED_SRCU_NMI_SAFE continue to use atomic_long_inc(raw_cpu_ptr(...)),
> > and all remaining architectures continue to use the this_cpu_inc()
> > path, so neither sees any change.
> >
> > refscale measurements on a 72-CPU arm64 Neoverse-V2 system show ~11%
> > improvement in SRCU-fast reader duration:
> >
> >   Unpatched: median 9.273 ns, avg 9.319 ns (min 9.219, max 9.853)
> >     Patched: median 8.275 ns, avg 8.411 ns (min 8.186, max 9.183)
> >
> >   Command: kvm.sh --torture refscale --duration 1 --cpus 72 \
> >            --configs NOPREEMPT --trust-make --bootargs \
> >            "refscale.scale_type=srcu-fast refscale.nreaders=72 \
> >            refscale.nruns=100"
> >
> > [1] https://lore.kernel.org/r/e7d539ed-ced0-4b96-8ecd-048a5b803b85@paulmck-laptop
> >
> > Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
> > ---
> >  include/linux/srcutree.h | 51 +++++++++++++++++++++++++++-------------
> >  1 file changed, 35 insertions(+), 16 deletions(-)
> >
> > diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
> > index fd1a9270cb9a..4ff18de3edfd 100644
> > --- a/include/linux/srcutree.h
> > +++ b/include/linux/srcutree.h
> > @@ -286,15 +286,43 @@ static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ss
> >   * on architectures that support NMIs but do not supply NMI-safe
> >   * implementations of this_cpu_inc().
> >   */
> > +
> > +/*
> > + * Atomically increment a per-CPU SRCU counter.
> > + *
> > + * On most architectures, this_cpu_inc() is optimal (e.g., on x86 it is
> > + * a single "incl %gs:offset" instruction).  However, on architectures
> > + * like arm64, s390, and loongarch, this_cpu_inc() wraps the underlying
> > + * atomic instruction with preempt_disable/enable to prevent migration
> > + * between the per-CPU address calculation and the atomic operation.
> > + * SRCU does not need this protection because it sums counters across
> > + * all CPUs for grace-period detection, so operating on a "stale" CPU's
> > + * counter after migration is harmless.
> > + *
> > + * On arm64, use atomic_long_fetch_add_relaxed() which compiles to the
> > + * value-returning ldadd instruction instead of atomic_long_inc()'s
> > + * non-value-returning stadd, because ldadd is resolved in L1 cache
> > + * whereas stadd may be resolved further out in the memory hierarchy.
> > + * https://lore.kernel.org/r/e7d539ed-ced0-4b96-8ecd-048a5b803b85@paulmck-laptop
> > + */
> > +static __always_inline void
> > +srcu_percpu_counter_inc(atomic_long_t __percpu *v)
> > +{
> > +#ifdef CONFIG_ARM64
> > +     (void)atomic_long_fetch_add_relaxed(1, raw_cpu_ptr(v));
> > +#elif IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE)
> > +     atomic_long_inc(raw_cpu_ptr(v));
> > +#else
> > +     this_cpu_inc(v->counter);
> > +#endif
> > +}
>
> No, this is a hack. arm64 shouldn't be treated specially here.
>
> The ldadd issue was already fixed properly in
> git.kernel.org/linus/535fdfc5a2285. If you want to improve our preempt
> disable/enable code or add helpers that don't require that, then patches
> are welcome, but bodging random callers with arch-specific code for a
> micro-benchmark is completely the wrong approach.

Thanks for the feedback.

I basically want to remove the overhead of preempt disable/enable that
comes with this_cpu_*(), because in SRCU (and maybe at other places
too) we don't need that safety. One way would be to define
raw_cpu_add_* helpers in arch/arm64/include/asm/percpu.h but that
wouldn't be good for existing callers of raw_cpu_add() as currently
raw_cpu_add() resolves to raw_cpu_generic_to_op(pcp, val, +=), which
is not atomic. Another way would be to add new helpers that do per-CPU
atomics without preempt enable/disable.
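A hypothetical shape for that second option might be the following (the name and placement are illustrative only; no such helper exists upstream):

```c
/* Hypothetical helper, e.g. in include/linux/percpu.h -- not an existing API. */
static inline void percpu_atomic_long_inc(atomic_long_t __percpu *p)
{
	/*
	 * Deliberately no preempt_disable()/preempt_enable(): callers must
	 * tolerate the increment landing on a stale CPU's counter if the
	 * task migrates after raw_cpu_ptr() computes the address.
	 */
	atomic_long_inc(raw_cpu_ptr(p));
}
```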

Do you think this optimization is worth doing, or should I just not do it?

Thanks,
Puranjay

