From: Joel Fernandes <joelagnelf@nvidia.com>
To: Vishal Chourasia <vishalc@linux.ibm.com>
Cc: peterz@infradead.org, aboorvad@linux.ibm.com,
boqun.feng@gmail.com, frederic@kernel.org, josh@joshtriplett.org,
linux-kernel@vger.kernel.org, neeraj.upadhyay@kernel.org,
paulmck@kernel.org, rcu@vger.kernel.org, rostedt@goodmis.org,
srikar@linux.ibm.com, sshegde@linux.ibm.com, tglx@linutronix.de,
urezki@gmail.com, samir@linux.ibm.com
Subject: Re: [PATCH v3 2/2] cpuhp: Expedite RCU grace periods during SMT operations
Date: Thu, 26 Feb 2026 20:13:52 -0500
Message-ID: <20260227011352.GA1089964@joelbox2>
In-Reply-To: <20260218083915.660252-6-vishalc@linux.ibm.com>
On Wed, Feb 18, 2026 at 02:09:18PM +0530, Vishal Chourasia wrote:
> Expedite synchronize_rcu() during SMT mode switch operations initiated
> via the /sys/devices/system/cpu/smt/control interface.
>
> SMT mode switches, e.g. from SMT 8 to SMT 1 or vice versa, are
> user-driven operations and should therefore complete as soon as
> possible. Switching SMT states involves iterating over a list of CPUs
> and performing hotplug operations on each. These transitions were found
> to take a significantly long time to complete, particularly on
> high-core-count systems.
>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Vishal Chourasia <vishalc@linux.ibm.com>
> ---
> include/linux/rcupdate.h | 8 ++++++++
> kernel/cpu.c | 4 ++++
> kernel/rcu/rcu.h | 4 ----
> 3 files changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 7729fef249e1..61b80c29d53b 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -1190,6 +1190,14 @@ rcu_head_after_call_rcu(struct rcu_head *rhp, rcu_callback_t f)
> extern int rcu_expedited;
> extern int rcu_normal;
>
> +#ifdef CONFIG_TINY_RCU
> +static inline void rcu_expedite_gp(void) { }
> +static inline void rcu_unexpedite_gp(void) { }
> +#else
> +void rcu_expedite_gp(void);
> +void rcu_unexpedite_gp(void);
> +#endif
> +
> DEFINE_LOCK_GUARD_0(rcu, rcu_read_lock(), rcu_read_unlock())
> DECLARE_LOCK_GUARD_0_ATTRS(rcu, __acquires_shared(RCU), __releases_shared(RCU))
>
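A side note for anyone following along: in the !TINY_RCU case these two calls maintain a nesting count, so expedite/unexpedite pairs from concurrent users compose, as long as every rcu_expedite_gp() is balanced by an rcu_unexpedite_gp() on all return paths. A rough user-space model of that semantic (names mirror the kernel API, but this is only a sketch, not kernel code):

```c
/*
 * Sketch of the counted "expedited" state: each rcu_expedite_gp() must be
 * paired with an rcu_unexpedite_gp(), and grace periods stay expedited
 * while any user still holds the state.
 */
#include <stdatomic.h>

static atomic_int rcu_expedited_nesting;

static void rcu_expedite_gp(void)
{
	atomic_fetch_add(&rcu_expedited_nesting, 1);
}

static void rcu_unexpedite_gp(void)
{
	atomic_fetch_sub(&rcu_expedited_nesting, 1);
}

/* True while at least one user has requested expediting. */
static int rcu_gp_is_expedited(void)
{
	return atomic_load(&rcu_expedited_nesting) != 0;
}
```

So two overlapping SMT switch operations would keep grace periods expedited until the last one calls rcu_unexpedite_gp().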
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 62e209eda78c..1377a68d6f47 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -2682,6 +2682,7 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
> ret = -EBUSY;
> goto out;
> }
> + rcu_expedite_gp();
After the locking-related changes in patch 1, is expediting still required? I
am a bit concerned that we are papering over the real issue of overuse of
synchronize_rcu() (which, IIRC, we discussed in earlier versions of this
series, where reducing the number of lock acquire/release cycles was supposed
to help).
Could you provide more justification for why expediting these sections is
required if the locking concerns were addressed? It would be great if you
could provide performance numbers with only the first patch applied, without
the second. That way we can quantify this patch's benefit.
thanks,
--
Joel Fernandes
> /* Hold cpus_write_lock() for entire batch operation. */
> cpus_write_lock();
> for_each_online_cpu(cpu) {
> @@ -2714,6 +2715,7 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
> if (!ret)
> cpu_smt_control = ctrlval;
> cpus_write_unlock();
> + rcu_unexpedite_gp();
> arch_smt_update();
> out:
> cpu_maps_update_done();
> @@ -2733,6 +2735,7 @@ int cpuhp_smt_enable(void)
> int cpu, ret = 0;
>
> cpu_maps_update_begin();
> + rcu_expedite_gp();
> /* Hold cpus_write_lock() for entire batch operation. */
> cpus_write_lock();
> cpu_smt_control = CPU_SMT_ENABLED;
> @@ -2749,6 +2752,7 @@ int cpuhp_smt_enable(void)
> cpuhp_online_cpu_device(cpu);
> }
> cpus_write_unlock();
> + rcu_unexpedite_gp();
> arch_smt_update();
> cpu_maps_update_done();
> return ret;
> diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
> index dc5d614b372c..41a0d262e964 100644
> --- a/kernel/rcu/rcu.h
> +++ b/kernel/rcu/rcu.h
> @@ -512,8 +512,6 @@ do { \
> static inline bool rcu_gp_is_normal(void) { return true; }
> static inline bool rcu_gp_is_expedited(void) { return false; }
> static inline bool rcu_async_should_hurry(void) { return false; }
> -static inline void rcu_expedite_gp(void) { }
> -static inline void rcu_unexpedite_gp(void) { }
> static inline void rcu_async_hurry(void) { }
> static inline void rcu_async_relax(void) { }
> static inline bool rcu_cpu_online(int cpu) { return true; }
> @@ -521,8 +519,6 @@ static inline bool rcu_cpu_online(int cpu) { return true; }
> bool rcu_gp_is_normal(void); /* Internal RCU use. */
> bool rcu_gp_is_expedited(void); /* Internal RCU use. */
> bool rcu_async_should_hurry(void); /* Internal RCU use. */
> -void rcu_expedite_gp(void);
> -void rcu_unexpedite_gp(void);
> void rcu_async_hurry(void);
> void rcu_async_relax(void);
> void rcupdate_announce_bootup_oddness(void);
> --
> 2.53.0
>
Thread overview: 11+ messages
2026-02-18 8:39 [PATCH v3 0/2] cpuhp: Improve SMT switch time via lock batching and RCU expedition Vishal Chourasia
2026-02-18 8:39 ` [PATCH v3 1/2] cpuhp: Optimize SMT switch operation by batching lock acquisition Vishal Chourasia
2026-03-25 19:09 ` Thomas Gleixner
2026-03-26 10:06 ` Vishal Chourasia
2026-02-18 8:39 ` [PATCH v3 2/2] cpuhp: Expedite RCU grace periods during SMT operations Vishal Chourasia
2026-02-27 1:13 ` Joel Fernandes [this message]
2026-03-02 11:47 ` Samir M
2026-03-06 5:44 ` Vishal Chourasia
2026-03-06 15:12 ` Paul E. McKenney
2026-03-20 18:49 ` Vishal Chourasia
2026-03-25 19:10 ` Thomas Gleixner