From: Samir M <samir@linux.ibm.com>
To: Peter Zijlstra <peterz@infradead.org>,
Vishal Chourasia <vishalc@linux.ibm.com>
Cc: boqun.feng@gmail.com, frederic@kernel.org, joelagnelf@nvidia.com,
josh@joshtriplett.org, linux-kernel@vger.kernel.org,
neeraj.upadhyay@kernel.org, paulmck@kernel.org,
rcu@vger.kernel.org, rostedt@goodmis.org, srikar@linux.ibm.com,
sshegde@linux.ibm.com, tglx@linutronix.de, urezki@gmail.com
Subject: Re: [PATCH] cpuhp: Expedite synchronize_rcu during SMT switch
Date: Tue, 3 Feb 2026 12:01:33 +0530 [thread overview]
Message-ID: <367a0168-be38-48ad-b55e-688d8eaaca49@linux.ibm.com> (raw)
In-Reply-To: <068ef765-8999-41c0-8733-1184df2adb3a@linux.ibm.com>
On 27/01/26 11:18 pm, Samir M wrote:
>
> On 19/01/26 5:13 pm, Peter Zijlstra wrote:
>> On Mon, Jan 19, 2026 at 04:17:40PM +0530, Vishal Chourasia wrote:
>>> Expedite synchronize_rcu() during the cpuhp_smt_[enable|disable]
>>> path to
>>> accelerate the operation.
>>>
>>> Bulk CPU hotplug operations—such as switching SMT modes across all
>>> cores—require hotplugging multiple CPUs in rapid succession. On large
>>> systems, this process takes significant time, increasing as the number
>>> of CPUs to hotplug during SMT switch grows, leading to substantial
>>> delays on high-core-count machines. Analysis [1] reveals that the
>>> majority of this time is spent waiting for synchronize_rcu().
>>>
>> You seem to have left out all the useful bits from your changelog again
>> :/
>>
>> Anyway, ISTR Joel posted a patch hoisting a lock; it was icky, but not
>> something we can't live with either.
>>
>> Also, memory got jogged and I think something like the below will remove
>> 2/3 of your rcu woes as well.
>>
>> diff --git a/kernel/cpu.c b/kernel/cpu.c
>> index 8df2d773fe3b..1365c19444b2 100644
>> --- a/kernel/cpu.c
>> +++ b/kernel/cpu.c
>> @@ -2669,6 +2669,7 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
>>  	int cpu, ret = 0;
>>  	cpu_maps_update_begin();
>> +	rcu_sync_enter(&cpu_hotplug_lock.rss);
>>  	for_each_online_cpu(cpu) {
>>  		if (topology_is_primary_thread(cpu))
>>  			continue;
>> @@ -2698,6 +2699,7 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
>>  	}
>>  	if (!ret)
>>  		cpu_smt_control = ctrlval;
>> +	rcu_sync_exit(&cpu_hotplug_lock.rss);
>>  	cpu_maps_update_done();
>>  	return ret;
>>  }
>> @@ -2715,6 +2717,7 @@ int cpuhp_smt_enable(void)
>>  	int cpu, ret = 0;
>>  	cpu_maps_update_begin();
>> +	rcu_sync_enter(&cpu_hotplug_lock.rss);
>>  	cpu_smt_control = CPU_SMT_ENABLED;
>>  	for_each_present_cpu(cpu) {
>>  		/* Skip online CPUs and CPUs on offline nodes */
>> @@ -2728,6 +2731,7 @@ int cpuhp_smt_enable(void)
>>  		/* See comment in cpuhp_smt_disable() */
>>  		cpuhp_online_cpu_device(cpu);
>>  	}
>> +	rcu_sync_exit(&cpu_hotplug_lock.rss);
>>  	cpu_maps_update_done();
>>  	return ret;
>>  }
>
>
> Hi,
>
> I verified this patch using the configuration described below.
> Configuration:
> • Kernel version: 6.19.0-rc6
> • Number of CPUs: 1536
>
> Earlier verification of an older version of this patch was performed
> on a system with *2048 CPUs*. Due to system unavailability, the
> current verification was carried out on a *different system.*
>
>
> Using this setup, I evaluated the patch with both SMT enabled and SMT
> disabled. The patch shows a significant improvement in the SMT=off case
> and a measurable improvement in the SMT=on case.
> The results indicate that with SMT enabled the system time is
> noticeably higher, whereas with SMT disabled no significant increase
> in system time is observed.
>
> SMT=ON -> sys 50m42.805s
> SMT=OFF -> sys 0m0.064s
>
>
> SMT Mode | Without Patch | With Patch | % Improvement |
> ------------------------------------------------------------------
> SMT=off | 20m 32.210s | 5m 30.898s | +73.15% |
> SMT=on | 62m 46.549s | 55m 45.671s | +11.18% |
>
>
> Please add the below tag:
> Tested-by: Samir M <samir@linux.ibm.com>
>
> Regards,
> Samir
>
Hi All,
Apologies for the confusion in the previous report.
In that report, I applied the patch using the b4 am command. As a
result, all patches in the mail thread were applied rather than only
the intended one. The results posted earlier therefore included changes
from multiple patches and cannot be considered valid for evaluating
this patch in isolation.
We have since tested Peter's patch separately, applied in isolation.
Based on this testing, we did not observe any improvement in the SMT
switch timings.
Configuration:
• Kernel version: 6.19.0-rc6
• Number of CPUs: 1536
SMT Mode | Without Patch | With Patch  | % Improvement |
------------------------------------------------------------------
SMT=off  | 20m 32.210s   | 20m 22.441s | +0.79%        |
SMT=on   | 62m 46.549s   | 63m 0.532s  | -0.37%        |
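
[Editor's note: the "% Improvement" column above can be double-checked from the raw sys times. A quick sanity check, not part of the original report; the helper names below are illustrative:]

```python
# Verify the "% Improvement" figures from the raw timings.
# Improvement = (without - with) / without * 100.

def to_seconds(t):
    """Parse a 'XXm YY.YYYs' style timing into seconds."""
    mins, _, secs = t.partition("m")
    return int(mins) * 60 + float(secs.strip().rstrip("s"))

def improvement(without, with_patch):
    base = to_seconds(without)
    return (base - to_seconds(with_patch)) / base * 100

print(round(improvement("20m 32.210s", "20m 22.441s"), 2))  # 0.79
print(round(improvement("62m 46.549s", "63m 0.532s"), 2))   # -0.37
```

Both figures match the table: a marginal gain for SMT=off and a slight regression for SMT=on, i.e. no real improvement from this patch in isolation.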
Regards,
Samir