public inbox for rcu@vger.kernel.org
From: Uladzislau Rezki <urezki@gmail.com>
To: Joel Fernandes <joelagnelf@nvidia.com>
Cc: Uladzislau Rezki <urezki@gmail.com>,
	Shrikanth Hegde <sshegde@linux.ibm.com>,
	Vishal Chourasia <vishalc@linux.ibm.com>,
	"rcu@vger.kernel.org" <rcu@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"paulmck@kernel.org" <paulmck@kernel.org>,
	"frederic@kernel.org" <frederic@kernel.org>,
	"neeraj.upadhyay@kernel.org" <neeraj.upadhyay@kernel.org>,
	"josh@joshtriplett.org" <josh@joshtriplett.org>,
	"boqun.feng@gmail.com" <boqun.feng@gmail.com>,
	"rostedt@goodmis.org" <rostedt@goodmis.org>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"peterz@infradead.org" <peterz@infradead.org>,
	"srikar@linux.ibm.com" <srikar@linux.ibm.com>
Subject: Re: [PATCH] cpuhp: Expedite synchronize_rcu during CPU hotplug operations
Date: Tue, 13 Jan 2026 18:58:44 +0100
Message-ID: <aWaH1AdJeSVVB5UZ@milan>
In-Reply-To: <6d05f9ea-fc4f-4115-a416-8e779f17e0fb@nvidia.com>

On Tue, Jan 13, 2026 at 09:32:13AM -0500, Joel Fernandes wrote:
> 
> 
> On 1/13/2026 9:17 AM, Uladzislau Rezki wrote:
> > On Tue, Jan 13, 2026 at 12:44:10PM +0000, Joel Fernandes wrote:
> >>
> >>
> >>> On Jan 13, 2026, at 7:19 AM, Uladzislau Rezki <urezki@gmail.com> wrote:
> >>>
> >>> On Mon, Jan 12, 2026 at 05:36:24PM +0000, Joel Fernandes wrote:
> >>>>
> >>>>
> >>>>>> On Jan 12, 2026, at 12:09 PM, Uladzislau Rezki <urezki@gmail.com> wrote:
> >>>>>
> >>>>> On Mon, Jan 12, 2026 at 04:09:49PM +0000, Joel Fernandes wrote:
> >>>>>>
> >>>>>>
> >>>>>>>> On Jan 12, 2026, at 7:57 AM, Uladzislau Rezki <urezki@gmail.com> wrote:
> >>>>>>>
> >>>>>>> Hello, Shrikanth!
> >>>>>>>
> >>>>>>>>
> >>>>>>>>> On 1/12/26 3:38 PM, Uladzislau Rezki wrote:
> >>>>>>>>> On Mon, Jan 12, 2026 at 03:13:33PM +0530, Vishal Chourasia wrote:
> >>>>>>>>>> Bulk CPU hotplug operations—such as switching SMT modes across all
> >>>>>>>>>> cores—require hotplugging multiple CPUs in rapid succession. On large
> >>>>>>>>>> systems, this process takes significant time, increasing as the number
> >>>>>>>>>> of CPUs grows, leading to substantial delays on high-core-count
> >>>>>>>>>> machines. Analysis [1] reveals that the majority of this time is spent
> >>>>>>>>>> waiting for synchronize_rcu().
> >>>>>>>>>>
> >>>>>>>>>> Expedite synchronize_rcu() during the hotplug path to accelerate the
> >>>>>>>>>> operation. Since CPU hotplug is a user-initiated administrative task,
> >>>>>>>>>> it should complete as quickly as possible.
> >>>>>>>>>>
> >>>>>>>>>> Performance data on a PPC64 system with 400 CPUs:
> >>>>>>>>>>
> >>>>>>>>>> + ppc64_cpu --smt=1 (SMT8 to SMT1)
> >>>>>>>>>> Before: real 1m14.792s
> >>>>>>>>>> After:  real 0m03.205s  # ~23x improvement
> >>>>>>>>>>
> >>>>>>>>>> + ppc64_cpu --smt=8 (SMT1 to SMT8)
> >>>>>>>>>> Before: real 2m27.695s
> >>>>>>>>>> After:  real 0m02.510s  # ~58x improvement
> >>>>>>>>>>
> >>>>>>>>>> The above numbers were collected on Linux 6.19.0-rc4-00310-g755bc1335e3b
> >>>>>>>>>>
> >>>>>>>>>> [1] https://lore.kernel.org/all/5f2ab8a44d685701fe36cdaa8042a1aef215d10d.camel@linux.vnet.ibm.com
> >>>>>>>>>>
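For reference, here is one way such a change could look inside the kernel.
This is a minimal sketch, assuming the hotplug calls are bracketed with the
existing rcu_expedite_gp()/rcu_unexpedite_gp() counters from
<linux/rcupdate.h>; the wrapper name and the exact placement are
illustrative, not taken from the actual patch:

    #include <linux/cpu.h>          /* add_cpu() */
    #include <linux/rcupdate.h>     /* rcu_expedite_gp(), rcu_unexpedite_gp() */

    /* Hypothetical wrapper: bring one CPU online with expedited RCU. */
    static int expedited_cpu_online(unsigned int cpu)
    {
            int ret;

            rcu_expedite_gp();      /* synchronize_rcu() now takes the expedited path */
            ret = add_cpu(cpu);
            rcu_unexpedite_gp();    /* restore normal grace-period behavior */

            return ret;
    }

Because rcu_expedite_gp() and rcu_unexpedite_gp() maintain a nesting count,
bracketing each per-CPU operation this way stays correct even when bulk
operations overlap.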
> >>>>>>>>> You can also try: echo 1 > /sys/module/rcutree/parameters/rcu_normal_wake_from_gp
> >>>>>>>>> to speed up regular synchronize_rcu() calls. But I am not saying that it would beat
> >>>>>>>>> your "expedited switch" improvement.
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>> Hi Uladzislau.
> >>>>>>>>
> >>>>>>>> We had a discussion on this at LPC; having an in-kernel solution is likely
> >>>>>>>> better than having it in userspace.
> >>>>>>>>
> >>>>>>>> - Having it in the kernel would make it work across all archs. Why should
> >>>>>>>> any user have to wait when they initiate a hotplug operation?
> >>>>>>>>
> >>>>>>>> - Userspace tools are spread out, such as chcpu, ppc64_cpu, etc., though
> >>>>>>>> internally most write "0/1 > /sys/devices/system/cpu/cpuN/online".
> >>>>>>>> We would have to repeat the same change in each tool.
> >>>>>>>>
> >>>>>>>> - There is already /sys/kernel/rcu_expedited, which is the better choice
> >>>>>>>> if we need to fall back to userspace at all.
> >>>>>>>>
> >>>>>>> Sounds good to me. I agree it is better to bypass parameters.
> >>>>>>
> >>>>>> Another way to keep it in-kernel would be to enable the RCU normal wake-from-GP optimization by default for systems with more than 16 CPUs.
> >>>>>>
> >>>>>> I was considering this, but I did not bring it up because, until now, I did not know that there are large systems that might benefit from it.
> >>>>>>
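For context, the current small-system default is decided at boot from the
CPU count. A paraphrased sketch of that shape (the macro and helper names
here are illustrative, not the exact tree.c code):

    #include <linux/cpumask.h>      /* num_possible_cpus() */

    #define SMALL_CPU_THRESHOLD     16      /* illustrative name for the cutoff */

    static int rcu_normal_wake_from_gp = -1;        /* -1: decide at boot */

    /* Hypothetical boot-time default: direct wake-ups on small machines. */
    static void __init sr_normal_set_default(void)
    {
            if (rcu_normal_wake_from_gp < 0)
                    rcu_normal_wake_from_gp =
                            num_possible_cpus() <= SMALL_CPU_THRESHOLD;
    }

Enabling the optimization for larger systems would then mean raising this
cutoff, or replacing the boot-time check with a runtime one.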
> >>>>> IMO, we can increase that threshold; 512/1024 is not a problem at all.
> >>>>> But as Paul mentioned, we should consider scalability enhancements. On
> >>>>> the other hand, it is probably also worth waiting until we actually see
> >>>>> such scalability problems :)
> >>>>
> >>>> Instead of pegging it to the number of CPUs, perhaps the optimization should be dynamic? That is, default to the sr_normal wake-up optimization unless the synchronize_rcu() load is high. Of course, all corner cases would need careful consideration, adequate testing and all that ;-)
> >>>>
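A minimal sketch of what such a dynamic gate could look like, assuming a
counter of in-flight synchronize_rcu() waiters; the names and the limit
are hypothetical, not taken from any posted patch:

    #include <linux/atomic.h>

    /* Hypothetical load tracking for the sr_normal wake-up optimization. */
    static atomic_t sr_normal_inflight = ATOMIC_INIT(0);

    #define SR_NORMAL_LOAD_LIMIT    1024    /* illustrative cutoff */

    /* Callers would atomic_inc()/atomic_dec() sr_normal_inflight around
     * their wait; the gate reads it to pick a wake-up strategy. */
    static bool sr_normal_use_direct_wake(void)
    {
            /* Under heavy load, fall back to the per-CPU callback path. */
            return atomic_read(&sr_normal_inflight) < SR_NORMAL_LOAD_LIMIT;
    }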
> >>> Honestly, I do not see use cases where we cannot keep up with processing
> >>> all callbacks in time, keeping in mind that it is a blocking-context call.
> >>>
> >>> How many of them would have to be in flight (blocked contexts) to make it starve... :)
> >>> According to my last evaluation, it was ~64K.
> >>>
> >>> Note, I am not saying that it should not be scaled.
> >>
> >> But you did not test that on a large system with 1000s of CPUs, right?
> >>
> > No, no. I do not have access to such systems.
> > 
> >>
> >> So the options I see are: either default to always using the optimization,
> >> not just for fewer than 17 CPUs (what you are saying above), or do what I
> >> said above (safer and less risky for systems with 1000s of CPUs).
> >>
> > You mean introduce a threshold and count how many nodes are in the queue?
> 
> Yes.
> 
> > To me that sounds suboptimal and looks like a temporary solution.
> 
> Not more suboptimal than the existing hard-coded 16-CPU solution, I suppose.
> 
It came from trial testing :) Agreed, we should do something about it.

> > 
> > Long term, it is better to split it, I mean, to make it scale.
> 
> But the scalable solution is already there: the !synchronize_rcu_normal path,
> right? And splitting the list won't help this use case anyway.
> 
Fair point.

> > 
> > Do you know who can test it on a ~1000-CPU system, so we have some figures?
> 
> I don't have such systems either; the most I can go is ~200+ CPUs. Perhaps the
> folks on this thread have such systems, as they mentioned 1900+ CPU systems.
> They should be happy to test.
> 
> > 
> What I have is a 256-CPU system I can test on.
> Same boat. ;-)
> 
:)

--
Uladzislau Rezki

Thread overview: 54+ messages
2026-01-12  9:43 [PATCH] cpuhp: Expedite synchronize_rcu during CPU hotplug operations Vishal Chourasia
2026-01-12 10:08 ` Uladzislau Rezki
2026-01-12 10:43   ` Vishal Chourasia
2026-01-12 11:07     ` Uladzislau Rezki
2026-01-12 12:02   ` Shrikanth Hegde
2026-01-12 12:57     ` Uladzislau Rezki
2026-01-12 16:09       ` Joel Fernandes
2026-01-12 16:48         ` Paul E. McKenney
2026-01-12 17:05           ` Uladzislau Rezki
2026-01-12 18:27             ` Vishal Chourasia
2026-01-13  0:03               ` Paul E. McKenney
2026-01-12 22:24           ` Joel Fernandes
2026-01-13  0:01             ` Paul E. McKenney
2026-01-13  2:46               ` Joel Fernandes
2026-01-13  4:53                 ` Shrikanth Hegde
2026-01-13  8:57                   ` Joel Fernandes
2026-01-14  4:00                     ` Paul E. McKenney
2026-01-14  8:54                       ` Joel Fernandes
2026-01-16 19:02                         ` Paul E. McKenney
2026-01-14  3:59                 ` Paul E. McKenney
2026-01-12 17:09         ` Uladzislau Rezki
2026-01-12 17:36           ` Joel Fernandes
2026-01-13 12:18             ` Uladzislau Rezki
2026-01-13 12:44               ` Joel Fernandes
2026-01-13 14:17                 ` Uladzislau Rezki
2026-01-13 14:32                   ` Joel Fernandes
2026-01-13 14:53                     ` Shrikanth Hegde
2026-01-13 18:17                       ` Uladzislau Rezki
2026-01-13 17:58                     ` Uladzislau Rezki [this message]
2026-01-12 12:21 ` Shrikanth Hegde
2026-01-12 12:46   ` Vishal Chourasia
2026-01-12 14:03 ` Joel Fernandes
2026-01-12 14:20   ` Joel Fernandes
2026-01-12 14:23     ` Peter Zijlstra
2026-01-12 14:37       ` Joel Fernandes
2026-01-12 17:52         ` Vishal Chourasia
2026-01-12 14:24 ` Peter Zijlstra
2026-01-12 18:00   ` Vishal Chourasia
2026-01-13  9:01     ` Peter Zijlstra
2026-01-19 10:47       ` [PATCH] cpuhp: Expedite synchronize_rcu during SMT switch Vishal Chourasia
2026-01-19 11:43         ` Peter Zijlstra
2026-01-19 13:45           ` Shrikanth Hegde
2026-01-19 14:11             ` Peter Zijlstra
2026-01-19 14:45               ` Joel Fernandes
2026-01-19 14:59                 ` Peter Zijlstra
2026-01-27 17:48           ` Samir M
2026-01-29  7:05             ` Samir M
2026-02-03  6:31             ` Samir M
2026-01-19 10:54       ` [RESEND] " Vishal Chourasia
2026-01-18 11:38 ` [PATCH] cpuhp: Expedite synchronize_rcu during CPU hotplug operations Samir M
2026-01-19  5:18   ` Joel Fernandes
2026-01-19 13:53     ` Shrikanth Hegde
2026-01-19 21:10       ` joelagnelf
2026-02-02  8:46     ` Vishal Chourasia
