* [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-19 23:12 [PATCH -next v3 0/3] rcu/nocb: Cleanup patches for next merge window Joel Fernandes
@ 2026-01-19 23:12 ` Joel Fernandes
2026-01-19 23:53 ` Frederic Weisbecker
` (2 more replies)
0 siblings, 3 replies; 16+ messages in thread
From: Joel Fernandes @ 2026-01-19 23:12 UTC (permalink / raw)
To: linux-kernel
Cc: Paul E . McKenney, Boqun Feng, rcu, Frederic Weisbecker,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang, Joel Fernandes
During callback overload (exceeding qhimark), the NOCB code attempts
opportunistic advancement via rcu_advance_cbs_nowake(). Analysis shows
this entire code path is dead:
- 30 overload conditions triggered with 300,000 callback flood
- 0 advancements actually occurred
- 100% of time blocked because current GP not done
The overload condition triggers when callbacks are coming in at a high
rate with GPs not completing as fast. But the advancement requires the
GP to be complete - a logical contradiction. Even if the GP did complete
in time, nocb_gp_wait() has to wake up anyway to do the advancement, so
it is pointless.
Since the advancement is dead code, the entire overload handling block
serves no purpose. Remove it entirely.
Suggested-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
kernel/rcu/tree_nocb.h | 12 ------------
1 file changed, 12 deletions(-)
diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index f525e4f7985b..64a8ff350f92 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -526,8 +526,6 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
__releases(rdp->nocb_lock)
{
long bypass_len;
- unsigned long cur_gp_seq;
- unsigned long j;
long lazy_len;
long len;
struct task_struct *t;
@@ -562,16 +560,6 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
}
return;
- } else if (len > rdp->qlen_last_fqs_check + qhimark) {
- /* ... or if many callbacks queued. */
- rdp->qlen_last_fqs_check = len;
- j = jiffies;
- if (j != rdp->nocb_gp_adv_time &&
- rcu_segcblist_nextgp(&rdp->cblist, &cur_gp_seq) &&
- rcu_seq_done(&rdp->mynode->gp_seq, cur_gp_seq)) {
- rcu_advance_cbs_nowake(rdp->mynode, rdp);
- rdp->nocb_gp_adv_time = j;
- }
}
rcu_nocb_unlock(rdp);
--
2.34.1
^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-19 23:12 ` [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling Joel Fernandes
@ 2026-01-19 23:53 ` Frederic Weisbecker
2026-01-20 0:07 ` Paul E. McKenney
2026-01-22 21:55 ` Paul E. McKenney
2026-01-23 5:41 ` Paul E. McKenney
2 siblings, 1 reply; 16+ messages in thread
From: Frederic Weisbecker @ 2026-01-19 23:53 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Paul E . McKenney, Boqun Feng, rcu, Neeraj Upadhyay,
Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On Mon, Jan 19, 2026 at 06:12:22PM -0500, Joel Fernandes wrote:
> During callback overload (exceeding qhimark), the NOCB code attempts
> opportunistic advancement via rcu_advance_cbs_nowake(). Analysis shows
> this entire code path is dead:
>
> - 30 overload conditions triggered with 300,000 callback flood
> - 0 advancements actually occurred
> - 100% of time blocked because current GP not done
>
> The overload condition triggers when callbacks are coming in at a high
> rate with GPs not completing as fast. But the advancement requires the
> GP to be complete - a logical contradiction. Even if the GP did complete
> in time, nocb_gp_wait() has to wake up anyway to do the advancement, so
> it is pointless.
>
> Since the advancement is dead code, the entire overload handling block
> serves no purpose. Remove it entirely.
>
> Suggested-by: Frederic Weisbecker <frederic@kernel.org>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Would be nice to have Paul's ack as well, in case we missed something subtle
here.
Also, probably for the upcoming merge window + 1: note that similar code with
a similar removal opportunity resides in rcu_nocb_try_bypass().
And ->nocb_gp_adv_time could then be removed.
Thanks.
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-19 23:53 ` Frederic Weisbecker
@ 2026-01-20 0:07 ` Paul E. McKenney
2026-01-20 0:59 ` joelagnelf
0 siblings, 1 reply; 16+ messages in thread
From: Paul E. McKenney @ 2026-01-20 0:07 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: Joel Fernandes, linux-kernel, Boqun Feng, rcu, Neeraj Upadhyay,
Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On Tue, Jan 20, 2026 at 12:53:26AM +0100, Frederic Weisbecker wrote:
> On Mon, Jan 19, 2026 at 06:12:22PM -0500, Joel Fernandes wrote:
> > During callback overload (exceeding qhimark), the NOCB code attempts
> > opportunistic advancement via rcu_advance_cbs_nowake(). Analysis shows
> > this entire code path is dead:
> >
> > - 30 overload conditions triggered with 300,000 callback flood
> > - 0 advancements actually occurred
> > - 100% of time blocked because current GP not done
> >
> > The overload condition triggers when callbacks are coming in at a high
> > rate with GPs not completing as fast. But the advancement requires the
> > GP to be complete - a logical contradiction. Even if the GP did complete
> > in time, nocb_gp_wait() has to wake up anyway to do the advancement, so
> > it is pointless.
> >
> > Since the advancement is dead code, the entire overload handling block
> > serves no purpose. Remove it entirely.
> >
> > Suggested-by: Frederic Weisbecker <frederic@kernel.org>
> > Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
>
> Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
>
> Would be nice to have Paul's ack as well, in case we missed something subtle
> here.
Given that you are good with it, I will take a look. And test it. ;-)
> Also probably for upcoming merge window + 1, note that similar code with
> similar removal opportunity resides in rcu_nocb_try_bypass().
> And ->nocb_gp_adv_time could then be removed.
Further simplification sounds like a good thing! Just not too simple,
you understand! ;-)
Thanx, Paul
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-20 0:07 ` Paul E. McKenney
@ 2026-01-20 0:59 ` joelagnelf
0 siblings, 0 replies; 16+ messages in thread
From: joelagnelf @ 2026-01-20 0:59 UTC (permalink / raw)
To: paulmck
Cc: Frederic Weisbecker, linux-kernel, Boqun Feng, rcu,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
> On Jan 19, 2026, at 7:07 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Tue, Jan 20, 2026 at 12:53:26AM +0100, Frederic Weisbecker wrote:
>> On Mon, Jan 19, 2026 at 06:12:22PM -0500, Joel Fernandes wrote:
>>> During callback overload (exceeding qhimark), the NOCB code attempts
>>> opportunistic advancement via rcu_advance_cbs_nowake(). Analysis shows
>>> this entire code path is dead:
>>>
>>> - 30 overload conditions triggered with 300,000 callback flood
>>> - 0 advancements actually occurred
>>> - 100% of time blocked because current GP not done
>>>
>>> The overload condition triggers when callbacks are coming in at a high
>>> rate with GPs not completing as fast. But the advancement requires the
>>> GP to be complete - a logical contradiction. Even if the GP did complete
>>> in time, nocb_gp_wait() has to wake up anyway to do the advancement, so
>>> it is pointless.
>>>
>>> Since the advancement is dead code, the entire overload handling block
>>> serves no purpose. Remove it entirely.
>>>
>>> Suggested-by: Frederic Weisbecker <frederic@kernel.org>
>>> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
>>
>> Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
>>
>> Would be nice to have Paul's ack as well, in case we missed something subtle
>> here.
>
> Given that you are good with it, I will take a look. And test it. ;-)
Sure, thanks!
>> Also probably for upcoming merge window + 1, note that similar code with
>> similar removal opportunity resides in rcu_nocb_try_bypass().
>> And ->nocb_gp_adv_time could then be removed.
>
> Further simplification sounds like a good thing! Just not too simple,
> you understand! ;-)
Yes I have some more queued in my local tree that I plan for merge window + 1. :-)
By the way, I have another recent idea: why don't we trigger nocb poll mode
automatically under overload conditions? Currently rcu_nocb_poll is only set via
the boot parameter and stays constant. Testing shows me that poll mode can make
GPs complete faster during overload, so dynamically enabling it when we exceed
qhimark could be beneficial. The question then is how to turn it off
dynamically as well - perhaps when the callback count drops below qlowmark,
with some debounce logic to avoid toggling too frequently?
> Thanx, Paul
thanks,
- Joel
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-19 23:12 ` [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling Joel Fernandes
2026-01-19 23:53 ` Frederic Weisbecker
@ 2026-01-22 21:55 ` Paul E. McKenney
2026-01-22 23:43 ` Joel Fernandes
2026-01-23 5:41 ` Paul E. McKenney
2 siblings, 1 reply; 16+ messages in thread
From: Paul E. McKenney @ 2026-01-22 21:55 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Boqun Feng, rcu, Frederic Weisbecker,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On Mon, Jan 19, 2026 at 06:12:22PM -0500, Joel Fernandes wrote:
> During callback overload (exceeding qhimark), the NOCB code attempts
> opportunistic advancement via rcu_advance_cbs_nowake(). Analysis shows
> this entire code path is dead:
>
> - 30 overload conditions triggered with 300,000 callback flood
> - 0 advancements actually occurred
> - 100% of time blocked because current GP not done
>
> The overload condition triggers when callbacks are coming in at a high
> rate with GPs not completing as fast. But the advancement requires the
> GP to be complete - a logical contradiction. Even if the GP did complete
> in time, nocb_gp_wait() has to wake up anyway to do the advancement, so
> it is pointless.
>
> Since the advancement is dead code, the entire overload handling block
> serves no purpose. Remove it entirely.
>
> Suggested-by: Frederic Weisbecker <frederic@kernel.org>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
> ---
> kernel/rcu/tree_nocb.h | 12 ------------
> 1 file changed, 12 deletions(-)
>
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index f525e4f7985b..64a8ff350f92 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -526,8 +526,6 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
> __releases(rdp->nocb_lock)
> {
> long bypass_len;
> - unsigned long cur_gp_seq;
> - unsigned long j;
> long lazy_len;
> long len;
> struct task_struct *t;
> @@ -562,16 +560,6 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
> }
>
> return;
> - } else if (len > rdp->qlen_last_fqs_check + qhimark) {
> - /* ... or if many callbacks queued. */
> - rdp->qlen_last_fqs_check = len;
> - j = jiffies;
> - if (j != rdp->nocb_gp_adv_time &&
> - rcu_segcblist_nextgp(&rdp->cblist, &cur_gp_seq) &&
This places in cur_gp_seq not the grace period for the current callback
(which would be unlikely to have finished), but rather the grace period
for the oldest callback that has not yet been marked as done. And that
callback started some time ago, and thus might well have finished.
So while this code might not have been executed in your tests, it is
definitely not a logical contradiction.
Or am I missing something subtle here?
Thanx, Paul
> - rcu_seq_done(&rdp->mynode->gp_seq, cur_gp_seq)) {
> - rcu_advance_cbs_nowake(rdp->mynode, rdp);
> - rdp->nocb_gp_adv_time = j;
> - }
> }
>
> rcu_nocb_unlock(rdp);
> --
> 2.34.1
>
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-22 21:55 ` Paul E. McKenney
@ 2026-01-22 23:43 ` Joel Fernandes
2026-01-23 0:12 ` Paul E. McKenney
0 siblings, 1 reply; 16+ messages in thread
From: Joel Fernandes @ 2026-01-22 23:43 UTC (permalink / raw)
To: Paul E. McKenney
Cc: linux-kernel, Boqun Feng, rcu, Frederic Weisbecker,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On Thu, Jan 22, 2026 at 01:55:11PM -0800, Paul E. McKenney wrote:
> On Mon, Jan 19, 2026 at 06:12:22PM -0500, Joel Fernandes wrote:
> > - } else if (len > rdp->qlen_last_fqs_check + qhimark) {
> > - /* ... or if many callbacks queued. */
> > - rdp->qlen_last_fqs_check = len;
> > - j = jiffies;
> > - if (j != rdp->nocb_gp_adv_time &&
> > - rcu_segcblist_nextgp(&rdp->cblist, &cur_gp_seq) &&
>
> This places in cur_gp_seq not the grace period for the current callback
> (which would be unlikely to have finished), but rather the grace period
> for the oldest callback that has not yet been marked as done. And that
> callback started some time ago, and thus might well have finished.
>
> So while this code might not have been executed in your tests, it is
> definitely not a logical contradiction.
>
> Or am I missing something subtle here?
You're right that it's not a logical contradiction - I was imprecise.
rcu_segcblist_nextgp() returns the GP for the oldest pending callback,
which could indeed have completed.
However, the question becomes: under what scenario do we need to advance
here? If that GP completed, rcuog should have already advanced those
callbacks. The only way this code path can execute is if rcuog is starved
and not running to advance them, right?
But as Frederic pointed out, even if rcuog is starved, advancing here
doesn't help - rcuog must still run anyway to wake the callback thread.
We're just duplicating work it will do when it finally gets to run.
The extensive testing (300K callback floods, hours of rcutorture) showing
zero hits confirms this window is practically unreachable. I can update the
commit message to remove the "logical contradiction" claim and focus on the
redundancy argument instead.
Would that address your concern?
--
Joel Fernandes
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-22 23:43 ` Joel Fernandes
@ 2026-01-23 0:12 ` Paul E. McKenney
0 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2026-01-23 0:12 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Boqun Feng, rcu, Frederic Weisbecker,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On Thu, Jan 22, 2026 at 06:43:31PM -0500, Joel Fernandes wrote:
> On Thu, Jan 22, 2026 at 01:55:11PM -0800, Paul E. McKenney wrote:
> > On Mon, Jan 19, 2026 at 06:12:22PM -0500, Joel Fernandes wrote:
> > > - } else if (len > rdp->qlen_last_fqs_check + qhimark) {
> > > - /* ... or if many callbacks queued. */
> > > - rdp->qlen_last_fqs_check = len;
> > > - j = jiffies;
> > > - if (j != rdp->nocb_gp_adv_time &&
> > > - rcu_segcblist_nextgp(&rdp->cblist, &cur_gp_seq) &&
> >
> > This places in cur_gp_seq not the grace period for the current callback
> > (which would be unlikely to have finished), but rather the grace period
> > for the oldest callback that has not yet been marked as done. And that
> > callback started some time ago, and thus might well have finished.
> >
> > So while this code might not have been executed in your tests, it is
> > definitely not a logical contradiction.
> >
> > Or am I missing something subtle here?
>
> You're right that it's not a logical contradiction - I was imprecise.
> rcu_segcblist_nextgp() returns the GP for the oldest pending callback,
> which could indeed have completed.
>
> However, the question becomes: under what scenario do we need to advance
> here? If that GP completed, rcuog should have already advanced those
> callbacks. The only way this code path can execute is if rcuog is starved
> and not running to advance them, right?
That is one way. The other way is if the RCU grace-period gets delayed
(perhaps by vCPU preemption) between the time that it updates the
leaf rcu_node structure's ->gp_seq field and the time that it invokes
rcu_nocb_gp_cleanup().
> But as Frederic pointed out, even if rcuog is starved, advancing here
> doesn't help - rcuog must still run anyway to wake the callback thread.
> We're just duplicating work it will do when it finally gets to run.
So maybe we don't want that first patch after all? ;-)
> The extensive testing (300K callback floods, hours of rcutorture) showing
> zero hits confirms this window is practically unreachable. I can update the
> commit message to remove the "logical contradiction" claim and focus on the
> redundancy argument instead.
That would definitely be good!
> Would that address your concern?
Your point about the rcuoc kthread needing to be awakened is a good one.
I am still concerned about flooding on busy systems, especially if the
busy component is an underlying hypervisor, but we might need a more
principled approach for that situation.
Thanx, Paul
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
[not found] <EBEF016B-721C-4A54-98E3-4B8BE6AA4C21@nvidia.com>
@ 2026-01-23 1:29 ` Joel Fernandes
2026-01-23 5:46 ` Paul E. McKenney
0 siblings, 1 reply; 16+ messages in thread
From: Joel Fernandes @ 2026-01-23 1:29 UTC (permalink / raw)
To: paulmck
Cc: linux-kernel, Boqun Feng, rcu, Frederic Weisbecker,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On Jan 22, 2026, at 7:18 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> On Thu, Jan 22, 2026 at 06:43:31PM -0500, Joel Fernandes wrote:
>> On Thu, Jan 22, 2026 at 01:55:11PM -0800, Paul E. McKenney wrote:
>>> On Mon, Jan 19, 2026 at 06:12:22PM -0500, Joel Fernandes wrote:
>>>> - } else if (len > rdp->qlen_last_fqs_check + qhimark) {
>>>> - /* ... or if many callbacks queued. */
>>>> - rdp->qlen_last_fqs_check = len;
>>>> - j = jiffies;
>>>> - if (j != rdp->nocb_gp_adv_time &&
>>>> - rcu_segcblist_nextgp(&rdp->cblist, &cur_gp_seq) &&
>>> This places in cur_gp_seq not the grace period for the current callback
>>> (which would be unlikely to have finished), but rather the grace period
>>> for the oldest callback that has not yet been marked as done. And that
>>> callback started some time ago, and thus might well have finished.
>>> So while this code might not have been executed in your tests, it is
>>> definitely not a logical contradiction.
>>> Or am I missing something subtle here?
>>
>> You're right that it's not a logical contradiction - I was imprecise.
>> rcu_segcblist_nextgp() returns the GP for the oldest pending callback,
>> which could indeed have completed.
>>
>> However, the question becomes: under what scenario do we need to advance
>> here? If that GP completed, rcuog should have already advanced those
>> callbacks. The only way this code path can execute is if rcuog is starved
>> and not running to advance them, right?
>
> That is one way. The other way is if the RCU grace-period gets delayed
> (perhaps by vCPU preemption) between the time that it updates the
> leaf rcu_node structure's ->gp_seq field and the time that it invokes
> rcu_nocb_gp_cleanup().
I see the window you're describing. In rcu_gp_cleanup(), for each leaf node:
WRITE_ONCE(rnp->gp_seq, new_gp_seq); // GP appears complete
...
raw_spin_unlock_irq_rcu_node(rnp);
/* vCPU preemption */
rcu_nocb_gp_cleanup(sq); // wakes rcuog
So yes, in this window, the call_rcu() CPU could see the updated gp_seq
and have rcu_seq_done() return true for the now-completed GP.
However, even in this window, advancing callbacks doesn't help:
1. We advance callbacks from WAIT to DONE state
2. But rcuog is still sleeping, waiting for GP kthread to wake it
3. rcuoc is still sleeping, waiting for rcuog to wake it
4. Callbacks sit in DONE state but nobody invokes them
So the critical path is unchanged:
swake_up_all() -> rcuog -> rcuoc -> invoke.
I guess this is the redundancy argument - the window exists, but
exploiting it provides no meaningful benefit AFAICS.
>
>> But as Frederic pointed out, even if rcuog is starved, advancing here
>> doesn't help - rcuog must still run anyway to wake the callback thread.
>> We're just duplicating work it will do when it finally gets to run.
>
> So maybe we don't want that first patch after all? ;-)
Do you mean we want the first patch so that it can remove the code that
we don't want?
>
>> The extensive testing (300K callback floods, hours of rcutorture) showing
>> zero hits confirms this window is practically unreachable. I can update the
>> commit message to remove the "logical contradiction" claim and focus on the
>> redundancy argument instead.
>
> That would definitely be good!
Thanks. I will focus on this argument, then. I will resend with a better
patch description in the morning.
>
>> Would that address your concern?
>
> Your point about the rcuoc kthread needing to be awakened is a good one.
> I am still concerned about flooding on busy systems, especially if the
> busy component is an underlying hypervisor, but we might need a more
> principled approach for that situation.
Hmm true. There is also the case where any of the kthreads on the
callback's critical path could be preempted by the hypervisor, which
would be just as problematic - to your point about requiring a more
principled approach. I guess we did not want the reader-side vCPU
preemption workarounds either, for a similar reason.
One trick I found, irrespective of virtualization, is that rcu_nocb_poll
can result in grace periods completing faster. I think this could help
overload situations by retiring callbacks sooner rather than later. I can
experiment with this idea in the future. I was considering a dynamic
trigger to enable polling mode under overload. I guess there is one way
to find out how well this will work, but initial testing does look
promising. :-D.
--
Joel Fernandes
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-19 23:12 ` [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling Joel Fernandes
2026-01-19 23:53 ` Frederic Weisbecker
2026-01-22 21:55 ` Paul E. McKenney
@ 2026-01-23 5:41 ` Paul E. McKenney
2 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2026-01-23 5:41 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Boqun Feng, rcu, Frederic Weisbecker,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On Mon, Jan 19, 2026 at 06:12:22PM -0500, Joel Fernandes wrote:
> During callback overload (exceeding qhimark), the NOCB code attempts
> opportunistic advancement via rcu_advance_cbs_nowake(). Analysis shows
> this entire code path is dead:
>
> - 30 overload conditions triggered with 300,000 callback flood
> - 0 advancements actually occurred
> - 100% of time blocked because current GP not done
>
> The overload condition triggers when callbacks are coming in at a high
> rate with GPs not completing as fast. But the advancement requires the
> GP to be complete - a logical contradiction. Even if the GP did complete
> in time, nocb_gp_wait() has to wake up anyway to do the advancement, so
> it is pointless.
>
> Since the advancement is dead code, the entire overload handling block
> serves no purpose. Remove it entirely.
>
> Suggested-by: Frederic Weisbecker <frederic@kernel.org>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
> ---
> kernel/rcu/tree_nocb.h | 12 ------------
> 1 file changed, 12 deletions(-)
>
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index f525e4f7985b..64a8ff350f92 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -526,8 +526,6 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
> __releases(rdp->nocb_lock)
> {
> long bypass_len;
> - unsigned long cur_gp_seq;
> - unsigned long j;
> long lazy_len;
> long len;
> struct task_struct *t;
> @@ -562,16 +560,6 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
> }
>
> return;
> - } else if (len > rdp->qlen_last_fqs_check + qhimark) {
> - /* ... or if many callbacks queued. */
> - rdp->qlen_last_fqs_check = len;
> - j = jiffies;
> - if (j != rdp->nocb_gp_adv_time &&
> - rcu_segcblist_nextgp(&rdp->cblist, &cur_gp_seq) &&
> - rcu_seq_done(&rdp->mynode->gp_seq, cur_gp_seq)) {
> - rcu_advance_cbs_nowake(rdp->mynode, rdp);
> - rdp->nocb_gp_adv_time = j;
> - }
> }
>
> rcu_nocb_unlock(rdp);
> --
> 2.34.1
>
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-23 1:29 ` [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling Joel Fernandes
@ 2026-01-23 5:46 ` Paul E. McKenney
2026-01-23 15:30 ` Joel Fernandes
0 siblings, 1 reply; 16+ messages in thread
From: Paul E. McKenney @ 2026-01-23 5:46 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Boqun Feng, rcu, Frederic Weisbecker,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On Thu, Jan 22, 2026 at 08:29:41PM -0500, Joel Fernandes wrote:
> On Jan 22, 2026, at 7:18 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> > On Thu, Jan 22, 2026 at 06:43:31PM -0500, Joel Fernandes wrote:
> > > On Thu, Jan 22, 2026 at 01:55:11PM -0800, Paul E. McKenney wrote:
> > > > On Mon, Jan 19, 2026 at 06:12:22PM -0500, Joel Fernandes wrote:
> > > > > - } else if (len > rdp->qlen_last_fqs_check + qhimark) {
> > > > > - /* ... or if many callbacks queued. */
> > > > > - rdp->qlen_last_fqs_check = len;
> > > > > - j = jiffies;
> > > > > - if (j != rdp->nocb_gp_adv_time &&
> > > > > - rcu_segcblist_nextgp(&rdp->cblist, &cur_gp_seq) &&
> > > > This places in cur_gp_seq not the grace period for the current callback
> > > > (which would be unlikely to have finished), but rather the grace period
> > > > for the oldest callback that has not yet been marked as done. And that
> > > > callback started some time ago, and thus might well have finished.
> > > > So while this code might not have been executed in your tests, it is
> > > > definitely not a logical contradiction.
> > > > Or am I missing something subtle here?
> > >
> > > You're right that it's not a logical contradiction - I was imprecise.
> > > rcu_segcblist_nextgp() returns the GP for the oldest pending callback,
> > > which could indeed have completed.
> > >
> > > However, the question becomes: under what scenario do we need to advance
> > > here? If that GP completed, rcuog should have already advanced those
> > > callbacks. The only way this code path can execute is if rcuog is starved
> > > and not running to advance them, right?
> >
> > That is one way. The other way is if the RCU grace-period gets delayed
> > (perhaps by vCPU preemption) between the time that it updates the
> > leaf rcu_node structure's ->gp_seq field and the time that it invokes
> > rcu_nocb_gp_cleanup().
>
> I see the window you're describing. In rcu_gp_cleanup(), for each leaf node:
>
> WRITE_ONCE(rnp->gp_seq, new_gp_seq); // GP appears complete
> ...
> raw_spin_unlock_irq_rcu_node(rnp);
>
> /* vCPU preemption */
> rcu_nocb_gp_cleanup(sq); // wakes rcuog
>
> So yes, in this window, the call_rcu() CPU could see the updated gp_seq and
> have rcu_seq_done() return true for the now-completed GP.
>
> However, even in this window, advancing callbacks doesn't help:
>
> 1. We advance callbacks from WAIT to DONE state
> 2. But rcuog is still sleeping, waiting for GP kthread to wake it
> 3. rcuoc is still sleeping, waiting for rcuog to wake it
> 4. Callbacks sit in DONE state but nobody invokes them
>
> So the critical path is unchanged:
> swake_up_all() -> rcuog -> rcuoc -> invoke.
>
> I guess this is the redundancy argument - the window exists, but
> exploiting it provides no meaningful benefit AFAICS.
I gave you a Reviewed-by for this one and reaffirm my Reviewed-by for
the other two. But you break it, you buy it! ;-)
> > > But as Frederic pointed out, even if rcuog is starved, advancing here
> > > doesn't help - rcuog must still run anyway to wake the callback thread.
> > > We're just duplicating work it will do when it finally gets to run.
> >
> > So maybe we don't want that first patch after all? ;-)
>
> Do you mean we want the first patch so that it can remove the code that we
> don't want?
It would wake up the rcuog kthread if the RCU grace-period kthread was
slow to do so. But I agree with simplifying it and working out how to
make it more robust as a separate effort.
> > > The extensive testing (300K callback floods, hours of rcutorture) showing
> > > zero hits confirms this window is practically unreachable. I can update the
> > > commit message to remove the "logical contradiction" claim and focus on the
> > > redundancy argument instead.
> >
> > That would definitely be good!
>
> Thanks. I will focus on this argument, then. I will resend with a better
> patch description in the morning.
And my Reviewed-by does assume that change, so go ahead and send the
improved commit log with my Reviewed-by appended.
> > > Would that address your concern?
> >
> > Your point about the rcuoc kthread needing to be awakened is a good one.
> > I am still concerned about flooding on busy systems, especially if the
> > busy component is an underlying hypervisor, but we might need a more
> > principled approach for that situation.
>
> Hmm true. There is also the case that any of the kthreads in the way of the
> callback getting preempted by the hypervisor could also be problematic, to
> your point of requiring a more principled approach. I guess we did not want
> the reader side vCPU preemption workarounds either for similar reason.
Well, principles only get you so far. We need both the principles and the
pragmatism to know when to depart from those principles when warranted.
> One trick I found irrespective of virtualization, is, rcu_nocb_poll can
> result in grace periods completing faster. I think this could help overload
> situations by retiring callbacks sooner than later. I can experiment with
> this idea in future. Was considering a dynamic trigger to enable polling
> mode in overload. I guess there is one way to find out how well this will
> work, but initial testing does look promising. :-D.
Careful of the effect on power consumption, especially for the world of
battery-powered embedded systems! ;-)
Thanx, Paul
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-23 5:46 ` Paul E. McKenney
@ 2026-01-23 15:30 ` Joel Fernandes
2026-01-23 16:49 ` Paul E. McKenney
0 siblings, 1 reply; 16+ messages in thread
From: Joel Fernandes @ 2026-01-23 15:30 UTC (permalink / raw)
To: Paul E. McKenney
Cc: linux-kernel, Boqun Feng, rcu, Frederic Weisbecker,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On Thu, Jan 22, 2026 at 09:46:58PM -0800, Paul E. McKenney wrote:
> > Thanks. I will focus on this argument, then. I will resend with a better
> > patch description in the morning.
>
> And my Reviewed-by does assume that change, so go ahead and send the
> improved commit log with my Reviewed-by appended.
Sure, will do.
> > Hmm true. There is also the case that any of the kthreads in the way of the
> > callback getting preempted by the hypervisor could also be problematic, to
> > your point of requiring a more principled approach. I guess we did not want
> > the reader side vCPU preemption workarounds either for similar reason.
>
> Well, principles only get you so far. We need both the principles and the
> pragmatism to know when to depart from those principles when warranted.
Agreed. Indeed we have to balance the cost of workarounds, and in the case
of per-CPU blocked lists, I agree that perhaps the balance tipped in
favor of not doing it, pending other more comprehensive fixes.
> > One trick I found irrespective of virtualization, is, rcu_nocb_poll can
> > result in grace periods completing faster. I think this could help overload
> > situations by retiring callbacks sooner than later. I can experiment with
> > this idea in future. Was considering a dynamic trigger to enable polling
> > mode in overload. I guess there is one way to find out how well this will
> > work, but initial testing does look promising. :-D.
>
> Careful of the effect on power consumption, especially for the world of
> battery-powered embedded systems! ;-)
Thanks, yes I was considering this argument already to be honest as one of
the potential pitfalls, but thanks for the reminder! FWIW, my inclination
is that if we are in an overloaded situation, we would not benefit from
idleness anyway. To the contrary, I think we may hurt idleness and power
if we are not able to settle the system into a quiet state due to slowness
in alleviating the callback overload. I will profile for CPU consumption
and maybe run turbostat to check whenever I have the prototype.
--
Joel Fernandes
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-23 15:30 ` Joel Fernandes
@ 2026-01-23 16:49 ` Paul E. McKenney
2026-01-23 19:36 ` Joel Fernandes
0 siblings, 1 reply; 16+ messages in thread
From: Paul E. McKenney @ 2026-01-23 16:49 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Boqun Feng, rcu, Frederic Weisbecker,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On Fri, Jan 23, 2026 at 10:30:00AM -0500, Joel Fernandes wrote:
> On Thu, Jan 22, 2026 at 09:46:58PM -0800, Paul E. McKenney wrote:
> > > Thanks. I will focus on this argument, then. I will resend with a better
> > > patch description in the morning.
> >
> > And my Reviewed-by does assume that change, so go ahead and send the
> > improved commit log with my Reviewed-by appended.
>
> Sure, will do.
>
> > > Hmm true. There is also the case that any of the kthreads in the way of the
> > > callback getting preempted by the hypervisor could also be problematic, to
> > > your point of requiring a more principled approach. I guess we did not want
> > > the reader side vCPU preemption workarounds either for similar reason.
> >
> > Well, principles only get you so far. We need both the principles and the
> > pragmatism to know when to depart from those principles when warranted.
>
> Agreed. Indeed we have to balance the cost of workarounds and in the case
> of per cpu blocked lists, I agree that perhaps the balance tipped more in
> favor of not doing it pending other more comprehensive fixes.
I would feel better about that balance if we actually had some of these
more comprehensive fixes in mind. ;-)
> > > One trick I found irrespective of virtualization, is, rcu_nocb_poll can
> > > result in grace periods completing faster. I think this could help overload
> > > situations by retiring callbacks sooner than later. I can experiment with
> > > this idea in future. Was considering a dynamic trigger to enable polling
> > > mode in overload. I guess there is one way to find out how well this will
> > > work, but initial testing does look promising. :-D.
> >
> > Careful of the effect on power consumption, especially for the world of
> > battery-powered embedded systems! ;-)
>
> Thanks, yes I was considering this argument already to be honest as one of
> the potential pitfalls, but thanks for the reminder! FWIW, my inclination
> is that if we are in an overloaded situation, we would not benefit from
> idleness anyway. To the contrary, I think we may hurt idleness and power
> if we are not able to settle the system into a quiet state due to slowness
> in alleviating the callback overload. I will profile for CPU consumption
> and maybe run turbostat to check whenever I have the prototype.
We could have one CPU flooding and the rest idle, and many other
combinations. And, if I recall correctly, polling can burn extra CPU
and cause extra wakeups even when the system is fully idle. Or has
that changed?
Thanx, Paul
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-23 16:49 ` Paul E. McKenney
@ 2026-01-23 19:36 ` Joel Fernandes
2026-01-23 21:27 ` Paul E. McKenney
0 siblings, 1 reply; 16+ messages in thread
From: Joel Fernandes @ 2026-01-23 19:36 UTC (permalink / raw)
To: paulmck
Cc: linux-kernel, Boqun Feng, rcu, Frederic Weisbecker,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On 1/23/2026 11:49 AM, Paul E. McKenney wrote:
> We could have one CPU flooding and the rest idle, and many other
> combinations. And, if I recall correctly, polling can burn extra CPU
> and cause extra wakeups even when the system is fully idle. Or has
> that changed?
In my experience working on lazy RCU, if you have such a kind of overload on
any CPU, then you're usually not saving any power anyway. The system has to
be really quiet and idle with a low stream of callbacks for you to save
power. Further, when the callback length increases too much, we don't turn
on lazy RCU anyway because the idea is that we are overloaded and the
system is busy - so we already have such assumptions baked in. I think a
similar argument could apply here for dynamically enabling polling mode only
when overloaded.
I was coming more from the point of view of improving grace period performance
when we do have an overload, potentially resolving the overloaded situation
faster than usual. We would dynamically trigger polling based on such
circumstances.
That said, I confess I don't have extensive experience with polling mode beyond
testing. I believe we should add more rcutorture test cases for this. I'm
considering adding a new config that enables polling for NOCB - this testing is
what revealed the potential for grace period performance improvement with NOCB
to me.
--
Joel Fernandes
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-23 19:36 ` Joel Fernandes
@ 2026-01-23 21:27 ` Paul E. McKenney
2026-01-24 1:11 ` Joel Fernandes
2026-01-25 14:46 ` Joel Fernandes
0 siblings, 2 replies; 16+ messages in thread
From: Paul E. McKenney @ 2026-01-23 21:27 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Boqun Feng, rcu, Frederic Weisbecker,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On Fri, Jan 23, 2026 at 02:36:37PM -0500, Joel Fernandes wrote:
> On 1/23/2026 11:49 AM, Paul E. McKenney wrote:
> > We could have one CPU flooding and the rest idle, and many other
> > combinations. And, if I recall correctly, polling can burn extra CPU
> > and cause extra wakeups even when the system is fully idle. Or has
> > that changed?
>
> In my experience working on lazy RCU, if you have such a kind of overload on
> any CPU, then you're usually not saving any power anyway. The system has to
> be really quiet and idle with a low stream of callbacks for you to save
> power. Further, when the callback length increases too much, we don't turn
> on lazy RCU anyway because the idea is that we are overloaded and the
> system is busy - so we already have such assumptions baked in. I think a
> similar argument could apply here for dynamically enabling polling mode only
> when overloaded.
The concern is detecting overload quickly. Any unnecessary gaps in
invoking RCU callbacks cannot be made up. That time is gone.
And the polling does sleeps...
> I was coming more from the point of view of improving grace period performance
> when we do have an overload, potentially resolving the overloaded situation
> faster than usual. We would dynamically trigger polling based on such
> circumstances.
>
> That said, I confess I don't have extensive experience with polling mode beyond
> testing. I believe we should add more rcutorture test cases for this. I'm
> considering adding a new config that enables polling for NOCB - this testing is
> what revealed the potential for grace period performance improvement with NOCB
> to me.
The main purpose of polling was to make call_rcu() avoid at least some
of its slowpaths. If we are getting some other benefit out of it, is
polling the best way to achieve that benefit?
Thanx, Paul
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-23 21:27 ` Paul E. McKenney
@ 2026-01-24 1:11 ` Joel Fernandes
2026-01-25 14:46 ` Joel Fernandes
1 sibling, 0 replies; 16+ messages in thread
From: Joel Fernandes @ 2026-01-24 1:11 UTC (permalink / raw)
To: paulmck
Cc: linux-kernel, Boqun Feng, rcu, Frederic Weisbecker,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On 1/23/2026 4:27 PM, Paul E. McKenney wrote:
> On Fri, Jan 23, 2026 at 02:36:37PM -0500, Joel Fernandes wrote:
>> On 1/23/2026 11:49 AM, Paul E. McKenney wrote:
>>> We could have one CPU flooding and the rest idle, and many other
>>> combinations. And, if I recall correctly, polling can burn extra CPU
>>> and cause extra wakeups even when the system is fully idle. Or has
>>> that changed?
>>
>> In my experience working on lazy RCU, if you have such a kind of overload on
>> any CPU, then you're usually not saving any power anyway. The system has to
>> be really quiet and idle with a low stream of callbacks for you to save
>> power. Further, when the callback length increases too much, we don't turn
>> on lazy RCU anyway because the idea is that we are overloaded and the
>> system is busy - so we already have such assumptions baked in. I think a
>> similar argument could apply here for dynamically enabling polling mode only
>> when overloaded.
>
> The concern is detecting overload quickly. Any unnecessary gaps in
> invoking RCU callbacks cannot be made up. That time is gone.
> And the polling does sleeps...
Right, the time is gone, but perhaps the recent past is an indication
that the gears of the machinery need to move faster, possibly to improve
things. :-D. Obviously, it's also totally possible that entering polling
mode doesn't benefit anything if RCU readers are taking forever to exit
their critical sections and so forth.
>
>> I was coming more from the point of view of improving grace period performance
>> when we do have an overload, potentially resolving the overloaded situation
>> faster than usual. We would dynamically trigger polling based on such
>> circumstances.
>>
>> That said, I confess I don't have extensive experience with polling mode beyond
>> testing. I believe we should add more rcutorture test cases for this. I'm
>> considering adding a new config that enables polling for NOCB - this testing is
>> what revealed the potential for grace period performance improvement with NOCB
>> to me.
>
> The main purpose of polling was to make call_rcu() avoid at least some
> of its slowpaths. If we are getting some other benefit out of it, is
> polling the best way to achieve that benefit?
Thanks for the clarification. I will first study what the behavior
actually is. The main benefit I see is that grace periods progress more
quickly in polling mode. My suspicion is that this is because wakeups
driven by timer interrupts happen faster than wakeups in which one
thread wakes another. I am just speculating, and I will study it more
before being able to say anything meaningful here. ;-).
But thanks for the discussion!
--
Joel Fernandes
* Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling
2026-01-23 21:27 ` Paul E. McKenney
2026-01-24 1:11 ` Joel Fernandes
@ 2026-01-25 14:46 ` Joel Fernandes
1 sibling, 0 replies; 16+ messages in thread
From: Joel Fernandes @ 2026-01-25 14:46 UTC (permalink / raw)
To: Paul E. McKenney
Cc: linux-kernel, Boqun Feng, rcu, Frederic Weisbecker,
Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang
On Fri, Jan 23, 2026 at 01:27:46PM -0800, Paul E. McKenney wrote:
> On Fri, Jan 23, 2026 at 02:36:37PM -0500, Joel Fernandes wrote:
> > I was coming more from the point of view of improving grace period performance
> > when we do have an overload, potentially resolving the overloaded situation
> > faster than usual. We would dynamically trigger polling based on such
> > circumstances.
> >
> > That said, I confess I don't have extensive experience with polling mode beyond
> > testing. I believe we should add more rcutorture test cases for this. I'm
> > considering adding a new config that enables polling for NOCB - this testing is
> > what revealed the potential for grace period performance improvement with NOCB
> > to me.
>
> The main purpose of polling was to make call_rcu() avoid at least some
> of its slowpaths. If we are getting some other benefit out of it, is
> polling the best way to achieve that benefit?
I only started looking into this, but there is the rcu_state.cbovld flag
which already does similar "extra work at the expense of more CPU" when
callback overload is detected. Specifically, when cbovld is set (triggered
when any CPU exceeds qovld_calc callbacks, default 20,000), the following
aggressive measures kick in:
1. FQS intervals are shortened, making force-quiescent-state
scans happen more frequently.
2. Heavy quiescent state requests are triggered earlier.
3. Priority boosting kicks in immediately rather than waiting.
These are already along the same lines as what I was suggesting for polling:
do extra work at the expense of more CPU cycles to reduce the overload
situation faster. So perhaps the question is whether dynamically enabling
poll mode during cbovld would provide additional benefit on top of these.
As you said, the idea was to avoid the call_rcu() slow paths. But perhaps
it can assist during cbovld as well?
I will study this more :)
Thanks,
--
Joel Fernandes
Thread overview: 16+ messages:
[not found] <EBEF016B-721C-4A54-98E3-4B8BE6AA4C21@nvidia.com>
2026-01-23 1:29 ` [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling Joel Fernandes
2026-01-23 5:46 ` Paul E. McKenney
2026-01-23 15:30 ` Joel Fernandes
2026-01-23 16:49 ` Paul E. McKenney
2026-01-23 19:36 ` Joel Fernandes
2026-01-23 21:27 ` Paul E. McKenney
2026-01-24 1:11 ` Joel Fernandes
2026-01-25 14:46 ` Joel Fernandes
2026-01-19 23:12 [PATCH -next v3 0/3] rcu/nocb: Cleanup patches for next merge window Joel Fernandes
2026-01-19 23:12 ` [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling Joel Fernandes
2026-01-19 23:53 ` Frederic Weisbecker
2026-01-20 0:07 ` Paul E. McKenney
2026-01-20 0:59 ` joelagnelf
2026-01-22 21:55 ` Paul E. McKenney
2026-01-22 23:43 ` Joel Fernandes
2026-01-23 0:12 ` Paul E. McKenney
2026-01-23 5:41 ` Paul E. McKenney