* [PATCH rcu 0/5] RCU changes for v6.17
@ 2025-07-09 10:41 neeraj.upadhyay
2025-07-09 10:41 ` [PATCH rcu 1/5] rcu: Robustify rcu_is_cpu_rrupt_from_idle() neeraj.upadhyay
` (4 more replies)
0 siblings, 5 replies; 11+ messages in thread
From: neeraj.upadhyay @ 2025-07-09 10:41 UTC (permalink / raw)
To: rcu
Cc: linux-kernel, paulmck, joelagnelf, frederic, boqun.feng, urezki,
rostedt, mathieu.desnoyers, jiangshanlai, qiang.zhang1211,
neeraj.iitr10, neeraj.upadhyay, Neeraj Upadhyay (AMD)
From: "Neeraj Upadhyay (AMD)" <neeraj.upadhyay@kernel.org>
Hello,
This patch series contains the following updates to the RCU code (rebased
on v6.16-rc3):
Frederic Weisbecker (1):
rcu: Robustify rcu_is_cpu_rrupt_from_idle()
Joel Fernandes (1):
rcu: Fix rcu_read_unlock() deadloop due to IRQ work
Paul E. McKenney (1):
rcu: Protect ->defer_qs_iw_pending from data race
Uladzislau Rezki (Sony) (2):
rcu: Enable rcu_normal_wake_from_gp on small systems
Documentation/kernel-parameters: Update rcu_normal_wake_from_gp doc
.../admin-guide/kernel-parameters.txt | 3 +-
kernel/rcu/tree.c | 41 +++++++++++++------
kernel/rcu/tree.h | 11 ++++-
kernel/rcu/tree_plugin.h | 26 ++++++++++--
4 files changed, 62 insertions(+), 19 deletions(-)
--
2.40.1
* [PATCH rcu 1/5] rcu: Robustify rcu_is_cpu_rrupt_from_idle()
2025-07-09 10:41 [PATCH rcu 0/5] RCU changes for v6.17 neeraj.upadhyay
@ 2025-07-09 10:41 ` neeraj.upadhyay
2025-07-09 10:41 ` [PATCH rcu 2/5] rcu: Protect ->defer_qs_iw_pending from data race neeraj.upadhyay
` (3 subsequent siblings)
4 siblings, 0 replies; 11+ messages in thread
From: neeraj.upadhyay @ 2025-07-09 10:41 UTC (permalink / raw)
To: rcu
Cc: linux-kernel, paulmck, joelagnelf, frederic, boqun.feng, urezki,
rostedt, mathieu.desnoyers, jiangshanlai, qiang.zhang1211,
neeraj.iitr10, neeraj.upadhyay, Neeraj Upadhyay (AMD)
From: Frederic Weisbecker <frederic@kernel.org>
RCU relies on the context tracking nesting counter in order to determine
whether it is running in an extended quiescent state.
However the context tracking nesting counter is not completely
synchronized with the actual context tracking state:
* The nesting counter is set to 1 or incremented further _after_ the
actual state is set to RCU watching.
* The nesting counter is set to 0 or decremented further _before_ the
actual state is set to RCU not watching.
Therefore it is safe to assume that if ct_nesting() > 0, RCU is
watching. But if ct_nesting() <= 0, RCU is not watching except for tiny
windows.
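As an illustration, here is a minimal ordering sketch (placeholder names,
not the actual context-tracking code) of why ct_nesting() > 0 reliably
implies that RCU is watching:

#include <linux/compiler.h>	/* WRITE_ONCE() */

/* Illustrative only: a stand-in for the real context-tracking state. */
struct ct_sketch {
	long nesting;
	bool rcu_watching;
};

static void sketch_eqs_exit(struct ct_sketch *ct)
{
	WRITE_ONCE(ct->rcu_watching, true);	/* mark RCU watching first... */
	WRITE_ONCE(ct->nesting, 1);		/* ...then raise the counter */
}

static void sketch_eqs_enter(struct ct_sketch *ct)
{
	WRITE_ONCE(ct->nesting, 0);		/* lower the counter first... */
	WRITE_ONCE(ct->rcu_watching, false);	/* ...then mark RCU not watching */
}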
This hasn't been a problem so far because rcu_is_cpu_rrupt_from_idle()
has only been called from interrupts. However, the code is confusing
and abuses the role of the context tracking nesting counter while more
accurate indicators are available.
Clarify and robustify accordingly.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
Signed-off-by: Neeraj Upadhyay (AMD) <neeraj.upadhyay@kernel.org>
---
kernel/rcu/tree.c | 27 +++++++++++++++++----------
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 14d4499c6fc3..f83bbb408895 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -377,7 +377,7 @@ EXPORT_SYMBOL_GPL(rcu_momentary_eqs);
*/
static int rcu_is_cpu_rrupt_from_idle(void)
{
- long nesting;
+ long nmi_nesting = ct_nmi_nesting();
/*
* Usually called from the tick; but also used from smp_function_call()
@@ -389,21 +389,28 @@ static int rcu_is_cpu_rrupt_from_idle(void)
/* Check for counter underflows */
RCU_LOCKDEP_WARN(ct_nesting() < 0,
"RCU nesting counter underflow!");
- RCU_LOCKDEP_WARN(ct_nmi_nesting() <= 0,
- "RCU nmi_nesting counter underflow/zero!");
- /* Are we at first interrupt nesting level? */
- nesting = ct_nmi_nesting();
- if (nesting > 1)
+ /* Non-idle interrupt or nested idle interrupt */
+ if (nmi_nesting > 1)
return false;
/*
- * If we're not in an interrupt, we must be in the idle task!
+ * Non nested idle interrupt (interrupting section where RCU
+ * wasn't watching).
*/
- WARN_ON_ONCE(!nesting && !is_idle_task(current));
+ if (nmi_nesting == 1)
+ return true;
+
+ /* Not in an interrupt */
+ if (!nmi_nesting) {
+ RCU_LOCKDEP_WARN(!in_task() || !is_idle_task(current),
+ "RCU nmi_nesting counter not in idle task!");
+ return !rcu_is_watching_curr_cpu();
+ }
- /* Does CPU appear to be idle from an RCU standpoint? */
- return ct_nesting() == 0;
+ RCU_LOCKDEP_WARN(1, "RCU nmi_nesting counter underflow/zero!");
+
+ return false;
}
#define DEFAULT_RCU_BLIMIT (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) ? 1000 : 10)
--
2.40.1
* [PATCH rcu 2/5] rcu: Protect ->defer_qs_iw_pending from data race
2025-07-09 10:41 [PATCH rcu 0/5] RCU changes for v6.17 neeraj.upadhyay
2025-07-09 10:41 ` [PATCH rcu 1/5] rcu: Robustify rcu_is_cpu_rrupt_from_idle() neeraj.upadhyay
@ 2025-07-09 10:41 ` neeraj.upadhyay
2025-07-09 11:27 ` Frederic Weisbecker
2025-07-09 10:41 ` [PATCH rcu 3/5] rcu: Enable rcu_normal_wake_from_gp on small systems neeraj.upadhyay
` (2 subsequent siblings)
4 siblings, 1 reply; 11+ messages in thread
From: neeraj.upadhyay @ 2025-07-09 10:41 UTC (permalink / raw)
To: rcu
Cc: linux-kernel, paulmck, joelagnelf, frederic, boqun.feng, urezki,
rostedt, mathieu.desnoyers, jiangshanlai, qiang.zhang1211,
neeraj.iitr10, neeraj.upadhyay, Neeraj Upadhyay (AMD)
From: "Paul E. McKenney" <paulmck@kernel.org>
On kernels built with CONFIG_IRQ_WORK=y, when rcu_read_unlock() is
invoked within an interrupts-disabled region of code [1], it will invoke
rcu_read_unlock_special(), which uses an irq-work handler to force the
system to notice when the RCU read-side critical section actually ends.
That end won't happen until interrupts are enabled at the soonest.
In some kernels, such as those booted with rcutree.use_softirq=y, the
irq-work handler is used unconditionally.
The per-CPU rcu_data structure's ->defer_qs_iw_pending field is
updated by the irq-work handler and is both read and updated by
rcu_read_unlock_special(). This resulted in the following KCSAN splat:
------------------------------------------------------------------------
BUG: KCSAN: data-race in rcu_preempt_deferred_qs_handler / rcu_read_unlock_special
read to 0xffff96b95f42d8d8 of 1 bytes by task 90 on cpu 8:
rcu_read_unlock_special+0x175/0x260
__rcu_read_unlock+0x92/0xa0
rt_spin_unlock+0x9b/0xc0
__local_bh_enable+0x10d/0x170
__local_bh_enable_ip+0xfb/0x150
rcu_do_batch+0x595/0xc40
rcu_cpu_kthread+0x4e9/0x830
smpboot_thread_fn+0x24d/0x3b0
kthread+0x3bd/0x410
ret_from_fork+0x35/0x40
ret_from_fork_asm+0x1a/0x30
write to 0xffff96b95f42d8d8 of 1 bytes by task 88 on cpu 8:
rcu_preempt_deferred_qs_handler+0x1e/0x30
irq_work_single+0xaf/0x160
run_irq_workd+0x91/0xc0
smpboot_thread_fn+0x24d/0x3b0
kthread+0x3bd/0x410
ret_from_fork+0x35/0x40
ret_from_fork_asm+0x1a/0x30
no locks held by irq_work/8/88.
irq event stamp: 200272
hardirqs last enabled at (200272): [<ffffffffb0f56121>] finish_task_switch+0x131/0x320
hardirqs last disabled at (200271): [<ffffffffb25c7859>] __schedule+0x129/0xd70
softirqs last enabled at (0): [<ffffffffb0ee093f>] copy_process+0x4df/0x1cc0
softirqs last disabled at (0): [<0000000000000000>] 0x0
------------------------------------------------------------------------
The problem is that irq-work handlers run with interrupts enabled, which
means that rcu_preempt_deferred_qs_handler() could be interrupted,
and that interrupt handler might contain an RCU read-side critical
section, which might invoke rcu_read_unlock_special(). In the strict
KCSAN mode of operation used by RCU, this constitutes a data race on
the ->defer_qs_iw_pending field.
This commit therefore disables interrupts across the portion of the
rcu_preempt_deferred_qs_handler() that updates the ->defer_qs_iw_pending
field. This suffices because this handler is not a fast path.
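For reference, the pattern being applied can be sketched as a stand-alone
example (illustrative only; demo_data and demo_handler are made-up names,
not RCU code). Because irq-work handlers run with interrupts enabled, a
field that task context manipulates with interrupts disabled must also be
updated with interrupts disabled in the handler:

#include <linux/irq_work.h>
#include <linux/irqflags.h>
#include <linux/kernel.h>
#include <linux/percpu.h>

struct demo_data {
	struct irq_work work;
	bool pending;
};
static DEFINE_PER_CPU(struct demo_data, demo_data);

static void demo_handler(struct irq_work *iwp)
{
	struct demo_data *d = container_of(iwp, struct demo_data, work);
	unsigned long flags;

	/* Close the window in which an interrupt could also touch ->pending. */
	local_irq_save(flags);
	d->pending = false;
	local_irq_restore(flags);
}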
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Neeraj Upadhyay (AMD) <neeraj.upadhyay@kernel.org>
---
kernel/rcu/tree_plugin.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 0b0f56f6abc8..a91b2322a0cd 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -624,10 +624,13 @@ notrace void rcu_preempt_deferred_qs(struct task_struct *t)
*/
static void rcu_preempt_deferred_qs_handler(struct irq_work *iwp)
{
+ unsigned long flags;
struct rcu_data *rdp;
rdp = container_of(iwp, struct rcu_data, defer_qs_iw);
+ local_irq_save(flags);
rdp->defer_qs_iw_pending = false;
+ local_irq_restore(flags);
}
/*
--
2.40.1
* [PATCH rcu 3/5] rcu: Enable rcu_normal_wake_from_gp on small systems
2025-07-09 10:41 [PATCH rcu 0/5] RCU changes for v6.17 neeraj.upadhyay
2025-07-09 10:41 ` [PATCH rcu 1/5] rcu: Robustify rcu_is_cpu_rrupt_from_idle() neeraj.upadhyay
2025-07-09 10:41 ` [PATCH rcu 2/5] rcu: Protect ->defer_qs_iw_pending from data race neeraj.upadhyay
@ 2025-07-09 10:41 ` neeraj.upadhyay
2025-07-09 11:36 ` Frederic Weisbecker
2025-07-09 10:41 ` [PATCH rcu 4/5] Documentation/kernel-parameters: Update rcu_normal_wake_from_gp doc neeraj.upadhyay
2025-07-09 10:41 ` [PATCH rcu 5/5] rcu: Fix rcu_read_unlock() deadloop due to IRQ work neeraj.upadhyay
4 siblings, 1 reply; 11+ messages in thread
From: neeraj.upadhyay @ 2025-07-09 10:41 UTC (permalink / raw)
To: rcu
Cc: linux-kernel, paulmck, joelagnelf, frederic, boqun.feng, urezki,
rostedt, mathieu.desnoyers, jiangshanlai, qiang.zhang1211,
neeraj.iitr10, neeraj.upadhyay, Neeraj Upadhyay (AMD)
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
Automatically enable the rcu_normal_wake_from_gp parameter on
systems with a small number of CPUs. The activation threshold
is set to 16 CPUs.
This helps reduce the latency of the normal synchronize_rcu() API
by waking up GP waiters earlier and decoupling synchronize_rcu()
callers from regular callback handling.
A benchmark running 64 parallel jobs (on a system with 64 CPUs), each
invoking synchronize_rcu(), demonstrates a notable latency reduction
with the setting enabled.
Latency distribution (microseconds):
<default>
0 - 9999 : 1
10000 - 19999 : 4
20000 - 29999 : 399
30000 - 39999 : 3197
40000 - 49999 : 10428
50000 - 59999 : 17363
60000 - 69999 : 15529
70000 - 79999 : 9287
80000 - 89999 : 4249
90000 - 99999 : 1915
100000 - 109999 : 922
110000 - 119999 : 390
120000 - 129999 : 187
...
<default>
<rcu_normal_wake_from_gp>
0 - 9999 : 1
10000 - 19999 : 234
20000 - 29999 : 6678
30000 - 39999 : 33463
40000 - 49999 : 20669
50000 - 59999 : 2766
60000 - 69999 : 183
...
<rcu_normal_wake_from_gp>
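The histograms above were gathered with a measurement loop of roughly the
following shape (a sketch only; the actual benchmark harness is not part
of this series, and record_latency_us() is a hypothetical histogram
helper):

#include <linux/kthread.h>
#include <linux/ktime.h>
#include <linux/rcupdate.h>

void record_latency_us(s64 us);	/* hypothetical: bucket the sample as above */

static int sync_latency_thread(void *unused)
{
	while (!kthread_should_stop()) {
		ktime_t start = ktime_get();

		synchronize_rcu();
		record_latency_us(ktime_us_delta(ktime_get(), start));
	}
	return 0;
}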
Reviewed-by: Joel Fernandes <joelagnelf@nvidia.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Neeraj Upadhyay (AMD) <neeraj.upadhyay@kernel.org>
---
kernel/rcu/tree.c | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f83bbb408895..8c22db759978 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1632,8 +1632,10 @@ static void rcu_sr_put_wait_head(struct llist_node *node)
atomic_set_release(&sr_wn->inuse, 0);
}
-/* Disabled by default. */
-static int rcu_normal_wake_from_gp;
+/* Enable rcu_normal_wake_from_gp automatically on small systems. */
+#define WAKE_FROM_GP_CPU_THRESHOLD 16
+
+static int rcu_normal_wake_from_gp = -1;
module_param(rcu_normal_wake_from_gp, int, 0644);
static struct workqueue_struct *sync_wq;
@@ -3250,7 +3252,7 @@ static void synchronize_rcu_normal(void)
trace_rcu_sr_normal(rcu_state.name, &rs.head, TPS("request"));
- if (!READ_ONCE(rcu_normal_wake_from_gp)) {
+ if (READ_ONCE(rcu_normal_wake_from_gp) < 1) {
wait_rcu_gp(call_rcu_hurry);
goto trace_complete_out;
}
@@ -4854,6 +4856,12 @@ void __init rcu_init(void)
sync_wq = alloc_workqueue("sync_wq", WQ_MEM_RECLAIM, 0);
WARN_ON(!sync_wq);
+ /* Respect if explicitly disabled via a boot parameter. */
+ if (rcu_normal_wake_from_gp < 0) {
+ if (num_possible_cpus() <= WAKE_FROM_GP_CPU_THRESHOLD)
+ rcu_normal_wake_from_gp = 1;
+ }
+
/* Fill in default value for rcutree.qovld boot parameter. */
/* -After- the rcu_node ->lock fields are initialized! */
if (qovld < 0)
--
2.40.1
* [PATCH rcu 4/5] Documentation/kernel-parameters: Update rcu_normal_wake_from_gp doc
2025-07-09 10:41 [PATCH rcu 0/5] RCU changes for v6.17 neeraj.upadhyay
` (2 preceding siblings ...)
2025-07-09 10:41 ` [PATCH rcu 3/5] rcu: Enable rcu_normal_wake_from_gp on small systems neeraj.upadhyay
@ 2025-07-09 10:41 ` neeraj.upadhyay
2025-07-09 11:37 ` Frederic Weisbecker
2025-07-09 10:41 ` [PATCH rcu 5/5] rcu: Fix rcu_read_unlock() deadloop due to IRQ work neeraj.upadhyay
4 siblings, 1 reply; 11+ messages in thread
From: neeraj.upadhyay @ 2025-07-09 10:41 UTC (permalink / raw)
To: rcu
Cc: linux-kernel, paulmck, joelagnelf, frederic, boqun.feng, urezki,
rostedt, mathieu.desnoyers, jiangshanlai, qiang.zhang1211,
neeraj.iitr10, neeraj.upadhyay, Neeraj Upadhyay (AMD)
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
Update the documentation for the rcu_normal_wake_from_gp parameter.
Reviewed-by: Joel Fernandes <joelagnelf@nvidia.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Neeraj Upadhyay (AMD) <neeraj.upadhyay@kernel.org>
---
Documentation/admin-guide/kernel-parameters.txt | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f1f2c0874da9..f7e4bee2b823 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5485,7 +5485,8 @@
echo 1 > /sys/module/rcutree/parameters/rcu_normal_wake_from_gp
or pass a boot parameter "rcutree.rcu_normal_wake_from_gp=1"
- Default is 0.
+ Default is 1 if num_possible_cpus() <= 16, unless it is explicitly
+ disabled by passing 0 via the boot parameter.
rcuscale.gp_async= [KNL]
Measure performance of asynchronous
--
2.40.1
* [PATCH rcu 5/5] rcu: Fix rcu_read_unlock() deadloop due to IRQ work
2025-07-09 10:41 [PATCH rcu 0/5] RCU changes for v6.17 neeraj.upadhyay
` (3 preceding siblings ...)
2025-07-09 10:41 ` [PATCH rcu 4/5] Documentation/kernel-parameters: Update rcu_normal_wake_from_gp doc neeraj.upadhyay
@ 2025-07-09 10:41 ` neeraj.upadhyay
2025-07-09 12:48 ` Frederic Weisbecker
4 siblings, 1 reply; 11+ messages in thread
From: neeraj.upadhyay @ 2025-07-09 10:41 UTC (permalink / raw)
To: rcu
Cc: linux-kernel, paulmck, joelagnelf, frederic, boqun.feng, urezki,
rostedt, mathieu.desnoyers, jiangshanlai, qiang.zhang1211,
neeraj.iitr10, neeraj.upadhyay, Neeraj Upadhyay (AMD),
Xiongfeng Wang, Qi Xi
From: Joel Fernandes <joelagnelf@nvidia.com>
If rcu_read_unlock_special() is invoked during irq_exit(), the system
can lock up if an IPI is issued, because the IPI itself triggers the
irq_exit() path again, causing a recursive lockup.
This is precisely what Xiongfeng found when invoking a BPF program on
the trace_tick_stop() tracepoint, as shown in the trace below. Fix this
by managing the irq_work state correctly.
irq_exit()
__irq_exit_rcu()
/* in_hardirq() returns false after this */
preempt_count_sub(HARDIRQ_OFFSET)
tick_irq_exit()
tick_nohz_irq_exit()
tick_nohz_stop_sched_tick()
trace_tick_stop() /* a bpf prog is hooked on this trace point */
__bpf_trace_tick_stop()
bpf_trace_run2()
rcu_read_unlock_special()
/* will send a IPI to itself */
irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
A simple reproducer can also be obtained by doing the following in
tick_irq_exit(). It will hang on boot without the patch:
static inline void tick_irq_exit(void)
{
+ rcu_read_lock();
+ WRITE_ONCE(current->rcu_read_unlock_special.b.need_qs, true);
+ rcu_read_unlock();
+
Reported-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Closes: https://lore.kernel.org/all/9acd5f9f-6732-7701-6880-4b51190aa070@huawei.com/
Tested-by: Qi Xi <xiqi2@huawei.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
Reviewed-by: "Paul E. McKenney" <paulmck@kernel.org>
Signed-off-by: Neeraj Upadhyay (AMD) <neeraj.upadhyay@kernel.org>
---
kernel/rcu/tree.h | 11 ++++++++++-
kernel/rcu/tree_plugin.h | 23 +++++++++++++++++++----
2 files changed, 29 insertions(+), 5 deletions(-)
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 3830c19cf2f6..f8f612269e6e 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -174,6 +174,15 @@ struct rcu_snap_record {
unsigned long jiffies; /* Track jiffies value */
};
+/*
+ * The IRQ work (deferred_qs_iw) is used by RCU to get scheduler's attention.
+ * It can be in one of the following states:
+ * - DEFER_QS_IDLE: An IRQ work was never scheduled.
+ * - DEFER_QS_PENDING: An IRQ work was scheduled but never run.
+ */
+#define DEFER_QS_IDLE 0
+#define DEFER_QS_PENDING 1
+
/* Per-CPU data for read-copy update. */
struct rcu_data {
/* 1) quiescent-state and grace-period handling : */
@@ -192,7 +201,7 @@ struct rcu_data {
/* during and after the last grace */
/* period it is aware of. */
struct irq_work defer_qs_iw; /* Obtain later scheduler attention. */
- bool defer_qs_iw_pending; /* Scheduler attention pending? */
+ int defer_qs_iw_pending; /* Scheduler attention pending? */
struct work_struct strict_work; /* Schedule readers for strict GPs. */
/* 2) batch handling */
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index a91b2322a0cd..aec584812574 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -486,13 +486,16 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
struct rcu_node *rnp;
union rcu_special special;
+ rdp = this_cpu_ptr(&rcu_data);
+ if (rdp->defer_qs_iw_pending == DEFER_QS_PENDING)
+ rdp->defer_qs_iw_pending = DEFER_QS_IDLE;
+
/*
* If RCU core is waiting for this CPU to exit its critical section,
* report the fact that it has exited. Because irqs are disabled,
* t->rcu_read_unlock_special cannot change.
*/
special = t->rcu_read_unlock_special;
- rdp = this_cpu_ptr(&rcu_data);
if (!special.s && !rdp->cpu_no_qs.b.exp) {
local_irq_restore(flags);
return;
@@ -629,7 +632,18 @@ static void rcu_preempt_deferred_qs_handler(struct irq_work *iwp)
rdp = container_of(iwp, struct rcu_data, defer_qs_iw);
local_irq_save(flags);
- rdp->defer_qs_iw_pending = false;
+
+ /*
+ * Requeue the IRQ work on next unlock in following situation:
+ * 1. rcu_read_unlock() queues IRQ work (state -> DEFER_QS_PENDING)
+ * 2. CPU enters new rcu_read_lock()
+ * 3. IRQ work runs but cannot report QS due to rcu_preempt_depth() > 0
+ * 4. rcu_read_unlock() does not re-queue work (state still PENDING)
+ * 5. Deferred QS reporting does not happen.
+ */
+ if (rcu_preempt_depth() > 0)
+ WRITE_ONCE(rdp->defer_qs_iw_pending, DEFER_QS_IDLE);
+
local_irq_restore(flags);
}
@@ -676,7 +690,8 @@ static void rcu_read_unlock_special(struct task_struct *t)
set_tsk_need_resched(current);
set_preempt_need_resched();
if (IS_ENABLED(CONFIG_IRQ_WORK) && irqs_were_disabled &&
- expboost && !rdp->defer_qs_iw_pending && cpu_online(rdp->cpu)) {
+ expboost && rdp->defer_qs_iw_pending != DEFER_QS_PENDING &&
+ cpu_online(rdp->cpu)) {
// Get scheduler to re-evaluate and call hooks.
// If !IRQ_WORK, FQS scan will eventually IPI.
if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) &&
@@ -686,7 +701,7 @@ static void rcu_read_unlock_special(struct task_struct *t)
else
init_irq_work(&rdp->defer_qs_iw,
rcu_preempt_deferred_qs_handler);
- rdp->defer_qs_iw_pending = true;
+ rdp->defer_qs_iw_pending = DEFER_QS_PENDING;
irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
}
}
--
2.40.1
* Re: [PATCH rcu 2/5] rcu: Protect ->defer_qs_iw_pending from data race
2025-07-09 10:41 ` [PATCH rcu 2/5] rcu: Protect ->defer_qs_iw_pending from data race neeraj.upadhyay
@ 2025-07-09 11:27 ` Frederic Weisbecker
0 siblings, 0 replies; 11+ messages in thread
From: Frederic Weisbecker @ 2025-07-09 11:27 UTC (permalink / raw)
To: neeraj.upadhyay
Cc: rcu, linux-kernel, paulmck, joelagnelf, boqun.feng, urezki,
rostedt, mathieu.desnoyers, jiangshanlai, qiang.zhang1211,
neeraj.iitr10, neeraj.upadhyay
On Wed, Jul 09, 2025 at 04:11:15PM +0530, neeraj.upadhyay@kernel.org wrote:
> From: "Paul E. McKenney" <paulmck@kernel.org>
>
> On kernels built with CONFIG_IRQ_WORK=y, when rcu_read_unlock() is
> invoked within an interrupts-disabled region of code [1], it will invoke
> rcu_read_unlock_special(), which uses an irq-work handler to force the
> system to notice when the RCU read-side critical section actually ends.
> That end won't happen until interrupts are enabled at the soonest.
>
> In some kernels, such as those booted with rcutree.use_softirq=y, the
> irq-work handler is used unconditionally.
>
> The per-CPU rcu_data structure's ->defer_qs_iw_pending field is
> updated by the irq-work handler and is both read and updated by
> rcu_read_unlock_special(). This resulted in the following KCSAN splat:
>
> ------------------------------------------------------------------------
>
> BUG: KCSAN: data-race in rcu_preempt_deferred_qs_handler / rcu_read_unlock_special
>
> read to 0xffff96b95f42d8d8 of 1 bytes by task 90 on cpu 8:
> rcu_read_unlock_special+0x175/0x260
> __rcu_read_unlock+0x92/0xa0
> rt_spin_unlock+0x9b/0xc0
> __local_bh_enable+0x10d/0x170
> __local_bh_enable_ip+0xfb/0x150
> rcu_do_batch+0x595/0xc40
> rcu_cpu_kthread+0x4e9/0x830
> smpboot_thread_fn+0x24d/0x3b0
> kthread+0x3bd/0x410
> ret_from_fork+0x35/0x40
> ret_from_fork_asm+0x1a/0x30
>
> write to 0xffff96b95f42d8d8 of 1 bytes by task 88 on cpu 8:
> rcu_preempt_deferred_qs_handler+0x1e/0x30
> irq_work_single+0xaf/0x160
> run_irq_workd+0x91/0xc0
> smpboot_thread_fn+0x24d/0x3b0
> kthread+0x3bd/0x410
> ret_from_fork+0x35/0x40
> ret_from_fork_asm+0x1a/0x30
>
> no locks held by irq_work/8/88.
> irq event stamp: 200272
> hardirqs last enabled at (200272): [<ffffffffb0f56121>] finish_task_switch+0x131/0x320
> hardirqs last disabled at (200271): [<ffffffffb25c7859>] __schedule+0x129/0xd70
> softirqs last enabled at (0): [<ffffffffb0ee093f>] copy_process+0x4df/0x1cc0
> softirqs last disabled at (0): [<0000000000000000>] 0x0
>
> ------------------------------------------------------------------------
>
> The problem is that irq-work handlers run with interrupts enabled, which
> means that rcu_preempt_deferred_qs_handler() could be interrupted,
> and that interrupt handler might contain an RCU read-side critical
> section, which might invoke rcu_read_unlock_special(). In the strict
> KCSAN mode of operation used by RCU, this constitutes a data race on
> the ->defer_qs_iw_pending field.
>
> This commit therefore disables interrupts across the portion of the
> rcu_preempt_deferred_qs_handler() that updates the ->defer_qs_iw_pending
> field. This suffices because this handler is not a fast path.
>
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> Signed-off-by: Neeraj Upadhyay (AMD) <neeraj.upadhyay@kernel.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH rcu 3/5] rcu: Enable rcu_normal_wake_from_gp on small systems
2025-07-09 10:41 ` [PATCH rcu 3/5] rcu: Enable rcu_normal_wake_from_gp on small systems neeraj.upadhyay
@ 2025-07-09 11:36 ` Frederic Weisbecker
0 siblings, 0 replies; 11+ messages in thread
From: Frederic Weisbecker @ 2025-07-09 11:36 UTC (permalink / raw)
To: neeraj.upadhyay
Cc: rcu, linux-kernel, paulmck, joelagnelf, boqun.feng, urezki,
rostedt, mathieu.desnoyers, jiangshanlai, qiang.zhang1211,
neeraj.iitr10, neeraj.upadhyay
On Wed, Jul 09, 2025 at 04:11:16PM +0530, neeraj.upadhyay@kernel.org wrote:
> From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
>
> Automatically enable the rcu_normal_wake_from_gp parameter on
> systems with a small number of CPUs. The activation threshold
> is set to 16 CPUs.
>
> This helps reduce the latency of the normal synchronize_rcu() API
> by waking up GP waiters earlier and decoupling synchronize_rcu()
> callers from regular callback handling.
>
> A benchmark running 64 parallel jobs (on a system with 64 CPUs), each
> invoking synchronize_rcu(), demonstrates a notable latency reduction
> with the setting enabled.
>
> Latency distribution (microseconds):
>
> <default>
> 0 - 9999 : 1
> 10000 - 19999 : 4
> 20000 - 29999 : 399
> 30000 - 39999 : 3197
> 40000 - 49999 : 10428
> 50000 - 59999 : 17363
> 60000 - 69999 : 15529
> 70000 - 79999 : 9287
> 80000 - 89999 : 4249
> 90000 - 99999 : 1915
> 100000 - 109999 : 922
> 110000 - 119999 : 390
> 120000 - 129999 : 187
> ...
> <default>
>
> <rcu_normal_wake_from_gp>
> 0 - 9999 : 1
> 10000 - 19999 : 234
> 20000 - 29999 : 6678
> 30000 - 39999 : 33463
> 40000 - 49999 : 20669
> 50000 - 59999 : 2766
> 60000 - 69999 : 183
> ...
> <rcu_normal_wake_from_gp>
>
> Reviewed-by: Joel Fernandes <joelagnelf@nvidia.com>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> Signed-off-by: Neeraj Upadhyay (AMD) <neeraj.upadhyay@kernel.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH rcu 4/5] Documentation/kernel-parameters: Update rcu_normal_wake_from_gp doc
2025-07-09 10:41 ` [PATCH rcu 4/5] Documentation/kernel-parameters: Update rcu_normal_wake_from_gp doc neeraj.upadhyay
@ 2025-07-09 11:37 ` Frederic Weisbecker
0 siblings, 0 replies; 11+ messages in thread
From: Frederic Weisbecker @ 2025-07-09 11:37 UTC (permalink / raw)
To: neeraj.upadhyay
Cc: rcu, linux-kernel, paulmck, joelagnelf, boqun.feng, urezki,
rostedt, mathieu.desnoyers, jiangshanlai, qiang.zhang1211,
neeraj.iitr10, neeraj.upadhyay
On Wed, Jul 09, 2025 at 04:11:17PM +0530, neeraj.upadhyay@kernel.org wrote:
> From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
>
> Update the documentation for the rcu_normal_wake_from_gp parameter.
>
> Reviewed-by: Joel Fernandes <joelagnelf@nvidia.com>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> Signed-off-by: Neeraj Upadhyay (AMD) <neeraj.upadhyay@kernel.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH rcu 5/5] rcu: Fix rcu_read_unlock() deadloop due to IRQ work
2025-07-09 10:41 ` [PATCH rcu 5/5] rcu: Fix rcu_read_unlock() deadloop due to IRQ work neeraj.upadhyay
@ 2025-07-09 12:48 ` Frederic Weisbecker
2025-07-10 19:41 ` Joel Fernandes
0 siblings, 1 reply; 11+ messages in thread
From: Frederic Weisbecker @ 2025-07-09 12:48 UTC (permalink / raw)
To: neeraj.upadhyay
Cc: rcu, linux-kernel, paulmck, joelagnelf, boqun.feng, urezki,
rostedt, mathieu.desnoyers, jiangshanlai, qiang.zhang1211,
neeraj.iitr10, neeraj.upadhyay, Xiongfeng Wang, Qi Xi
On Wed, Jul 09, 2025 at 04:11:18PM +0530, neeraj.upadhyay@kernel.org wrote:
> From: Joel Fernandes <joelagnelf@nvidia.com>
>
> If rcu_read_unlock_special() is invoked during irq_exit(), the system
> can lock up if an IPI is issued, because the IPI itself triggers the
> irq_exit() path again, causing a recursive lockup.
>
> This is precisely what Xiongfeng found when invoking a BPF program on
> the trace_tick_stop() tracepoint, as shown in the trace below. Fix this
> by managing the irq_work state correctly.
>
> irq_exit()
> __irq_exit_rcu()
> /* in_hardirq() returns false after this */
> preempt_count_sub(HARDIRQ_OFFSET)
> tick_irq_exit()
> tick_nohz_irq_exit()
> tick_nohz_stop_sched_tick()
> trace_tick_stop() /* a bpf prog is hooked on this trace point */
> __bpf_trace_tick_stop()
> bpf_trace_run2()
> rcu_read_unlock_special()
> /* will send a IPI to itself */
> irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
>
> A simple reproducer can also be obtained by doing the following in
> tick_irq_exit(). It will hang on boot without the patch:
>
> static inline void tick_irq_exit(void)
> {
> + rcu_read_lock();
> + WRITE_ONCE(current->rcu_read_unlock_special.b.need_qs, true);
> + rcu_read_unlock();
> +
>
> Reported-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
> Closes: https://lore.kernel.org/all/9acd5f9f-6732-7701-6880-4b51190aa070@huawei.com/
> Tested-by: Qi Xi <xiqi2@huawei.com>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
> Reviewed-by: "Paul E. McKenney" <paulmck@kernel.org>
> Signed-off-by: Neeraj Upadhyay (AMD) <neeraj.upadhyay@kernel.org>
> ---
> kernel/rcu/tree.h | 11 ++++++++++-
> kernel/rcu/tree_plugin.h | 23 +++++++++++++++++++----
> 2 files changed, 29 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> index 3830c19cf2f6..f8f612269e6e 100644
> --- a/kernel/rcu/tree.h
> +++ b/kernel/rcu/tree.h
> @@ -174,6 +174,15 @@ struct rcu_snap_record {
> unsigned long jiffies; /* Track jiffies value */
> };
>
> +/*
> + * The IRQ work (deferred_qs_iw) is used by RCU to get scheduler's attention.
> + * It can be in one of the following states:
> + * - DEFER_QS_IDLE: An IRQ work was never scheduled.
> + * - DEFER_QS_PENDING: An IRQ work was scheduled but never run.
Never as in "never ever" ? :-)
I'm not a native speaker, so you guys tell me, but isn't it less
ambiguous:
- DEFER_QS_IDLE: The IRQ work isn't pending
- DEFER_QS_PENDING: The IRQ work is pending but hasn't run yet
But then the names are already self-explanatory. And then keeping
it as a boolean should be enough too. Why do we need these two
states?
> + */
> +#define DEFER_QS_IDLE 0
> +#define DEFER_QS_PENDING 1
> +
> /* Per-CPU data for read-copy update. */
> struct rcu_data {
> /* 1) quiescent-state and grace-period handling : */
> @@ -192,7 +201,7 @@ struct rcu_data {
> /* during and after the last grace */
> /* period it is aware of. */
> struct irq_work defer_qs_iw; /* Obtain later scheduler attention. */
> - bool defer_qs_iw_pending; /* Scheduler attention pending? */
> + int defer_qs_iw_pending; /* Scheduler attention pending? */
> struct work_struct strict_work; /* Schedule readers for strict GPs. */
>
> /* 2) batch handling */
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index a91b2322a0cd..aec584812574 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -486,13 +486,16 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
> struct rcu_node *rnp;
> union rcu_special special;
>
> + rdp = this_cpu_ptr(&rcu_data);
> + if (rdp->defer_qs_iw_pending == DEFER_QS_PENDING)
> + rdp->defer_qs_iw_pending = DEFER_QS_IDLE;
> +
> /*
> * If RCU core is waiting for this CPU to exit its critical section,
> * report the fact that it has exited. Because irqs are disabled,
> * t->rcu_read_unlock_special cannot change.
> */
> special = t->rcu_read_unlock_special;
> - rdp = this_cpu_ptr(&rcu_data);
> if (!special.s && !rdp->cpu_no_qs.b.exp) {
> local_irq_restore(flags);
> return;
> @@ -629,7 +632,18 @@ static void rcu_preempt_deferred_qs_handler(struct irq_work *iwp)
>
> rdp = container_of(iwp, struct rcu_data, defer_qs_iw);
> local_irq_save(flags);
> - rdp->defer_qs_iw_pending = false;
> +
> + /*
> + * Requeue the IRQ work on next unlock in following situation:
s/in/to avoid/
> + * 1. rcu_read_unlock() queues IRQ work (state -> DEFER_QS_PENDING)
> + * 2. CPU enters new rcu_read_lock()
> + * 3. IRQ work runs but cannot report QS due to rcu_preempt_depth() > 0
> + * 4. rcu_read_unlock() does not re-queue work (state still PENDING)
> + * 5. Deferred QS reporting does not happen.
> + */
> + if (rcu_preempt_depth() > 0)
> + WRITE_ONCE(rdp->defer_qs_iw_pending, DEFER_QS_IDLE);
Why WRITE_ONCE() ? Also this lacks the explanation telling why it's not
unconditionally setting back to DEFER_QS_IDLE (ie: just a few words about that
irq_work() recursion thing), because I'm sure my short memory will suggest to
make it unconditional for simplification within two years (being optimistic) :-)
Thanks.
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH rcu 5/5] rcu: Fix rcu_read_unlock() deadloop due to IRQ work
2025-07-09 12:48 ` Frederic Weisbecker
@ 2025-07-10 19:41 ` Joel Fernandes
0 siblings, 0 replies; 11+ messages in thread
From: Joel Fernandes @ 2025-07-10 19:41 UTC (permalink / raw)
To: Frederic Weisbecker, neeraj.upadhyay
Cc: rcu, linux-kernel, paulmck, boqun.feng, urezki, rostedt,
mathieu.desnoyers, jiangshanlai, qiang.zhang1211, neeraj.iitr10,
neeraj.upadhyay, Xiongfeng Wang, Qi Xi
On 7/9/2025 8:48 AM, Frederic Weisbecker wrote:
> On Wed, Jul 09, 2025 at 04:11:18PM +0530, neeraj.upadhyay@kernel.org wrote:
>> From: Joel Fernandes <joelagnelf@nvidia.com>
>>
>> If rcu_read_unlock_special() is invoked during irq_exit(), the system
>> can lock up if an IPI is issued, because the IPI itself triggers the
>> irq_exit() path again, causing a recursive lockup.
>>
>> This is precisely what Xiongfeng found when invoking a BPF program on
>> the trace_tick_stop() tracepoint, as shown in the trace below. Fix this
>> by managing the irq_work state correctly.
>>
>> irq_exit()
>> __irq_exit_rcu()
>> /* in_hardirq() returns false after this */
>> preempt_count_sub(HARDIRQ_OFFSET)
>> tick_irq_exit()
>> tick_nohz_irq_exit()
>> tick_nohz_stop_sched_tick()
>> trace_tick_stop() /* a bpf prog is hooked on this trace point */
>> __bpf_trace_tick_stop()
>> bpf_trace_run2()
>> rcu_read_unlock_special()
>> /* will send a IPI to itself */
>> irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
>>
>> A simple reproducer can also be obtained by doing the following in
>> tick_irq_exit(). It will hang on boot without the patch:
>>
>> static inline void tick_irq_exit(void)
>> {
>> + rcu_read_lock();
>> + WRITE_ONCE(current->rcu_read_unlock_special.b.need_qs, true);
>> + rcu_read_unlock();
>> +
>>
>> Reported-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
>> Closes: https://lore.kernel.org/all/9acd5f9f-6732-7701-6880-4b51190aa070@huawei.com/
>> Tested-by: Qi Xi <xiqi2@huawei.com>
>> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
>> Reviewed-by: "Paul E. McKenney" <paulmck@kernel.org>
>> Signed-off-by: Neeraj Upadhyay (AMD) <neeraj.upadhyay@kernel.org>
>> ---
>> kernel/rcu/tree.h | 11 ++++++++++-
>> kernel/rcu/tree_plugin.h | 23 +++++++++++++++++++----
>> 2 files changed, 29 insertions(+), 5 deletions(-)
>>
>> diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
>> index 3830c19cf2f6..f8f612269e6e 100644
>> --- a/kernel/rcu/tree.h
>> +++ b/kernel/rcu/tree.h
>> @@ -174,6 +174,15 @@ struct rcu_snap_record {
>> unsigned long jiffies; /* Track jiffies value */
>> };
>>
>> +/*
>> + * The IRQ work (deferred_qs_iw) is used by RCU to get scheduler's attention.
>> + * It can be in one of the following states:
>> + * - DEFER_QS_IDLE: An IRQ work was never scheduled.
>> + * - DEFER_QS_PENDING: An IRQ work was scheduled but never run.
>
> Never as in "never ever" ? :-)
You're right, this comment needs an update. It should be "An IRQ work was
scheduled, but a deferred QS hasn't been reported yet".
>
> I'm not a native speaker, so you guys tell me, but isn't it less
> ambiguous:
>
> - DEFER_QS_IDLE: The IRQ work isn't pending
> - DEFER_QS_PENDING: The IRQ work is pending but hasn't run yet
It actually could have run but we could have been in an RCU critical section at
the time.
> But then the name are already self-explanatory. And then keeping
> it as a boolean should be enough too. Why do we need these two
> states?
It's just more readable, IMO. That's why I kept it like that.
>> + */
>> +#define DEFER_QS_IDLE 0
>> +#define DEFER_QS_PENDING 1
>> +
>> /* Per-CPU data for read-copy update. */
>> struct rcu_data {
>> /* 1) quiescent-state and grace-period handling : */
>> @@ -192,7 +201,7 @@ struct rcu_data {
>> /* during and after the last grace */
>> /* period it is aware of. */
>> struct irq_work defer_qs_iw; /* Obtain later scheduler attention. */
>> - bool defer_qs_iw_pending; /* Scheduler attention pending? */
>> + int defer_qs_iw_pending; /* Scheduler attention pending? */
>> struct work_struct strict_work; /* Schedule readers for strict GPs. */
>>
>> /* 2) batch handling */
>> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
>> index a91b2322a0cd..aec584812574 100644
>> --- a/kernel/rcu/tree_plugin.h
>> +++ b/kernel/rcu/tree_plugin.h
>> @@ -486,13 +486,16 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
>> struct rcu_node *rnp;
>> union rcu_special special;
>>
>> + rdp = this_cpu_ptr(&rcu_data);
>> + if (rdp->defer_qs_iw_pending == DEFER_QS_PENDING)
>> + rdp->defer_qs_iw_pending = DEFER_QS_IDLE;
>> +
>> /*
>> * If RCU core is waiting for this CPU to exit its critical section,
>> * report the fact that it has exited. Because irqs are disabled,
>> * t->rcu_read_unlock_special cannot change.
>> */
>> special = t->rcu_read_unlock_special;
>> - rdp = this_cpu_ptr(&rcu_data);
>> if (!special.s && !rdp->cpu_no_qs.b.exp) {
>> local_irq_restore(flags);
>> return;
>> @@ -629,7 +632,18 @@ static void rcu_preempt_deferred_qs_handler(struct irq_work *iwp)
>>
>> rdp = container_of(iwp, struct rcu_data, defer_qs_iw);
>> local_irq_save(flags);
>> - rdp->defer_qs_iw_pending = false;
>> +
>> + /*
>> + * Requeue the IRQ work on next unlock in following situation:
>
> s/in/to avoid/
>
Sure.
>> + * 1. rcu_read_unlock() queues IRQ work (state -> DEFER_QS_PENDING)
>> + * 2. CPU enters new rcu_read_lock()
>> + * 3. IRQ work runs but cannot report QS due to rcu_preempt_depth() > 0
>> + * 4. rcu_read_unlock() does not re-queue work (state still PENDING)
>> + * 5. Deferred QS reporting does not happen.
>> + */
>> + if (rcu_preempt_depth() > 0)
>> + WRITE_ONCE(rdp->defer_qs_iw_pending, DEFER_QS_IDLE);
>
> Why WRITE_ONCE() ? Also this lacks the explanation telling why it's not
> unconditionally setting back to DEFER_QS_IDLE (ie: just a few words about that
> irq_work() recursion thing), because I'm sure my short memory will suggest to
> make it unconditional for simplification within two years (being optimistic) :-)
The previous code was unconditionally setting it back so we would recurse before
the deferred QS report happened. I can add more comments about that. But
unfortunately, there is some hang that Neeraj and Paul are reporting so I'll go
work on that first.
thanks for the review,
- Joel
Thread overview: 11+ messages
2025-07-09 10:41 [PATCH rcu 0/5] RCU changes for v6.17 neeraj.upadhyay
2025-07-09 10:41 ` [PATCH rcu 1/5] rcu: Robustify rcu_is_cpu_rrupt_from_idle() neeraj.upadhyay
2025-07-09 10:41 ` [PATCH rcu 2/5] rcu: Protect ->defer_qs_iw_pending from data race neeraj.upadhyay
2025-07-09 11:27 ` Frederic Weisbecker
2025-07-09 10:41 ` [PATCH rcu 3/5] rcu: Enable rcu_normal_wake_from_gp on small systems neeraj.upadhyay
2025-07-09 11:36 ` Frederic Weisbecker
2025-07-09 10:41 ` [PATCH rcu 4/5] Documentation/kernel-parameters: Update rcu_normal_wake_from_gp doc neeraj.upadhyay
2025-07-09 11:37 ` Frederic Weisbecker
2025-07-09 10:41 ` [PATCH rcu 5/5] rcu: Fix rcu_read_unlock() deadloop due to IRQ work neeraj.upadhyay
2025-07-09 12:48 ` Frederic Weisbecker
2025-07-10 19:41 ` Joel Fernandes