* [PATCH -next v1 00/12] Candidate patches for the v7.2 merge window
@ 2026-05-11 17:54 Uladzislau Rezki (Sony)
2026-05-11 17:54 ` [PATCH -next v1 01/12] rcutorture: Fully test lazy RCU Uladzislau Rezki (Sony)
` (11 more replies)
0 siblings, 12 replies; 13+ messages in thread
From: Uladzislau Rezki (Sony) @ 2026-05-11 17:54 UTC (permalink / raw)
To: Paul E . McKenney, Joel Fernandes, Frederic Weisbecker,
Boqun Feng
Cc: RCU, LKML, Uladzislau Rezki
Hello!
The git tree with candidate patches for Linux-7.2 can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/rcu/linux.git
(tag: rcu-7.2-v1-20260511)
Please note that there is one patch, from Zqiang, that should be
checked.
Paul E. McKenney (10):
rcutorture: Fully test lazy RCU
torture: Add torture_sched_set_normal() for user-specified nice values
torture: Improve kvm-series.sh header comment
torture: Allow "norm" abbreviation for "normal"
srcu: Don't queue workqueue handlers to never-online CPUs
srcu: Fix kerneldoc header comment typo in srcu_down_read_fast()
checkpatch: Undeprecate rcu_read_lock_trace() and
rcu_read_unlock_trace()
rcu: Simplify rcu_do_batch() by applying clamp()
rcu: Simplify param_set_next_fqs_jiffies() by applying clamp_val()
rcu: Document rcu_access_pointer() feeding into cmpxchg()
Uladzislau Rezki (Sony) (1):
rcu: Latch normal synchronize_rcu() path on flood
Zqiang (1):
rcu-tasks: Fix possible boot-time tests failed for the
call_rcu_tasks()
.../admin-guide/kernel-parameters.txt | 10 ++--
include/linux/rcupdate.h | 12 ++--
include/linux/srcu.h | 2 +-
include/linux/torture.h | 1 +
kernel/rcu/rcutorture.c | 21 ++++++-
kernel/rcu/srcutree.c | 12 ++--
kernel/rcu/tasks.h | 3 +-
kernel/rcu/tree.c | 56 ++++++++++++++-----
kernel/torture.c | 16 ++++++
scripts/checkpatch.pl | 5 +-
.../selftests/rcutorture/bin/kvm-series.sh | 11 ++--
.../selftests/rcutorture/bin/torture.sh | 2 +-
12 files changed, 107 insertions(+), 44 deletions(-)
--
2.47.3
* [PATCH -next v1 01/12] rcutorture: Fully test lazy RCU
From: Uladzislau Rezki (Sony) @ 2026-05-11 17:54 UTC (permalink / raw)
To: Paul E . McKenney, Joel Fernandes, Frederic Weisbecker,
Boqun Feng
Cc: RCU, LKML, Uladzislau Rezki, Saravana Kannan
From: "Paul E. McKenney" <paulmck@kernel.org>
Currently, rcutorture bypasses lazy RCU by using call_rcu_hurry().
This works, avoiding the dreaded rtort_pipe_count WARN(), but fails to
fully test lazy RCU. The rtort_pipe_count WARN() splats because lazy RCU
could delay the start of an RCU grace period for a full stutter period,
which defaults to only three seconds.
This commit therefore reverts the call_rcu_hurry() instances
back to call_rcu(), but, in kernels built with CONFIG_RCU_LAZY=y,
queues a workqueue handler just before the call to stutter_wait() in
rcu_torture_writer(). This workqueue handler invokes rcu_barrier(),
which motivates any lingering lazy callbacks, thus avoiding the splat.
Questions for review:
1. Should we avoid queueing work for RCU implementations not
supporting lazy callbacks?
2. Should we avoid queueing work in kernels built with
CONFIG_RCU_LAZY=y, but that were not booted with the
rcutree.enable_rcu_lazy kernel boot parameter set? (Note that
this requires some ugliness to access this parameter, and must
also handle Tiny RCU.)
3. Does the rcu_torture_ops structure need a ->call_hurry() field,
and if so, why? If not, why not?
4. Your additional questions here!
Reported-by: Saravana Kannan <saravanak@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
kernel/rcu/rcutorture.c | 21 ++++++++++++++++++---
1 file changed, 18 insertions(+), 3 deletions(-)
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 5f2848b828dc..91ba3160ba6a 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -572,7 +572,7 @@ static unsigned long rcu_no_completed(void)
static void rcu_torture_deferred_free(struct rcu_torture *p)
{
- call_rcu_hurry(&p->rtort_rcu, rcu_torture_cb);
+ call_rcu(&p->rtort_rcu, rcu_torture_cb);
}
static void rcu_sync_torture_init(void)
@@ -619,7 +619,7 @@ static struct rcu_torture_ops rcu_ops = {
.poll_gp_state_exp = poll_state_synchronize_rcu,
.cond_sync_exp = cond_synchronize_rcu_expedited,
.cond_sync_exp_full = cond_synchronize_rcu_expedited_full,
- .call = call_rcu_hurry,
+ .call = call_rcu,
.cb_barrier = rcu_barrier,
.fqs = rcu_force_quiescent_state,
.gp_kthread_dbg = show_rcu_gp_kthreads,
@@ -1145,7 +1145,7 @@ static void rcu_tasks_torture_deferred_free(struct rcu_torture *p)
static void synchronize_rcu_mult_test(void)
{
- synchronize_rcu_mult(call_rcu_tasks, call_rcu_hurry);
+ synchronize_rcu_mult(call_rcu_tasks, call_rcu);
}
static struct rcu_torture_ops tasks_ops = {
@@ -1631,6 +1631,17 @@ static void do_rtws_sync(struct torture_random_state *trsp, void (*sync)(void))
cpus_read_unlock();
}
+/*
+ * Do an rcu_barrier() to motivate lazy callbacks during a stutter
+ * pause. Without this, we can get false-positive rtort_pipe_count
+ * splats.
+ */
+static void rcu_torture_writer_work(struct work_struct *work)
+{
+ if (cur_ops->cb_barrier)
+ cur_ops->cb_barrier();
+}
+
/*
* RCU torture writer kthread. Repeatedly substitutes a new structure
* for that pointed to by rcu_torture_current, freeing the old structure
@@ -1651,6 +1662,7 @@ rcu_torture_writer(void *arg)
int i;
int idx;
unsigned long j;
+ struct work_struct lazy_work;
int oldnice = task_nice(current);
struct rcu_gp_oldstate *rgo = NULL;
int rgo_size = 0;
@@ -1667,6 +1679,7 @@ rcu_torture_writer(void *arg)
stallsdone += (stall_cpu_holdoff + stall_gp_kthread + stall_cpu + 60) *
HZ * (stall_cpu_repeat + 1);
VERBOSE_TOROUT_STRING("rcu_torture_writer task started");
+ INIT_WORK_ONSTACK(&lazy_work, rcu_torture_writer_work);
if (!can_expedite)
pr_alert("%s" TORTURE_FLAG
" GP expediting controlled from boot/sysfs for %s.\n",
@@ -1895,6 +1908,8 @@ rcu_torture_writer(void *arg)
!rcu_gp_is_normal();
}
rcu_torture_writer_state = RTWS_STUTTER;
+ if (IS_ENABLED(CONFIG_RCU_LAZY))
+ queue_work(system_percpu_wq, &lazy_work);
stutter_waited = stutter_wait("rcu_torture_writer");
if (stutter_waited &&
!atomic_read(&rcu_fwd_cb_nodelay) &&
--
2.47.3
* [PATCH -next v1 02/12] torture: Add torture_sched_set_normal() for user-specified nice values
From: Uladzislau Rezki (Sony) @ 2026-05-11 17:54 UTC (permalink / raw)
To: Paul E . McKenney, Joel Fernandes, Frederic Weisbecker,
Boqun Feng
Cc: RCU, LKML, Uladzislau Rezki
From: "Paul E. McKenney" <paulmck@kernel.org>
This new torture_sched_set_normal() function clamps the nice value at
the MIN_NICE..MAX_NICE limits, splatting if these limits are exceeded.
It then invokes sched_set_normal() to set the new value. This prevents
more difficult-to-debug failures within the scheduler.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
include/linux/torture.h | 1 +
kernel/torture.c | 16 ++++++++++++++++
2 files changed, 17 insertions(+)
diff --git a/include/linux/torture.h b/include/linux/torture.h
index 1b59056c3b18..c9b47d138302 100644
--- a/include/linux/torture.h
+++ b/include/linux/torture.h
@@ -129,6 +129,7 @@ void _torture_stop_kthread(char *m, struct task_struct **tp);
#else
#define torture_preempt_schedule() do { } while (0)
#endif
+void torture_sched_set_normal(struct task_struct *t, int nice);
#if IS_ENABLED(CONFIG_RCU_TORTURE_TEST) || IS_MODULE(CONFIG_RCU_TORTURE_TEST) || IS_ENABLED(CONFIG_LOCK_TORTURE_TEST) || IS_MODULE(CONFIG_LOCK_TORTURE_TEST)
long torture_sched_setaffinity(pid_t pid, const struct cpumask *in_mask, bool dowarn);
diff --git a/kernel/torture.c b/kernel/torture.c
index 62c1ac777694..77cb3589b19f 100644
--- a/kernel/torture.c
+++ b/kernel/torture.c
@@ -972,3 +972,19 @@ void _torture_stop_kthread(char *m, struct task_struct **tp)
*tp = NULL;
}
EXPORT_SYMBOL_GPL(_torture_stop_kthread);
+
+/*
+ * Set the specified task's niceness value, saturating at limits.
+ * Saturating noisily, but saturating.
+ */
+void torture_sched_set_normal(struct task_struct *t, int nice)
+{
+ int realnice = nice;
+
+ if (WARN_ON_ONCE(realnice > MAX_NICE))
+ realnice = MAX_NICE;
+ if (WARN_ON_ONCE(realnice < MIN_NICE))
+ realnice = MIN_NICE;
+ sched_set_normal(t, realnice);
+}
+EXPORT_SYMBOL_GPL(torture_sched_set_normal);
--
2.47.3
* [PATCH -next v1 03/12] torture: Improve kvm-series.sh header comment
From: Uladzislau Rezki (Sony) @ 2026-05-11 17:54 UTC (permalink / raw)
To: Paul E . McKenney, Joel Fernandes, Frederic Weisbecker,
Boqun Feng
Cc: RCU, LKML, Uladzislau Rezki
From: "Paul E. McKenney" <paulmck@kernel.org>
The constraints on the arguments to kvm-series.sh are easy to forget,
so this commit adds examples in the header comment.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
tools/testing/selftests/rcutorture/bin/kvm-series.sh | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-series.sh b/tools/testing/selftests/rcutorture/bin/kvm-series.sh
index c4ee5f910931..be9412538fb8 100755
--- a/tools/testing/selftests/rcutorture/bin/kvm-series.sh
+++ b/tools/testing/selftests/rcutorture/bin/kvm-series.sh
@@ -1,12 +1,13 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0+
#
-# Usage: kvm-series.sh config-list commit-id-list [ kvm.sh parameters ]
+# Usage: kvm-series.sh config-list commit-id-range [ kvm.sh parameters ]
#
-# Tests the specified list of unadorned configs ("TREE01 SRCU-P" but not
-# "CFLIST" or "3*TRACE01") and an indication of a set of commits to test,
-# then runs each commit through the specified list of commits using kvm.sh.
-# The runs are grouped into a -series/config/commit directory tree.
+# Tests the specified list of unadorned configs ("TREE01 SRCU-P" but
+# not "CFLIST" or "3*TRACE01") and an indication of a range of commits
+# ("v7.0-rc1..rcu/dev", but not "cd0ce7bab0408 ff74db28df623 17c52d7b31a1f")
+# to test, then runs each commit through the specified list of commits using
+# kvm.sh. The runs are grouped into a -series/config/commit directory tree.
# Each run defaults to a duration of one minute.
#
# Run in top-level Linux source directory. Please note that this is in
--
2.47.3
* [PATCH -next v1 04/12] torture: Allow "norm" abbreviation for "normal"
From: Uladzislau Rezki (Sony) @ 2026-05-11 17:54 UTC (permalink / raw)
To: Paul E . McKenney, Joel Fernandes, Frederic Weisbecker,
Boqun Feng
Cc: RCU, LKML, Uladzislau Rezki
From: "Paul E. McKenney" <paulmck@kernel.org>
This adds "--do-norm", "--do-no-norm", and "--no-norm" synonyms for the
"--do-normal" group of torture.sh command-line arguments.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
tools/testing/selftests/rcutorture/bin/torture.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/rcutorture/bin/torture.sh b/tools/testing/selftests/rcutorture/bin/torture.sh
index a33ba109ef0b..f0083891ee81 100755
--- a/tools/testing/selftests/rcutorture/bin/torture.sh
+++ b/tools/testing/selftests/rcutorture/bin/torture.sh
@@ -184,7 +184,7 @@ do
do_clocksourcewd=no
do_srcu_lockdep=no
;;
- --do-normal|--do-no-normal|--no-normal)
+ --do-normal|--do-norm|--do-no-normal|--do-no-norm|--no-normal|--no-norm)
do_normal=`doyesno "$1" --do-normal`
explicit_normal=yes
;;
--
2.47.3
* [PATCH -next v1 05/12] srcu: Don't queue workqueue handlers to never-online CPUs
From: Uladzislau Rezki (Sony) @ 2026-05-11 17:54 UTC (permalink / raw)
To: Paul E . McKenney, Joel Fernandes, Frederic Weisbecker,
Boqun Feng
Cc: RCU, LKML, Uladzislau Rezki, Vasily Gorbik, Samir,
Shrikanth Hegde, Tejun Heo
From: "Paul E. McKenney" <paulmck@kernel.org>
While an srcu_struct structure is in the midst of switching from CPU-0
to all-CPUs state, it can attempt to invoke callbacks for CPUs that
have never been online. Worse yet, it can attempt to invoke callbacks
for CPUs that never will be online, even including imaginary CPUs not in
cpu_possible_mask. This can cause hangs on s390, which is not set up to
deal with workqueue handlers being scheduled on such CPUs. This commit
therefore causes Tree SRCU to refrain from queueing workqueue handlers
on CPUs that have not yet (and might never) come online.
Because callbacks are not invoked on CPUs that have not been
online, it is an error to invoke call_srcu(), synchronize_srcu(), or
synchronize_srcu_expedited() on a CPU that is not yet fully online.
However, it turns out to be less code to redirect the callbacks
from too-early invocations of call_srcu() than to warn about such
invocations. This commit therefore also redirects callbacks queued on
not-yet-fully-online CPUs to the boot CPU.
Reported-by: Vasily Gorbik <gor@linux.ibm.com>
Fixes: 61bbcfb50514 ("srcu: Push srcu_node allocation to GP when non-preemptible")
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Tested-by: Vasily Gorbik <gor@linux.ibm.com>
Tested-by: Samir <samir@linux.ibm.com>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
kernel/rcu/srcutree.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 0d01cd8c4b4a..7c2f7cc131f7 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -897,11 +897,9 @@ static void srcu_schedule_cbs_snp(struct srcu_struct *ssp, struct srcu_node *snp
{
int cpu;
- for (cpu = snp->grplo; cpu <= snp->grphi; cpu++) {
- if (!(mask & (1UL << (cpu - snp->grplo))))
- continue;
- srcu_schedule_cbs_sdp(per_cpu_ptr(ssp->sda, cpu), delay);
- }
+ for (cpu = snp->grplo; cpu <= snp->grphi; cpu++)
+ if ((mask & (1UL << (cpu - snp->grplo))) && rcu_cpu_beenfullyonline(cpu))
+ srcu_schedule_cbs_sdp(per_cpu_ptr(ssp->sda, cpu), delay);
}
/*
@@ -1322,7 +1320,9 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
*/
idx = __srcu_read_lock_nmisafe(ssp);
ss_state = smp_load_acquire(&ssp->srcu_sup->srcu_size_state);
- if (ss_state < SRCU_SIZE_WAIT_CALL)
+ // If !rcu_cpu_beenfullyonline(), interrupts are still disabled,
+ // so no migration is possible in either direction from this CPU.
+ if (ss_state < SRCU_SIZE_WAIT_CALL || !rcu_cpu_beenfullyonline(raw_smp_processor_id()))
sdp = per_cpu_ptr(ssp->sda, get_boot_cpu_id());
else
sdp = raw_cpu_ptr(ssp->sda);
--
2.47.3
* [PATCH -next v1 06/12] srcu: Fix kerneldoc header comment typo in srcu_down_read_fast()
From: Uladzislau Rezki (Sony) @ 2026-05-11 17:54 UTC (permalink / raw)
To: Paul E . McKenney, Joel Fernandes, Frederic Weisbecker,
Boqun Feng
Cc: RCU, LKML, Uladzislau Rezki
From: "Paul E. McKenney" <paulmck@kernel.org>
s/srcu_read_lock_safe()/srcu_read_lock_fast_updown()/, there being no
such thing as srcu_read_lock_safe().
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
include/linux/srcu.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 81b1938512d5..a54ce9e808b9 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -397,7 +397,7 @@ static inline struct srcu_ctr __percpu *srcu_read_lock_fast_notrace(struct srcu_
*
* The same srcu_struct may be used concurrently by srcu_down_read_fast()
* and srcu_read_lock_fast(). However, the same definition/initialization
- * requirements called out for srcu_read_lock_safe() apply.
+ * requirements called out for srcu_read_lock_fast_updown() apply.
*/
static inline struct srcu_ctr __percpu *srcu_down_read_fast(struct srcu_struct *ssp) __acquires_shared(ssp)
{
--
2.47.3
* [PATCH -next v1 07/12] checkpatch: Undeprecate rcu_read_lock_trace() and rcu_read_unlock_trace()
From: Uladzislau Rezki (Sony) @ 2026-05-11 17:54 UTC (permalink / raw)
To: Paul E . McKenney, Joel Fernandes, Frederic Weisbecker,
Boqun Feng
Cc: RCU, LKML, Uladzislau Rezki, Puranjay Mohan, Andy Whitcroft,
Joe Perches, Dwaipayan Ray, Lukas Bulwahn
From: "Paul E. McKenney" <paulmck@kernel.org>
It turns out that there are BPF use cases that rely on nesting RCU
Tasks Trace readers. These use cases are well-served by the old
rcu_read_lock_trace() and rcu_read_unlock_trace() functions that maintain
a nesting counter in the task_struct structure. But these use cases incur
a performance penalty when using the shiny new rcu_read_lock_tasks_trace()
and rcu_read_unlock_tasks_trace() functions, which nest in the same way
that SRCU does.
This means that rcu_read_lock_trace() and rcu_read_unlock_trace()
will be with us for some time. Therefore, remove the checkpatch.pl
deprecation.
Also, the rcu_read_lock_tasks_trace() and rcu_read_unlock_tasks_trace()
functions are intended for use only by BPF. Therefore, add them to
the list of functions that checkpatch complains about outside of BPF
(and of course, RCU).
Reported-by: Puranjay Mohan <puranjay@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Joe Perches <joe@perches.com>
Cc: Dwaipayan Ray <dwaipayanray1@gmail.com>
Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
scripts/checkpatch.pl | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 0492d6afc9a1..cc5bbd70cb84 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -865,8 +865,6 @@ our %deprecated_apis = (
"DEFINE_IDR" => "DEFINE_XARRAY",
"idr_init" => "xa_init",
"idr_init_base" => "xa_init_flags",
- "rcu_read_lock_trace" => "rcu_read_lock_tasks_trace",
- "rcu_read_unlock_trace" => "rcu_read_unlock_tasks_trace",
);
#Create a search pattern for all these strings to speed up a loop below
@@ -7596,12 +7594,15 @@ sub process {
# Complain about RCU Tasks Trace used outside of BPF (and of course, RCU).
our $rcu_trace_funcs = qr{(?x:
+ rcu_read_lock_tasks_trace |
rcu_read_lock_trace |
rcu_read_lock_trace_held |
rcu_read_unlock_trace |
+ rcu_read_unlock_tasks_trace |
call_rcu_tasks_trace |
synchronize_rcu_tasks_trace |
rcu_barrier_tasks_trace |
+ rcu_tasks_trace_expedite_current |
rcu_request_urgent_qs_task
)};
our $rcu_trace_paths = qr{(?x:
--
2.47.3
* [PATCH -next v1 08/12] rcu: Simplify rcu_do_batch() by applying clamp()
From: Uladzislau Rezki (Sony) @ 2026-05-11 17:54 UTC (permalink / raw)
To: Paul E . McKenney, Joel Fernandes, Frederic Weisbecker,
Boqun Feng
Cc: RCU, LKML, Uladzislau Rezki
From: "Paul E. McKenney" <paulmck@kernel.org>
This commit replaces a nested ?: sequence with clamp(). This does not
reduce the number of lines of code, but it does simplify the line that
it modifies.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
kernel/rcu/tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 55df6d37145e..e46a5124c3eb 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2584,7 +2584,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
const long npj = NSEC_PER_SEC / HZ;
long rrn = READ_ONCE(rcu_resched_ns);
- rrn = rrn < NSEC_PER_MSEC ? NSEC_PER_MSEC : rrn > NSEC_PER_SEC ? NSEC_PER_SEC : rrn;
+ rrn = clamp(rrn, NSEC_PER_MSEC, NSEC_PER_SEC);
tlimit = local_clock() + rrn;
jlimit = jiffies + (rrn + npj + 1) / npj;
jlimit_check = true;
--
2.47.3
* [PATCH -next v1 09/12] rcu: Simplify param_set_next_fqs_jiffies() by applying clamp_val()
From: Uladzislau Rezki (Sony) @ 2026-05-11 17:54 UTC (permalink / raw)
To: Paul E . McKenney, Joel Fernandes, Frederic Weisbecker,
Boqun Feng
Cc: RCU, LKML, Uladzislau Rezki
From: "Paul E. McKenney" <paulmck@kernel.org>
This commit replaces a nested ?: sequence with clamp_val(). This does
not reduce the number of lines of code, but it does simplify the line
that it modifies.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
kernel/rcu/tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index e46a5124c3eb..09f0cef5014c 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -492,7 +492,7 @@ static int param_set_next_fqs_jiffies(const char *val, const struct kernel_param
int ret = kstrtoul(val, 0, &j);
if (!ret) {
- WRITE_ONCE(*(ulong *)kp->arg, (j > HZ) ? HZ : (j ?: 1));
+ WRITE_ONCE(*(ulong *)kp->arg, clamp_val(j, 1, HZ));
adjust_jiffies_till_sched_qs();
}
return ret;
--
2.47.3
* [PATCH -next v1 10/12] rcu: Document rcu_access_pointer() feeding into cmpxchg()
From: Uladzislau Rezki (Sony) @ 2026-05-11 17:54 UTC (permalink / raw)
To: Paul E . McKenney, Joel Fernandes, Frederic Weisbecker,
Boqun Feng
Cc: RCU, LKML, Uladzislau Rezki, Maxim Mikityanskiy
From: "Paul E. McKenney" <paulmck@kernel.org>
This commit documents the rcu_access_pointer() use case for fetching the
old value of an RCU-protected pointer within a lockless updater for use
by an atomic cmpxchg() operation.
Reported-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
include/linux/rcupdate.h | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index bfa765132de8..5e95acc33989 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -592,11 +592,13 @@ context_unsafe( \
* lockdep checks for being in an RCU read-side critical section. This is
* useful when the value of this pointer is accessed, but the pointer is
* not dereferenced, for example, when testing an RCU-protected pointer
- * against NULL. Although rcu_access_pointer() may also be used in cases
- * where update-side locks prevent the value of the pointer from changing,
- * you should instead use rcu_dereference_protected() for this use case.
- * Within an RCU read-side critical section, there is little reason to
- * use rcu_access_pointer().
+ * against NULL. Within an RCU read-side critical section, there is little
+ * reason to use rcu_access_pointer(). Although rcu_access_pointer() may
+ * also be used in cases where update-side locks prevent the value of the
+ * pointer from changing, you should instead use rcu_dereference_protected()
+ * for this use case. It is also permissible to use rcu_access_pointer()
+ * within lockless updaters to obtain the old value for an atomic operation,
+ * for example, for cmpxchg().
*
* It is usually best to test the rcu_access_pointer() return value
* directly in order to avoid accidental dereferences being introduced
--
2.47.3
* [PATCH -next v1 11/12] rcu: Latch normal synchronize_rcu() path on flood
From: Uladzislau Rezki (Sony) @ 2026-05-11 17:54 UTC (permalink / raw)
To: Paul E . McKenney, Joel Fernandes, Frederic Weisbecker,
Boqun Feng
Cc: RCU, LKML, Uladzislau Rezki, Samir M
Currently, rcu_normal_wake_from_gp is enabled by default only
on small systems (<= 16 CPUs) or when a user explicitly
enables it.
Introduce an adaptive latching mechanism:
* Track the number of in-flight synchronize_rcu() requests
using a new rcu_sr_normal_count counter;
* If the count reaches RCU_SR_NORMAL_LATCH_THR (64), set
rcu_sr_normal_latched, diverting new requests onto the
scaled wait_rcu_gp() path;
* Clear the latch only when the pending requests are fully
drained (nr == 0);
* Enable rcu_normal_wake_from_gp by default for all systems,
relying on this dynamic throttling instead of static CPU
limits.
Testing (synthetic flood workload):
* Kernel version: 6.19.0-rc6
* Number of CPUs: 1536
* 60K concurrent synchronize_rcu() calls
Perf(cycles, system-wide):
total cycles: 932020263832
rcu_sr_normal_add_req(): 2650282811 cycles(~0.28%)
Perf report excerpt:
0.01% 0.01% sync_test/... [k] rcu_sr_normal_add_req
Measured overhead of rcu_sr_normal_add_req() remained ~0.28%
of total CPU cycles in this synthetic stress test.
Tested-by: Samir M <samir@linux.ibm.com>
Suggested-by: Joel Fernandes <joelagnelf@nvidia.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
.../admin-guide/kernel-parameters.txt | 10 ++--
kernel/rcu/tree.c | 52 ++++++++++++++-----
2 files changed, 44 insertions(+), 18 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 4d0f545fb3ec..d5db2e85d551 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5862,13 +5862,13 @@ Kernel parameters
use a call_rcu[_hurry]() path. Please note, this is for a
normal grace period.
- How to enable it:
+ How to disable it:
- echo 1 > /sys/module/rcutree/parameters/rcu_normal_wake_from_gp
- or pass a boot parameter "rcutree.rcu_normal_wake_from_gp=1"
+ echo 0 > /sys/module/rcutree/parameters/rcu_normal_wake_from_gp
+ or pass a boot parameter "rcutree.rcu_normal_wake_from_gp=0"
- Default is 1 if num_possible_cpus() <= 16 and it is not explicitly
- disabled by the boot parameter passing 0.
+ Default is 1 if it is not explicitly disabled by the boot parameter
+ passing 0.
rcuscale.gp_async= [KNL]
Measure performance of asynchronous
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 09f0cef5014c..94274330d1db 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1632,17 +1632,21 @@ static void rcu_sr_put_wait_head(struct llist_node *node)
atomic_set_release(&sr_wn->inuse, 0);
}
-/* Enable rcu_normal_wake_from_gp automatically on small systems. */
-#define WAKE_FROM_GP_CPU_THRESHOLD 16
-
-static int rcu_normal_wake_from_gp = -1;
+static int rcu_normal_wake_from_gp = 1;
module_param(rcu_normal_wake_from_gp, int, 0644);
static struct workqueue_struct *sync_wq;
+#define RCU_SR_NORMAL_LATCH_THR 64
+
+/* Number of in-flight synchronize_rcu() calls queued on srs_next. */
+static atomic_long_t rcu_sr_normal_count;
+static int rcu_sr_normal_latched; /* 0/1 */
+
static void rcu_sr_normal_complete(struct llist_node *node)
{
struct rcu_synchronize *rs = container_of(
(struct rcu_head *) node, struct rcu_synchronize, head);
+ long nr;
WARN_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) &&
!poll_state_synchronize_rcu_full(&rs->oldstate),
@@ -1650,6 +1654,15 @@ static void rcu_sr_normal_complete(struct llist_node *node)
/* Finally. */
complete(&rs->completion);
+ nr = atomic_long_dec_return(&rcu_sr_normal_count);
+ WARN_ON_ONCE(nr < 0);
+
+ /*
+ * Unlatch: switch back to normal path when fully
+ * drained and if it has been latched.
+ */
+ if (nr == 0)
+ (void)cmpxchg(&rcu_sr_normal_latched, 1, 0);
}
static void rcu_sr_normal_gp_cleanup_work(struct work_struct *work)
@@ -1795,6 +1808,24 @@ static bool rcu_sr_normal_gp_init(void)
static void rcu_sr_normal_add_req(struct rcu_synchronize *rs)
{
+ /*
+ * Increment before publish to avoid a complete
+ * vs enqueue race on latch.
+ */
+ long nr = atomic_long_inc_return(&rcu_sr_normal_count);
+
+ /*
+ * Latch when threshold is reached. Checking for an exact match
+ * restricts cmpxchg() to a single context.
+ *
+ * This latch is intentionally relaxed and best-effort. Concurrent
+ * set/clear can race and temporarily lose the latch, which is OK
+ * because it only selects between the fast and fallback paths.
+ */
+ if (nr == RCU_SR_NORMAL_LATCH_THR)
+ (void)cmpxchg(&rcu_sr_normal_latched, 0, 1);
+
+ /* Publish for the GP kthread/worker. */
llist_add((struct llist_node *) &rs->head, &rcu_state.srs_next);
}
@@ -3278,14 +3309,15 @@ static void synchronize_rcu_normal(void)
{
struct rcu_synchronize rs;
+ init_rcu_head_on_stack(&rs.head);
trace_rcu_sr_normal(rcu_state.name, &rs.head, TPS("request"));
- if (READ_ONCE(rcu_normal_wake_from_gp) < 1) {
+ if (READ_ONCE(rcu_normal_wake_from_gp) < 1 ||
+ READ_ONCE(rcu_sr_normal_latched)) {
wait_rcu_gp(call_rcu_hurry);
goto trace_complete_out;
}
- init_rcu_head_on_stack(&rs.head);
init_completion(&rs.completion);
/*
@@ -3302,10 +3334,10 @@ static void synchronize_rcu_normal(void)
/* Now we can wait. */
wait_for_completion(&rs.completion);
- destroy_rcu_head_on_stack(&rs.head);
trace_complete_out:
trace_rcu_sr_normal(rcu_state.name, &rs.head, TPS("complete"));
+ destroy_rcu_head_on_stack(&rs.head);
}
/**
@@ -4904,12 +4936,6 @@ void __init rcu_init(void)
sync_wq = alloc_workqueue("sync_wq", WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
WARN_ON(!sync_wq);
- /* Respect if explicitly disabled via a boot parameter. */
- if (rcu_normal_wake_from_gp < 0) {
- if (num_possible_cpus() <= WAKE_FROM_GP_CPU_THRESHOLD)
- rcu_normal_wake_from_gp = 1;
- }
-
/* Fill in default value for rcutree.qovld boot parameter. */
/* -After- the rcu_node ->lock fields are initialized! */
if (qovld < 0)
--
2.47.3
^ permalink raw reply related [flat|nested] 13+ messages in thread
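[Editorial aside: the latch in the patch above combines an atomic in-flight counter with a single-shot cmpxchg(). A minimal userspace sketch of that pattern follows, using C11 atomics in place of the kernel's atomic_long_t and cmpxchg(); the names (sr_add_req, sr_complete, LATCH_THR) are illustrative stand-ins, not the kernel symbols.]

```c
#include <stdatomic.h>
#include <stdbool.h>

#define LATCH_THR 64

static atomic_long sr_count;    /* in-flight requests */
static atomic_int  sr_latched;  /* 0 = fast path, 1 = fallback path */

/* Enqueue side: count first, then latch. Testing for an exact match
 * with the threshold means only one context attempts the cmpxchg. */
static void sr_add_req(void)
{
	long nr = atomic_fetch_add(&sr_count, 1) + 1;

	if (nr == LATCH_THR) {
		int expected = 0;

		atomic_compare_exchange_strong(&sr_latched, &expected, 1);
	}
}

/* Completion side: unlatch only once fully drained. */
static void sr_complete(void)
{
	long nr = atomic_fetch_sub(&sr_count, 1) - 1;

	if (nr == 0) {
		int expected = 1;

		atomic_compare_exchange_strong(&sr_latched, &expected, 0);
	}
}

static bool sr_use_fallback(void)
{
	return atomic_load(&sr_latched) != 0;
}
```

As in the patch, the latch is best-effort: a concurrent drain racing with a flood can momentarily lose it, which only affects which of the two correct paths is taken.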
* [PATCH -next v1 12/12] rcu-tasks: Fix possible boot-time tests failed for the call_rcu_tasks()
2026-05-11 17:54 [PATCH -next v1 00/12] Candidate patches for the v7.2 merge window Uladzislau Rezki (Sony)
` (10 preceding siblings ...)
2026-05-11 17:54 ` [PATCH -next v1 11/12] rcu: Latch normal synchronize_rcu() path on flood Uladzislau Rezki (Sony)
@ 2026-05-11 17:54 ` Uladzislau Rezki (Sony)
11 siblings, 0 replies; 13+ messages in thread
From: Uladzislau Rezki (Sony) @ 2026-05-11 17:54 UTC (permalink / raw)
To: Paul E . McKenney, Joel Fernandes, Frederic Weisbecker,
Boqun Feng
Cc: RCU, LKML, Uladzislau Rezki, Zqiang
From: Zqiang <qiang.zhang@linux.dev>
The following scenario can cause the call_rcu_tasks() boot-time
tests to fail:
CPU0 CPU1
rcu_init_tasks_generic()
->rcu_tasks_initiate_self_tests()
->call_rcu_tasks_trace(&tests[1].rh, test_rcu_tasks_callback)
->call_rcu_tasks_generic()
->havekthread = smp_load_acquire(&rtp->kthread_ptr)
"The havekthread is false"
....
rcu_tasks_kthread()
->smp_store_release(&rtp->kthread_ptr, current)
->rcu_tasks_one_gp()
->rcuwait_wait_event()
->rcu_tasks_need_gpcb()
->for (cpu = 0; cpu < dequeue_limit; cpu++)
->rcu_segcblist_n_cbs(&rtpcp->cblist) == 0
->schedule()
->raw_spin_trylock_rcu_node()
->needwake = (func == wakeme_after_rcu) ||
(rcu_segcblist_n_cbs(&rtpcp->cblist) == rcu_task_lazy_lim)
"the rcu_task_lazy_lim default value is 32 and the
func pointer is test_rcu_tasks_callback, so needwake
is false."
->if (havekthread && !needwake && !timer_pending(&rtpcp->lazy_timer))
"havekthread is false, so this branch is not taken."
....
"needwake is false, so rtp_irq_work cannot be queued,
even though rtp->kthread_ptr already exists at this point."
->if (needwake && READ_ONCE(rtp->kthread_ptr))
->irq_work_queue(&rtpcp->rtp_irq_work)
In the above scenario, if call_rcu_tasks() is never invoked again,
rcu_tasks_kthread never gets a chance to be woken up,
test_rcu_tasks_callback() is never called, and the boot-time tests
fail. This commit therefore also checks the havekthread variable:
if it is false and rtpcp->cblist is empty, needwake is set to true,
so that if rtp->kthread_ptr already exists, rtpcp->rtp_irq_work is
queued to wake up rcu_tasks_kthread.
Signed-off-by: Zqiang <qiang.zhang@linux.dev>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
kernel/rcu/tasks.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 48f0d803c8e2..f4da5fad70f5 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -373,7 +373,8 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
// Queuing callbacks before initialization not yet supported.
if (WARN_ON_ONCE(!rcu_segcblist_is_enabled(&rtpcp->cblist)))
rcu_segcblist_init(&rtpcp->cblist);
- needwake = (func == wakeme_after_rcu) ||
+ needwake = (!havekthread && rcu_segcblist_empty(&rtpcp->cblist)) ||
+ (func == wakeme_after_rcu) ||
(rcu_segcblist_n_cbs(&rtpcp->cblist) == rcu_task_lazy_lim);
if (havekthread && !needwake && !timer_pending(&rtpcp->lazy_timer)) {
if (rtp->lazy_jiffies)
--
2.47.3
^ permalink raw reply related [flat|nested] 13+ messages in thread
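[Editorial aside: the one-line change above can be isolated into a small userspace sketch of the wakeup decision. The struct and helper names below (cblist, needwake_old, needwake_new) are hypothetical stand-ins for the kernel state this predicate reads, and LAZY_LIM mirrors the rcu_task_lazy_lim default of 32.]

```c
#include <stdbool.h>

#define LAZY_LIM 32

/* Hypothetical stand-in for the per-CPU callback list. */
struct cblist {
	long n_cbs;
};

static bool cblist_empty(const struct cblist *cl)
{
	return cl->n_cbs == 0;
}

/* Old condition: with no kthread yet, an empty list, and a callback
 * other than wakeme_after_rcu, needwake stays false, so nothing ever
 * queues the irq_work once the kthread does appear. */
static bool needwake_old(bool is_wakeme, const struct cblist *cl)
{
	return is_wakeme || cl->n_cbs == LAZY_LIM;
}

/* Fixed condition: a callback enqueued on an empty list before the
 * kthread exists also requests a wakeup. */
static bool needwake_new(bool havekthread, bool is_wakeme,
			 const struct cblist *cl)
{
	return (!havekthread && cblist_empty(cl)) ||
	       is_wakeme || cl->n_cbs == LAZY_LIM;
}
```

With havekthread false and an empty list, needwake_old() returns false (the lost-wakeup case from the scenario above), while needwake_new() returns true.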
end of thread, other threads:[~2026-05-11 17:55 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-11 17:54 [PATCH -next v1 00/12] Candidate patches for the v7.2 merge window Uladzislau Rezki (Sony)
2026-05-11 17:54 ` [PATCH -next v1 01/12] rcutorture: Fully test lazy RCU Uladzislau Rezki (Sony)
2026-05-11 17:54 ` [PATCH -next v1 02/12] torture: Add torture_sched_set_normal() for user-specified nice values Uladzislau Rezki (Sony)
2026-05-11 17:54 ` [PATCH -next v1 03/12] torture: Improve kvm-series.sh header comment Uladzislau Rezki (Sony)
2026-05-11 17:54 ` [PATCH -next v1 04/12] torture: Allow "norm" abbreviation for "normal" Uladzislau Rezki (Sony)
2026-05-11 17:54 ` [PATCH -next v1 05/12] srcu: Don't queue workqueue handlers to never-online CPUs Uladzislau Rezki (Sony)
2026-05-11 17:54 ` [PATCH -next v1 06/12] srcu: Fix kerneldoc header comment typo in srcu_down_read_fast() Uladzislau Rezki (Sony)
2026-05-11 17:54 ` [PATCH -next v1 07/12] checkpatch: Undeprecate rcu_read_lock_trace() and rcu_read_unlock_trace() Uladzislau Rezki (Sony)
2026-05-11 17:54 ` [PATCH -next v1 08/12] rcu: Simplify rcu_do_batch() by applying clamp() Uladzislau Rezki (Sony)
2026-05-11 17:54 ` [PATCH -next v1 09/12] rcu: Simplify param_set_next_fqs_jiffies() by applying clamp_val() Uladzislau Rezki (Sony)
2026-05-11 17:54 ` [PATCH -next v1 10/12] rcu: Document rcu_access_pointer() feeding into cmpxchg() Uladzislau Rezki (Sony)
2026-05-11 17:54 ` [PATCH -next v1 11/12] rcu: Latch normal synchronize_rcu() path on flood Uladzislau Rezki (Sony)
2026-05-11 17:54 ` [PATCH -next v1 12/12] rcu-tasks: Fix possible boot-time tests failed for the call_rcu_tasks() Uladzislau Rezki (Sony)
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox