From: Boqun Feng <boqun@kernel.org>
To: Zqiang <qiang.zhang@linux.dev>
Cc: Joel Fernandes <joelagnelf@nvidia.com>,
"Paul E. McKenney" <paulmck@kernel.org>,
Kumar Kartikeya Dwivedi <memxor@gmail.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
frederic@kernel.org, neeraj.iitr10@gmail.com, urezki@gmail.com,
boqun.feng@gmail.com, rcu@vger.kernel.org,
Tejun Heo <tj@kernel.org>,
bpf@vger.kernel.org, Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
John Fastabend <john.fastabend@gmail.com>,
Andrea Righi <arighi@nvidia.com>
Subject: Re: [PATCH] rcu: Use an intermediate irq_work to start process_srcu()
Date: Sat, 21 Mar 2026 11:15:08 -0700
Message-ID: <ab7gLPXTTM7m4AJj@tardis.local>
In-Reply-To: <4c23c66f86a2aff8f2d7b759f9dd257b82147a17@linux.dev>
On Sat, Mar 21, 2026 at 04:27:02AM +0000, Zqiang wrote:
> >
> > Since commit c27cea4416a3 ("rcu: Re-implement RCU Tasks Trace in terms
> > of SRCU-fast"), BPF switched to SRCU. However, as BPF instrumentation
> > can happen basically everywhere (including where a scheduler lock is
> > held), call_srcu() now needs to avoid acquiring scheduler locks,
> > because otherwise it could cause a deadlock [1]. Fix this by following
> > what RCU Tasks Trace did previously: use an irq_work to delay the
> > queuing of the work that starts process_srcu().
> >
> > [boqun: Apply Joel's feedback]
> >
> > Reported-by: Andrea Righi <arighi@nvidia.com>
> > Closes: https://lore.kernel.org/all/abjzvz_tL_siV17s@gpd4/
> > Fixes: c27cea4416a3 ("rcu: Re-implement RCU Tasks Trace in terms of SRCU-fast")
> > Link: https://lore.kernel.org/rcu/3c4c5a29-24ea-492d-aeee-e0d9605b4183@nvidia.com/ [1]
> > Suggested-by: Zqiang <qiang.zhang@linux.dev>
> > Signed-off-by: Boqun Feng <boqun@kernel.org>
> > ---
> > @Zqiang, I put your name in as Suggested-by because you proposed the
> > same idea; let me know if you'd rather not have it.
>
> Thanks, Boqun, for adding me to Suggested-by :) .
>
No problem.
> >
> > @Joel, I made two updates (one incorporating your test feedback, the
> > other calling irq_work_sync() when we clean up the srcu_struct);
> > please give it a try.
> >
> >  include/linux/srcutree.h |  1 +
> >  kernel/rcu/srcutree.c    | 29 +++++++++++++++++++++++++++--
> >  2 files changed, 28 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
> > index dfb31d11ff05..be76fa4fc170 100644
> > --- a/include/linux/srcutree.h
> > +++ b/include/linux/srcutree.h
> > @@ -95,6 +95,7 @@ struct srcu_usage {
> >          unsigned long reschedule_jiffies;
> >          unsigned long reschedule_count;
> >          struct delayed_work work;
> > +        struct irq_work irq_work;
> >          struct srcu_struct *srcu_ssp;
> >  };
> >
> > diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> > index 2328827f8775..73aef361a524 100644
> > --- a/kernel/rcu/srcutree.c
> > +++ b/kernel/rcu/srcutree.c
> > @@ -19,6 +19,7 @@
> >  #include <linux/mutex.h>
> >  #include <linux/percpu.h>
> >  #include <linux/preempt.h>
> > +#include <linux/irq_work.h>
> >  #include <linux/rcupdate_wait.h>
> >  #include <linux/sched.h>
> >  #include <linux/smp.h>
> > @@ -75,6 +76,7 @@ static bool __read_mostly srcu_init_done;
> >  static void srcu_invoke_callbacks(struct work_struct *work);
> >  static void srcu_reschedule(struct srcu_struct *ssp, unsigned long delay);
> >  static void process_srcu(struct work_struct *work);
> > +static void srcu_irq_work(struct irq_work *work);
> >  static void srcu_delay_timer(struct timer_list *t);
> >
> > /*
> > @@ -216,6 +218,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static)
> >          mutex_init(&ssp->srcu_sup->srcu_barrier_mutex);
> >          atomic_set(&ssp->srcu_sup->srcu_barrier_cpu_cnt, 0);
> >          INIT_DELAYED_WORK(&ssp->srcu_sup->work, process_srcu);
> > +        init_irq_work(&ssp->srcu_sup->irq_work, srcu_irq_work);
> >          ssp->srcu_sup->sda_is_static = is_static;
> >          if (!is_static) {
> >                  ssp->sda = alloc_percpu(struct srcu_data);
> > @@ -713,6 +716,8 @@ void cleanup_srcu_struct(struct srcu_struct *ssp)
> >                  return; /* Just leak it! */
> >          if (WARN_ON(srcu_readers_active(ssp)))
> >                  return; /* Just leak it! */
> > +        /* Wait for irq_work to finish first, as it may queue new work. */
> > +        irq_work_sync(&sup->irq_work);
> >          flush_delayed_work(&sup->work);
> >          for_each_possible_cpu(cpu) {
> >                  struct srcu_data *sdp = per_cpu_ptr(ssp->sda, cpu);
> > @@ -1118,9 +1123,13 @@ static void srcu_funnel_gp_start(struct srcu_struct *ssp, struct srcu_data *sdp,
>
>
> The following should also be replaced, although under normal
> circumstances we wouldn't get here:
>
>         if (snp == snp_leaf && snp_seq != s) {
>                 srcu_schedule_cbs_sdp(sdp, do_norm ? SRCU_INTERVAL : 0);
>                 return;
>         }
>
Sigh, another mole to whack... This one is less fatal, since we don't
call it with the rcu_node lock held:

        raw_spin_unlock_irqrestore_rcu_node(snp, flags);
        if (snp == snp_leaf && snp_seq != s) {
                srcu_schedule_cbs_sdp(sdp, do_norm ? SRCU_INTERVAL : 0);
                return;
        }
But the operation here is per srcu_data, so we may need a per-srcu_data
irq_work (rough sketch below); a hacky alternative would be to compare
against rcu_tasks_trace_srcu_struct and use only one percpu irq_work,
dedicated to rcu_tasks_trace_srcu_struct.
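
For concreteness, here is a rough and untested sketch of that
per-srcu_data variant; the irq_work field, the srcu_sdp_irq_work()
handler, and the defer_norm flag (to stash do_norm at queue time) are
all made up for illustration:

        /* Hypothetical: give each srcu_data its own irq_work. */
        struct srcu_data {
                ...
                struct irq_work irq_work;  /* Defers srcu_schedule_cbs_sdp(). */
                bool defer_norm;           /* do_norm snapshot from queue time. */
                ...
        };

        static void srcu_sdp_irq_work(struct irq_work *work)
        {
                struct srcu_data *sdp = container_of(work, struct srcu_data, irq_work);

                srcu_schedule_cbs_sdp(sdp, READ_ONCE(sdp->defer_norm) ? SRCU_INTERVAL : 0);
        }

        /* ... and at the call site above: */
        if (snp == snp_leaf && snp_seq != s) {
                WRITE_ONCE(sdp->defer_norm, do_norm);
                irq_work_queue(&sdp->irq_work);
                return;
        }

(cleanup_srcu_struct() would then also need an irq_work_sync() per CPU.)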
Note that if we make the delay always > 0, then we can dodge the pi->lock
(or the pool->lock, as we recently discovered), but we will still take a
timer base lock in call_srcu(). Whether that matters depends on whether
BPF considers it a bug (the same issue exists in v6.19 as well, see [1]).
If [1] is not considered a bug, then I think we can just fix this with an
always-positive delay. Otherwise, bring your mallet; we may have more
moles to whack. ;-)
[1]: https://lore.kernel.org/rcu/20260321170321.32257-1-boqun@kernel.org/
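
For reference, the always-positive-delay variant would be roughly the
following (again a sketch, untested). The current code passes
!!srcu_get_delay(ssp), i.e. 0 or 1, so unconditionally passing 1 means
even expedited grace periods wait one jiffy, but queue_delayed_work()
then always goes through the timer path:

        if (likely(srcu_init_done))
                /*
                 * Always at least one jiffy: dodges the immediate-queue
                 * path and its pool->lock/pi->lock, at the cost of the
                 * timer base lock and slightly delayed expedited GPs.
                 */
                queue_delayed_work(rcu_gp_wq, &sup->work, 1);
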
Regards,
Boqun
> Thanks
> Zqiang
>
>
>
>
> >                  // it isn't. And it does not have to be. After all, it
> >                  // can only be executed during early boot when there is only
> >                  // the one boot CPU running with interrupts still disabled.
> > +                //
> > +                // Use an irq_work here to avoid acquiring the runqueue lock
> > +                // with the srcu rcu_node::lock held. BPF instrumentation
> > +                // could introduce the opposite dependency, hence we need to
> > +                // break the possible locking dependency here.
> >                  if (likely(srcu_init_done))
> > -                        queue_delayed_work(rcu_gp_wq, &sup->work,
> > -                                           !!srcu_get_delay(ssp));
> > +                        irq_work_queue(&sup->irq_work);
> >                  else if (list_empty(&sup->work.work.entry))
> >                          list_add(&sup->work.work.entry, &srcu_boot_list);
> >          }
> > @@ -1979,6 +1988,22 @@ static void process_srcu(struct work_struct *work)
> >          srcu_reschedule(ssp, curdelay);
> >  }
> > 
> > +static void srcu_irq_work(struct irq_work *work)
> > +{
> > +        struct srcu_struct *ssp;
> > +        struct srcu_usage *sup;
> > +        unsigned long delay;
> > +
> > +        sup = container_of(work, struct srcu_usage, irq_work);
> > +        ssp = sup->srcu_ssp;
> > +
> > +        raw_spin_lock_irq_rcu_node(ssp->srcu_sup);
> > +        delay = srcu_get_delay(ssp);
> > +        raw_spin_unlock_irq_rcu_node(ssp->srcu_sup);
> > +
> > +        queue_delayed_work(rcu_gp_wq, &sup->work, !!delay);
> > +}
> > +
> >  void srcutorture_get_gp_data(struct srcu_struct *ssp, int *flags,
> >                               unsigned long *gp_seq)
> >  {
> > --
> > 2.50.1 (Apple Git-155)
> >