From: Boqun Feng <boqun.feng@gmail.com>
To: Frederic Weisbecker <frederic@kernel.org>
Cc: "Paul E . McKenney" <paulmck@kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Valentin Schneider <Valentin.Schneider@arm.com>,
Peter Zijlstra <peterz@infradead.org>,
Uladzislau Rezki <urezki@gmail.com>,
Thomas Gleixner <tglx@linutronix.de>,
Neeraj Upadhyay <neeraju@codeaurora.org>,
Josh Triplett <josh@joshtriplett.org>,
Joel Fernandes <joel@joelfernandes.org>,
rcu@vger.kernel.org
Subject: Re: [PATCH 03/11] rcu/nocb: Invoke rcu_core() at the start of deoffloading
Date: Thu, 14 Oct 2021 00:07:58 +0800 [thread overview]
Message-ID: <YWcEXj2+nqO8kIFS@boqun-archlinux> (raw)
In-Reply-To: <20211011145140.359412-4-frederic@kernel.org>
Hi Frederic,
On Mon, Oct 11, 2021 at 04:51:32PM +0200, Frederic Weisbecker wrote:
> On PREEMPT_RT, if rcu_core() is preempted by the de-offloading process,
> some work, such as callbacks acceleration and invocation, may be left
> unattended due to the volatile checks on the offloaded state.
>
> In the worst case this work is postponed until the next rcu_pending()
> check that can take a jiffy to reach, which can be a problem in case
> of callbacks flooding.
>
> Solve that with invoking rcu_core() early in the de-offloading process.
> This way any work dismissed by an ongoing rcu_core() call fooled by
> a preempting deoffloading process will be caught up by a nearby future
> recall to rcu_core(), this time fully aware of the de-offloading state.
>
> Tested-by: Valentin Schneider <valentin.schneider@arm.com>
> Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> Cc: Valentin Schneider <valentin.schneider@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Josh Triplett <josh@joshtriplett.org>
> Cc: Joel Fernandes <joel@joelfernandes.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
> Cc: Uladzislau Rezki <urezki@gmail.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> ---
> include/linux/rcu_segcblist.h | 14 ++++++++++++++
> kernel/rcu/rcu_segcblist.c | 6 ++----
> kernel/rcu/tree.c | 17 +++++++++++++++++
> kernel/rcu/tree_nocb.h | 9 +++++++++
> 4 files changed, 42 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/rcu_segcblist.h b/include/linux/rcu_segcblist.h
> index 812961b1d064..659d13a7ddaa 100644
> --- a/include/linux/rcu_segcblist.h
> +++ b/include/linux/rcu_segcblist.h
> @@ -136,6 +136,20 @@ struct rcu_cblist {
> * |--------------------------------------------------------------------------|
> * | SEGCBLIST_RCU_CORE | |
> * | SEGCBLIST_LOCKING | |
> + * | SEGCBLIST_OFFLOADED | |
> + * | SEGCBLIST_KTHREAD_CB | |
> + * | SEGCBLIST_KTHREAD_GP |
> + * | |
> + * | CB/GP kthreads handle callbacks holding nocb_lock, local rcu_core() |
> + * | handles callbacks concurrently. Bypass enqueue is enabled. |
> + * | Invoke RCU core so we make sure not to preempt it in the middle with |
> + * | leaving some urgent work unattended within a jiffy. |
> + * ----------------------------------------------------------------------------
> + * |
> + * v
> + * |--------------------------------------------------------------------------|
> + * | SEGCBLIST_RCU_CORE | |
> + * | SEGCBLIST_LOCKING | |
> * | SEGCBLIST_KTHREAD_CB | |
> * | SEGCBLIST_KTHREAD_GP |
> * | |
> diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
> index c07aab6e39ef..81145c3ece25 100644
> --- a/kernel/rcu/rcu_segcblist.c
> +++ b/kernel/rcu/rcu_segcblist.c
> @@ -265,12 +265,10 @@ void rcu_segcblist_disable(struct rcu_segcblist *rsclp)
> */
> void rcu_segcblist_offload(struct rcu_segcblist *rsclp, bool offload)
> {
> - if (offload) {
> + if (offload)
> rcu_segcblist_set_flags(rsclp, SEGCBLIST_LOCKING | SEGCBLIST_OFFLOADED);
> - } else {
> - rcu_segcblist_set_flags(rsclp, SEGCBLIST_RCU_CORE);
> + else
> rcu_segcblist_clear_flags(rsclp, SEGCBLIST_OFFLOADED);
> - }
> }
>
> /*
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index e38028d48648..b236271b9022 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2717,6 +2717,23 @@ static __latent_entropy void rcu_core(void)
> unsigned long flags;
> struct rcu_data *rdp = raw_cpu_ptr(&rcu_data);
> struct rcu_node *rnp = rdp->mynode;
> + /*
> + * On RT rcu_core() can be preempted when IRQs aren't disabled.
> + * Therefore this function can race with concurrent NOCB (de-)offloading
> + * on this CPU and the below condition must be considered volatile.
> + * However if we race with:
> + *
> + * _ Offloading: In the worst case we accelerate or process callbacks
> + * concurrently with NOCB kthreads. We are guaranteed to
> + * call rcu_nocb_lock() if that happens.
If offloading races with rcu_core(), can the following happen?
    <offload work>
    rcu_nocb_rdp_offload():
                                    rcu_core():
                                      ...
                                      rcu_nocb_lock_irqsave(); // not a lock
      raw_spin_lock_irqsave(->nocb_lock);
      rdp_offload_toggle():
        <LOCKING | OFFLOADED set>
                                      if (!rcu_segcblist_restempty(...))
                                        rcu_accelerate_cbs_unlocked(...);
                                      rcu_nocb_unlock_irqrestore();
                                      // ^ a real unlock,
                                      // and will preempt_enable()
      // offload continues with ->nocb_lock not held
If this can happen, it means an unpaired preempt_enable() and an
incorrect unlock. Thoughts? Maybe I'm missing something here?
Regards,
Boqun
> + *
> + * _ Deoffloading: In the worst case we miss callbacks acceleration or
> + * processing. This is fine because the early stage
> + * of deoffloading invokes rcu_core() after setting
> + * SEGCBLIST_RCU_CORE. So we guarantee that we'll process
> + * what could have been dismissed without the need to wait
> + * for the next rcu_pending() check in the next jiffy.
> + */
> const bool do_batch = !rcu_segcblist_completely_offloaded(&rdp->cblist);
>
> if (cpu_is_offline(smp_processor_id()))
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index 71a28f50b40f..3b470113ae38 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -990,6 +990,15 @@ static long rcu_nocb_rdp_deoffload(void *arg)
> * will refuse to put anything into the bypass.
> */
> WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies));
> + /*
> + * Start with invoking rcu_core() early. This way if the current thread
> + * happens to preempt an ongoing call to rcu_core() in the middle,
> + * leaving some work dismissed because rcu_core() still thinks the rdp is
> + * completely offloaded, we are guaranteed a nearby future instance of
> + * rcu_core() to catch up.
> + */
> + rcu_segcblist_set_flags(cblist, SEGCBLIST_RCU_CORE);
> + invoke_rcu_core();
> ret = rdp_offload_toggle(rdp, false, flags);
> swait_event_exclusive(rdp->nocb_state_wq,
> !rcu_segcblist_test_flags(cblist, SEGCBLIST_KTHREAD_CB |
> --
> 2.25.1
>