From: Joel Fernandes <joel@joelfernandes.org>
To: Boqun Feng <boqun.feng@gmail.com>
Cc: Frederic Weisbecker <frederic@kernel.org>,
"Paul E . McKenney" <paulmck@kernel.org>,
LKML <linux-kernel@vger.kernel.org>, rcu <rcu@vger.kernel.org>,
Uladzislau Rezki <urezki@gmail.com>,
Neeraj Upadhyay <quic_neeraju@quicinc.com>
Subject: Re: [PATCH 04/10] rcu/nocb: Remove needless full barrier after callback advancing
Date: Sun, 10 Sep 2023 00:09:23 -0400
Message-ID: <20230910040923.GA762577@google.com>
In-Reply-To: <ZPy3-MS7uOJfmJhs@boqun-archlinux>
On Sat, Sep 09, 2023 at 11:22:48AM -0700, Boqun Feng wrote:
> On Sat, Sep 09, 2023 at 04:31:25AM +0000, Joel Fernandes wrote:
> > On Fri, Sep 08, 2023 at 10:35:57PM +0200, Frederic Weisbecker wrote:
> > > A full barrier is issued from nocb_gp_wait() upon callbacks advancing
> > > to order grace-period completion with callback execution.
> > >
> > > However these two events are already ordered by the
> > > smp_mb__after_unlock_lock() barrier within the call to
> > > raw_spin_lock_rcu_node() that is necessary for callbacks advancing to
> > > happen.
> > >
> > > The following litmus test shows the kind of guarantee that this barrier
> > > provides:
> > >
> > > C smp_mb__after_unlock_lock
> > >
> > > {}
> > >
> > > // rcu_gp_cleanup()
> > > P0(spinlock_t *rnp_lock, int *gpnum)
> > > {
> > > // Grace period cleanup increases the gp sequence number
> > > spin_lock(rnp_lock);
> > > WRITE_ONCE(*gpnum, 1);
> > > spin_unlock(rnp_lock);
> > > }
> > >
> > > // nocb_gp_wait()
> > > P1(spinlock_t *rnp_lock, spinlock_t *nocb_lock, int *gpnum, int *cb_ready)
> > > {
> > > int r1;
> > >
> > > // Call rcu_advance_cbs() from nocb_gp_wait()
> > > spin_lock(nocb_lock);
> > > spin_lock(rnp_lock);
> > > smp_mb__after_unlock_lock();
> > > r1 = READ_ONCE(*gpnum);
> > > WRITE_ONCE(*cb_ready, 1);
> > > spin_unlock(rnp_lock);
> > > spin_unlock(nocb_lock);
> > > }
> > >
> > > // nocb_cb_wait()
> > > P2(spinlock_t *nocb_lock, int *cb_ready, int *cb_executed)
> > > {
> > > int r2;
> > >
> > > // rcu_do_batch() -> rcu_segcblist_extract_done_cbs()
> > > spin_lock(nocb_lock);
> > > r2 = READ_ONCE(*cb_ready);
> > > spin_unlock(nocb_lock);
> > >
> > > // Actual callback execution
> > > WRITE_ONCE(*cb_executed, 1);
> >
> > So, related to this, something in the docs caught my attention under "Callback
> > Invocation" [1]
> >
> > <quote>
> > However, if the callback function communicates to other CPUs, for example,
> > doing a wakeup, then it is that function's responsibility to maintain
> > ordering. For example, if the callback function wakes up a task that runs on
> > some other CPU, proper ordering must be in place in both the callback function
> > and the task being awakened. To see why this is important, consider the top
> > half of the grace-period cleanup diagram. The callback might be running on a
> > CPU corresponding to the leftmost leaf rcu_node structure, and awaken a task
> > that is to run on a CPU corresponding to the rightmost leaf rcu_node
> > structure, and the grace-period kernel thread might not yet have reached the
> > rightmost leaf. In this case, the grace period's memory ordering might not
> > yet have reached that CPU, so again the callback function and the awakened
> > task must supply proper ordering.
> > </quote>
> >
> > I believe this text is about the non-nocb case, but let's apply it to the
> > nocb case and see what happens.
> >
> > In the litmus test, the rcu_advance_cbs() happened on P1; however, the callback
> > is executing on P2. That sounds very similar to the non-nocb world described in
> > the text where a callback tries to wake something up on a different CPU and
> > needs to take care of all the ordering.
> >
> > So unless I'm missing something (quite possible), P2 must see the update to
> > gpnum as well. However, per your litmus test, the only thing P2 does is
> > acquire the nocb_lock. I don't see how it is guaranteed to see gpnum == 1.
>
> Because P1 writes cb_ready under nocb_lock, and P2 reads cb_ready under
> nocb_lock as well. So if P2 reads P1's write, then we know the serialized
> order of locking is P1 first (i.e. the spin_lock(nocb_lock) on P2 reads
> from the spin_unlock(nocb_lock) on P1), in other words:
>
> (fact #1)
>
> unlock(nocb_lock) // on P1
> ->rfe
> lock(nocb_lock) // on P2
>
> so if P1 reads P0's write on gpnum
>
> (assumption #1)
>
> W(gpnum)=1 // on P0
> ->rfe
> R(gpnum)=1 // on P1
>
> and we have
>
> (fact #2)
>
> R(gpnum)=1 // on P1
> ->(po; [UL])
> unlock(nocb_lock) // on P1
>
> combine them you get
>
> W(gpnum)=1 // on P0
> ->rfe // assumption #1
> ->(po; [UL]) // fact #2
> ->rfe // fact #1
> lock(nocb_lock) // on P2
> ->([LKR]; po)
> M // any access on P2 after spin_lock(nocb_lock);
>
> so
> W(gpnum)=1 // on P0
> ->rfe ->po-unlock-lock-po
> M // on P2
>
> and po-unlock-lock-po is A-cumul, hence "->rfe ->po-unlock-lock-po" or
> "rfe; po-unlock-lock-po" is a cumul-fence, hence it's a ->prop, which
> means the write of gpnum on P0 propagates to P2 before any memory
> accesses after spin_lock(nocb_lock)?
You and Frederic are right. I confirmed this by running herd7 as well.
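
(For the record, assuming the full test from the patch is saved as
smp_mb__after_unlock_lock.litmus under tools/memory-model/ -- the filename
is mine -- the check is just:

  $ herd7 -conf linux-kernel.cfg smp_mb__after_unlock_lock.litmus

and herd7 reports the exists clause as never satisfied.)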
Also, he adds a ->co between P2 and P3 in the full test, which is why the
smp_mb__after_unlock_lock() helps keep the propagation intact. It's pretty
much the R-pattern extended across 4 CPUs.
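
To make the propagation step concrete, here is a stripped-down sketch of
just that part (my own untested reduction with made-up variable names, so
take it with a grain of salt). Per your "rfe; po-unlock-lock-po is a
cumul-fence" reasoning above, the exists clause below should never be
satisfied, even without an smp_mb__after_unlock_lock():

C rfe-po-unlock-lock-po

{}

// Hypothetical stand-in for rcu_gp_cleanup()
P0(int *gpnum)
{
	WRITE_ONCE(*gpnum, 1);
}

// Hypothetical stand-in for nocb_gp_wait()
P1(int *gpnum, int *cb_ready, spinlock_t *nocb_lock)
{
	int r1;

	r1 = READ_ONCE(*gpnum);
	spin_lock(nocb_lock);
	WRITE_ONCE(*cb_ready, 1);
	spin_unlock(nocb_lock);
}

// Hypothetical stand-in for nocb_cb_wait()
P2(int *gpnum, int *cb_ready, spinlock_t *nocb_lock)
{
	int r2;
	int r3;

	spin_lock(nocb_lock);
	r2 = READ_ONCE(*cb_ready);
	spin_unlock(nocb_lock);
	r3 = READ_ONCE(*gpnum);
}

exists (1:r1=1 /\ 2:r2=1 /\ 2:r3=0)

If r1=1 (your assumption #1) and r2=1 (your fact #1), then P0's write to
gpnum must have propagated to P2 before any of P2's accesses after
spin_lock(nocb_lock), so r3 must be 1.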
We should probably document these in the RCU memory ordering docs.
thanks,
- Joel