From: "Paul E. McKenney" <paulmck@kernel.org>
To: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
kernel-team@meta.com, mingo@kernel.org
Cc: stern@rowland.harvard.edu, parri.andrea@gmail.com,
will@kernel.org, peterz@infradead.org, boqun.feng@gmail.com,
npiggin@gmail.com, dhowells@redhat.com, j.alglave@ucl.ac.uk,
luc.maranget@inria.fr, akiyks@gmail.com,
Jonas Oberhauser <jonas.oberhauser@huaweicloud.com>,
"Paul E . McKenney" <paulmck@kernel.org>
Subject: [PATCH memory-model 2/8] tools/memory-model: Unify UNLOCK+LOCK pairings to po-unlock-lock-po
Date: Mon, 20 Mar 2023 18:02:40 -0700
Message-ID: <20230321010246.50960-2-paulmck@kernel.org>
In-Reply-To: <778147e4-ccab-40cf-b6ef-31abe4e3f6b7@paulmck-laptop>
From: Jonas Oberhauser <jonas.oberhauser@huaweicloud.com>
LKMM uses two relations for talking about UNLOCK+LOCK pairings:
1) po-unlock-lock-po, which handles UNLOCK+LOCK pairings
   on the same CPU or immediate lock handovers on the same
   lock variable (its definition is sketched after this list)
2) po;[UL];(co|po);[LKW];po, which handles UNLOCK+LOCK pairs
literally as described in rcupdate.h#L1002, i.e., even
after a sequence of handovers on the same lock variable.
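For reference, po-unlock-lock-po is defined in tools/memory-model/lock.cat
along the following lines (a sketch; see lock.cat in the tree for the
authoritative definition):

	let po-unlock-lock-po = po ; [UL] ; (po|rf) ; [LKR] ; po

The (po|rf) step is what limits the relation to a later lock acquisition
on the same CPU (po) or to an immediate handover, in which the lock read
(LKR) reads directly from the unlock write (rf).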
The latter relation is used only once, to provide the guarantee
defined in rcupdate.h#L1002 by smp_mb__after_unlock_lock(), which
makes any UNLOCK+LOCK pair followed by the fence behave like a full
barrier.
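As an illustration of that guarantee (a sketch, not part of this patch),
consider the following store-buffering-style test. Without the fence,
LKMM allows both reads to return zero, because an UNLOCK+LOCK pair by
itself does not order P0's write to x before P0's later read of y; with
the fence, that outcome is forbidden:

P0(int *x, int *y, spinlock_t *mylock)
{
	spin_lock(mylock);
	WRITE_ONCE(*x, 1);
	spin_unlock(mylock);
	spin_lock(mylock);
	smp_mb__after_unlock_lock();
	int r0 = READ_ONCE(*y);
	spin_unlock(mylock);
}

P1(int *x, int *y)
{
	WRITE_ONCE(*y, 1);
	smp_mb();
	int r1 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r1=0) (* forbidden with the fence *)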
This patch drops the latter use in favor of using po-unlock-lock-po
everywhere, which unifies the way the model talks about UNLOCK+LOCK
pairings. At first glance this seems to weaken the guarantee given
by LKMM: when considering a long sequence of lock handovers such as
the one below, where P0 hands the lock to P1, which hands it to P2,
which finally executes an smp_mb__after_unlock_lock() fence, the mb
relation currently links any stores in the critical section of P0
to instructions P2 executes after its fence, but no longer does so
after this patch.
P0(int *x, int *y, spinlock_t *mylock)
{
	spin_lock(mylock);
	WRITE_ONCE(*x, 2);
	spin_unlock(mylock);
	WRITE_ONCE(*y, 1);
}

P1(int *y, int *z, spinlock_t *mylock)
{
	int r0 = READ_ONCE(*y); // reads 1
	spin_lock(mylock);
	spin_unlock(mylock);
	WRITE_ONCE(*z, 1);
}

P2(int *z, int *d, spinlock_t *mylock)
{
	int r1 = READ_ONCE(*z); // reads 1
	spin_lock(mylock);
	spin_unlock(mylock);
	smp_mb__after_unlock_lock();
	WRITE_ONCE(*d, 1);
}

P3(int *x, int *d)
{
	WRITE_ONCE(*d, 2);
	smp_mb();
	WRITE_ONCE(*x, 1);
}

exists (1:r0=1 /\ 2:r1=1 /\ x=2 /\ d=2)
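(For those wishing to experiment: once such a test is wrapped into a
complete litmus file, with a name, an init block, and the exists clause,
it can be checked against the model with herd7, along the lines of
"herd7 -conf linux-kernel.cfg test.litmus", where test.litmus is a
placeholder file name; see tools/memory-model/README for setup details.)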
Nevertheless, the ordering guarantee given in rcupdate.h is not
weakened. This is because the unlock operations along the sequence
of handovers are A-cumulative fences: they ensure that any stores
(such as P0's store to x above) that propagate to the CPU performing
the first unlock operation in the sequence must also propagate to
every CPU that performs a subsequent lock operation in the sequence.
Therefore any such stores will also be ordered correctly by the
fence even if only the final handover is considered a full barrier.
Indeed this patch does not affect the behaviors allowed by LKMM at
all. The mb relation is used to define ordering through:
1) mb/.../ppo/hb, where the ordering is subsumed by hb+, in which
   the lock-release, rfe, and unlock-acquire orderings each provide
   hb
2) mb/strong-fence/cumul-fence/prop, where the rfe and A-cumulative
   lock-release orderings simply add more fine-grained cumul-fence
   edges to substitute a single strong-fence edge provided by a long
   lock handover sequence
3) mb/strong-fence/pb and various similar uses in the definition of
   data races, where, as discussed above, any long handover sequence
   can be turned into a sequence of cumul-fence edges that provide
   the same ordering (see the schematic sketch after this list).
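Schematically, and simplifying the actual definitions in
linux-kernel.cat at the time of this patch, the chain from mb into
these relations looks as follows:

	let strong-fence = mb | gp
	let pb = prop ; strong-fence ; hb* ; [Marked]

so wherever a long handover sequence previously supplied a single
strong-fence edge, the same ordering is now recovered from the
cumul-fence edges contributed by the A-cumulative lock-release
operations along the sequence.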
Signed-off-by: Jonas Oberhauser <jonas.oberhauser@huaweicloud.com>
Reviewed-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
tools/memory-model/linux-kernel.cat | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
index 07f884f9b2bf..6e531457bb73 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -37,8 +37,19 @@ let mb = ([M] ; fencerel(Mb) ; [M]) |
 	([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
 	([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
 	([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
-	([M] ; po ; [UL] ; (co | po) ; [LKW] ;
-		fencerel(After-unlock-lock) ; [M])
+	(*
+	 * Note: The po-unlock-lock-po relation only passes the lock to the direct
+	 * successor, perhaps giving the impression that the ordering of the
+	 * smp_mb__after_unlock_lock() fence only affects a single lock handover.
+	 * However, in a longer sequence of lock handovers, the implicit
+	 * A-cumulative release fences of lock-release ensure that any stores that
+	 * propagate to one of the involved CPUs before it hands over the lock to
+	 * the next CPU will also propagate to the final CPU handing over the lock
+	 * to the CPU that executes the fence. Therefore, all those stores are
+	 * also affected by the fence.
+	 *)
+	([M] ; po-unlock-lock-po ;
+		[After-unlock-lock] ; po ; [M])
 let gp = po ; [Sync-rcu | Sync-srcu] ; po?
 
 let strong-fence = mb | gp
--
2.40.0.rc2