Date: Thu, 16 Nov 2017 09:52:12 +0800
From: Boqun Feng
To: Daniel Lustig
Cc: Palmer Dabbelt, "will.deacon@arm.com", Arnd Bergmann, Olof Johansson,
	"linux-kernel@vger.kernel.org", "patches@groups.riscv.org",
	"peterz@infradead.org"
Subject: Re: [patches] Re: [PATCH v9 05/12] RISC-V: Atomic and Locking Code
Message-ID: <20171116015212.GF6280@tardis>
References: <20171115180600.GR19071@arm.com>
 <20171116011906.GE6280@tardis>
 <7af820e0b90848dbac4d3120758b1cf6@HQMAIL105.nvidia.com>
In-Reply-To: <7af820e0b90848dbac4d3120758b1cf6@HQMAIL105.nvidia.com>
User-Agent: Mutt/1.9.1 (2017-09-22)
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Nov 16, 2017 at 01:31:21AM +0000, Daniel Lustig wrote:
> > -----Original Message-----
> > From: Boqun Feng [mailto:boqun.feng@gmail.com]
> > Sent: Wednesday, November 15, 2017 5:19 PM
> > To: Daniel Lustig
> > Cc: Palmer Dabbelt; will.deacon@arm.com; Arnd Bergmann; Olof Johansson;
> > linux-kernel@vger.kernel.org; patches@groups.riscv.org; peterz@infradead.org
> > Subject: Re: [patches] Re: [PATCH v9 05/12] RISC-V: Atomic and Locking Code
> >
> > On Wed, Nov 15, 2017 at 
11:59:44PM +0000, Daniel Lustig wrote:
> > > > On Wed, 15 Nov 2017 10:06:01 PST (-0800), will.deacon@arm.com wrote:
> > > >> On Tue, Nov 14, 2017 at 12:30:59PM -0800, Palmer Dabbelt wrote:
> > > >> > On Tue, 24 Oct 2017 07:10:33 PDT (-0700), will.deacon@arm.com wrote:
> > > >> >> On Tue, Sep 26, 2017 at 06:56:31PM -0700, Palmer Dabbelt wrote:
> > > > >
> > > > > Hi Palmer,
> > > > >
> > > >> >> +ATOMIC_OPS(add, add, +, i,      , _relaxed)
> > > >> >> +ATOMIC_OPS(add, add, +, i, .aq  , _acquire)
> > > >> >> +ATOMIC_OPS(add, add, +, i, .rl  , _release)
> > > >> >> +ATOMIC_OPS(add, add, +, i, .aqrl,         )
> > > >> >
> > > >> > Have you checked that .aqrl is equivalent to "ordered", since
> > > >> > there are interpretations where that isn't the case. Specifically:
> > > >> >
> > > >> > // all variables zero at start of time
> > > >> > P0:
> > > >> > WRITE_ONCE(x) = 1;
> > > >> > atomic_add_return(y, 1);
> > > >> > WRITE_ONCE(z) = 1;
> > > >> >
> > > >> > P1:
> > > >> > READ_ONCE(z) // reads 1
> > > >> > smp_rmb();
> > > >> > READ_ONCE(x) // must not read 0
> > > >>
> > > >> I haven't.  We don't quite have a formal memory model specification yet.
> > > >> I've added Daniel Lustig, who is creating that model.  He should
> > > >> have a better idea
> > > >
> > > > Thanks.  You really do need to ensure that, as it's heavily relied upon.
> > >
> > > I know it's the case for our current processors, and I'm pretty sure
> > > it's the case for what's formally specified, but we'll have to wait
> > > for the spec in order to prove it.
> >
> > I think Will is right.  In the current spec, using .aqrl converts an
> > RCpc load or store into an RCsc load or store, but the acquire(-RCsc)
> > annotation still only applies to the load part of the atomic, and the
> > release(-RCsc) annotation applies only to the store part of the atomic.
> >
> > Why is that? 
Picture a machine which implements AMOs using something
> > that looks more like an LR/SC under the covers, or one that uses cache
> > line locking, or anything else along those same lines.  In some such
> > machines, there could be a window between lock/reserve and
> > unlock/store-conditional that other later stores could squeeze into,
> > and that would break Will's example among others.
> >
> > It's likely the same reasoning that causes ARM to use a trailing dmb
> > here, rather than just using ldaxr/stlxr.  Is that right, Will?  I know
> > that's LL/SC and this particular case uses AMOADD, but it's the same
> > principle.  Well, at least according to how we have it in the current
> > memory model draft.
> >
> > Also, RISC-V currently prefers leading fence mappings, so I think the
> > result here, for atomic_add_return() for example, should be this:
> >
> >     fence rw,rw
> >     amoadd.aq ...
> >

Hmm.. if atomic_add_return() is implemented like that, how about the
following case:

{x=0, y=0}

P1:

r1 = atomic_add_return(&x, 1); // r1 == 1 (the new value), x is 1 afterwards
WRITE_ONCE(y, 1);

P2:

r2 = READ_ONCE(y); // r2 == 1
smp_rmb();
r3 = atomic_read(&x); // r3 == 0?

Could this result in r1 == 1 && r2 == 1 && r3 == 0?  Given you said .aq
only affects the load part of the AMO, I don't see anything here
preventing the reordering between the store of y and the store part of
the AMO on P1.

Note: we don't allow (r1 == 1 && r2 == 1 && r3 == 0) in the above case
for the Linux kernel.  Please see Documentation/atomic_t.txt:

"Fully ordered primitives are ordered against everything prior and
everything subsequent. Therefore a fully ordered primitive is like
having an smp_mb() before and an smp_mb() after the primitive."

> Yes, you're right Boqun. 
> Good catch, and sorry for over-optimizing too quickly.
>
> In that case, maybe we should just start out having a fence on both sides for

Actually, given that your architecture is RCsc rather than RCpc, I think
you could follow the approach that ARM uses (i.e. a relaxed load, a release
store, and then a full barrier).  You can see the commit log of 8e86f0b409a4
("arm64: atomics: fix use of acquire + release for full barrier semantics")
for the reasoning:

	https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8e86f0b409a44193f1587e87b69c5dcf8f65be67

> now, and then we'll discuss offline whether we want to change the model's
> behavior here.

Sounds great!  Any estimate of when we can see that (maybe a draft)?

Regards,
Boqun

> Dan