Date: Thu, 16 Nov 2017 09:19:06 +0800
From: Boqun Feng
To: Daniel Lustig
Cc: Palmer Dabbelt, will.deacon@arm.com, Arnd Bergmann, Olof Johansson,
	linux-kernel@vger.kernel.org, patches@groups.riscv.org,
	peterz@infradead.org
Subject: Re: [patches] Re: [PATCH v9 05/12] RISC-V: Atomic and Locking Code
Message-ID: <20171116011906.GE6280@tardis>
References: <20171115180600.GR19071@arm.com>

On Wed, Nov 15, 2017 at 11:59:44PM +0000, Daniel Lustig wrote:
> > On Wed, 15 Nov 2017 10:06:01 PST (-0800), will.deacon@arm.com wrote:
> >> On Tue, Nov 14, 2017 at 12:30:59PM -0800, Palmer Dabbelt wrote:
> >> > On Tue, 24 Oct 2017 07:10:33 PDT (-0700), will.deacon@arm.com wrote:
> >> >> On Tue, Sep 26, 2017 at 06:56:31PM -0700, Palmer Dabbelt wrote:
> >> >
> >> > Hi Palmer,
> >> >
> >> >> +ATOMIC_OPS(add, add, +, i,      , _relaxed)
> >> >> +ATOMIC_OPS(add, add, +, i, .aq  , _acquire)
> >> >> +ATOMIC_OPS(add, add, +, i, .rl  , _release)
> >> >> +ATOMIC_OPS(add, add, +, i, .aqrl,         )
> >> >
> >> > Have you checked that .aqrl is equivalent to
> >> > "ordered", since there are interpretations where that isn't the case.
> >> > Specifically:
> >> >
> >> > // all variables zero at start of time
> >> > P0:
> >> > WRITE_ONCE(x, 1);
> >> > atomic_add_return(1, &y);
> >> > WRITE_ONCE(z, 1);
> >> >
> >> > P1:
> >> > READ_ONCE(z) // reads 1
> >> > smp_rmb();
> >> > READ_ONCE(x) // must not read 0
> >>
> >> I haven't.  We don't quite have a formal memory model specification yet.
> >> I've added Daniel Lustig, who is creating that model.  He should have
> >> a better idea.
> > >
> > > Thanks.  You really do need to ensure that, as it's heavily relied upon.
> >
> > I know it's the case for our current processors, and I'm pretty sure it's
> > the case for what's formally specified, but we'll have to wait for the
> > spec in order to prove it.
>
> I think Will is right.  In the current spec, using .aqrl converts an RCpc
> load or store into an RCsc load or store, but the acquire(-RCsc) annotation
> still only applies to the load part of the atomic, and the release(-RCsc)
> annotation applies only to the store part of the atomic.
>
> Why is that?  Picture a machine which implements AMOs using something that
> looks more like an LR/SC under the covers, or one that uses cache line
> locking, or anything else along those same lines.  In some such machines,
> there could be a window between lock/reserve and unlock/store-conditional
> that other later stores could squeeze into, and that would break Will's
> example among others.
>
> It's likely the same reasoning that causes ARM to use a trailing dmb here,
> rather than just using ldaxr/stlxr.  Is that right, Will?  I know that's
> LL/SC and this particular case uses AMOADD, but it's the same principle.
> Well, at least according to how we have it in the current memory model
> draft.
>
> Also, RISC-V currently prefers leading fence mappings, so I think the
> result here, for atomic_add_return() for example, should be this:
>
>     fence rw,rw
>     amoadd.aq ...
>

Hmm.. if atomic_add_return() is implemented like that, how about the
following case:

	{x=0, y=0}

	P1:
	r1 = atomic_add_return(1, &x); // r1 == 1, x will be 1 afterwards
	WRITE_ONCE(y, 1);

	P2:
	r2 = READ_ONCE(y); // r2 == 1
	smp_rmb();
	r3 = atomic_read(&x); // r3 == 0?

Could this result in r1 == 1 && r2 == 1 && r3 == 0?  Given you said .aq
only affects the load part of the AMO, I don't see anything here preventing
the reordering between the store of y and the store part of the AMO on P1.

Note: we don't allow (r1 == 1 && r2 == 1 && r3 == 0) in the above case for
the Linux kernel.  Please see Documentation/atomic_t.txt:

	"Fully ordered primitives are ordered against everything prior and
	everything subsequent. Therefore a fully ordered primitive is like
	having an smp_mb() before and an smp_mb() after the primitive."

Regards,
Boqun

> Note that at this point, I think you could even elide the .rl.  If I'm
> reading it right, it looks like the ARM mapping does this too (well, the
> reverse: ARM elides the "a" in ldaxr due to the trailing dmb making it
> redundant).
>
> Does that seem reasonable to you all?
>
> Dan