Date: Thu, 04 Sep 2014 22:47:24 -0400
From: Peter Hurley
To: James Bottomley, paulmck@linux.vnet.ibm.com
Cc: Jakub Jelinek, One Thousand Gnomes, linux-arch@vger.kernel.org,
    linux-ia64@vger.kernel.org, Mikael Pettersson, Oleg Nesterov,
    linux-kernel@vger.kernel.org, Tony Luck, Paul Mackerras,
    "H. Peter Anvin", linuxppc-dev@lists.ozlabs.org, Miroslav Franc,
    Richard Henderson
Subject: Re: bit fields && data tearing

Hi James,

On 09/04/2014 10:11 PM, James Bottomley wrote:
> On Thu, 2014-09-04 at 17:17 -0700, Paul E. McKenney wrote:
>> +And there are anti-guarantees:
>> +
>> + (*) These guarantees do not apply to bitfields, because compilers often
>> +     generate code to modify these using non-atomic read-modify-write
>> +     sequences.  Do not attempt to use bitfields to synchronize parallel
>> +     algorithms.
>> +
>> + (*) Even in cases where bitfields are protected by locks, all fields
>> +     in a given bitfield must be protected by one lock.  If two fields
>> +     in a given bitfield are protected by different locks, the compiler's
>> +     non-atomic read-modify-write sequences can cause an update to one
>> +     field to corrupt the value of an adjacent field.
>> +
>> + (*) These guarantees apply only to properly aligned and sized scalar
>> +     variables.  "Properly sized" currently means "int" and "long",
>> +     because some CPU families do not support loads and stores of
>> +     other sizes.  ("Some CPU families" is currently believed to
>> +     be only Alpha 21064.  If this is actually the case, a different
>> +     non-guarantee is likely to be formulated.)
>
> This is a bit unclear.  Presumably you're talking about definiteness of
> the outcome (as in what's seen after multiple stores to the same
> variable).

No, the last condition refers to adjacent byte stores from different
cpu contexts (either interrupt or SMP).

> The guarantees are only for natural width on Parisc as well,
> so you would get a mess if you did byte stores to adjacent memory
> locations.
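For reference, the bitfield case that the first two anti-guarantees are
about looks something like the sketch below.  The struct, field and lock
names are invented for illustration; this isn't code from Paul's patch or
from any real driver.

struct irq_status {
	unsigned int tx_busy:1;		/* nominally protected by tx_lock */
	unsigned int rx_busy:1;		/* nominally protected by rx_lock */
};

/* Caller holds tx_lock, but not rx_lock. */
void set_tx_busy(struct irq_status *s)
{
	/*
	 * Both bitfields share one storage unit, so the compiler may
	 * implement this store as load-word / set-bit / store-word.
	 * That non-atomic read-modify-write can silently undo a
	 * concurrent update of ->rx_busy made under rx_lock, which is
	 * what the second anti-guarantee is warning about.
	 */
	s->tx_busy = 1;
}

/* Caller holds rx_lock, but not tx_lock. */
void set_rx_busy(struct irq_status *s)
{
	s->rx_busy = 1;		/* the same racing read-modify-write */
}

Holding only one of the two locks doesn't make either store safe, because
the lock changes nothing about what the compiler emits for the shared
word.  Adjacent char members that are not bitfields are a different case,
though, which is the one the parisc test below looks at.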
For a simple test like:

struct x {
	long a;
	char b;
	char c;
	char d;
	char e;
};

void store_bc(struct x *p)
{
	p->b = 1;
	p->c = 2;
}

on parisc, gcc generates separate byte stores

void store_bc(struct x *p)
{
   0:	34 1c 00 02 	ldi 1,ret0
   4:	0f 5c 12 08 	stb ret0,4(r26)
   8:	34 1c 00 04 	ldi 2,ret0
   c:	e8 40 c0 00 	bv r0(rp)
  10:	0f 5c 12 0a 	stb ret0,5(r26)

which appears to confirm that on parisc adjacent byte data is safe from
corruption by concurrent cpu updates; that is,

   CPU 0                    | CPU 1
                            |
   p->b = 1                 | p->c = 2
                            |

will result in p->b == 1 && p->c == 2 (assume both values were 0 before
the call to store_bc()).

Regards,
Peter Hurley
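As a footnote, one way the same claim could be poked at from user space is
sketched below.  This harness is my own (the loop count and names are
arbitrary, and it is not from this thread); a clean run proves nothing by
itself -- the disassembly above is the real evidence -- but a failing run
would show tearing directly.  Build with something like
"gcc -O2 -pthread test.c".

#include <pthread.h>
#include <stdio.h>
#include <string.h>

struct x { long a; char b; char c; char d; char e; };

static void *store_b(void *arg) { ((struct x *)arg)->b = 1; return NULL; }
static void *store_c(void *arg) { ((struct x *)arg)->c = 2; return NULL; }

int main(void)
{
	struct x p;
	int i;

	for (i = 0; i < 100000; i++) {
		pthread_t t1, t2;

		memset(&p, 0, sizeof(p));

		/* Store to the adjacent char members from two threads. */
		pthread_create(&t1, NULL, store_b, &p);
		pthread_create(&t2, NULL, store_c, &p);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);

		/*
		 * If either store were widened to a non-atomic word
		 * read-modify-write, one of the two values could be lost.
		 */
		if (p.b != 1 || p.c != 2) {
			printf("tearing: b=%d c=%d at iteration %d\n",
			       p.b, p.c, i);
			return 1;
		}
	}
	printf("no tearing observed\n");
	return 0;
}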