Date: Thu, 16 Nov 2017 11:00:58 +0100
From: Andrea Parri
To: Peter Zijlstra
Cc: Alan Stern, Will Deacon, "Reshetova, Elena",
	"linux-kernel@vger.kernel.org", "gregkh@linuxfoundation.org",
	"keescook@chromium.org", "tglx@linutronix.de", "mingo@redhat.com",
	"ishkamiel@gmail.com", Paul McKenney, boqun.feng@gmail.com,
	dhowells@redhat.com, david@fromorbit.com
Subject: Re: [PATCH] refcount: provide same memory ordering guarantees as in atomic_t
Message-ID: <20171116100058.GA5625@andrea>
References: <20171115180540.GQ19071@arm.com>
 <20171115200307.ns4ja7xjwhunen65@hirez.programming.kicks-ass.net>
 <20171115205823.GA2608@andrea>
 <20171116085804.ixw4x7ssf2ruooqg@hirez.programming.kicks-ass.net>
In-Reply-To: <20171116085804.ixw4x7ssf2ruooqg@hirez.programming.kicks-ass.net>
User-Agent: Mutt/1.5.24 (2015-08-30)

On Thu, Nov 16, 2017 at 09:58:04AM +0100, Peter Zijlstra wrote:
> On Wed, Nov 15, 2017 at 10:01:11PM +0100, Andrea Parri wrote:
> >
> > > And in specific things like:
> > >
> > > 135e8c9250dd5
> > > ecf7d01c229d1
> > >
> > > which use the release of rq->lock paired with the next acquire of
> > > the same rq->lock to match with an smp_rmb().
> >
> > Those cycles are currently forbidden by LKMM _when_ you consider the
> > smp_mb__after_spinlock() from schedule().  See rfi-rel-acq-is-not-mb
> > from my previous email and Alan's remarks about cumul-fence.
>
> I'm not sure I get your point; and you all seem to forget I do not in
> fact speak the ordering lingo.  So I have no idea what rfi-blah-blah
> or cumul-fence mean.

Let me expand on my comment.  Consider the following test:

C T1

{}

P0(int *x, int *y, spinlock_t *s)
{
	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(s);
	WRITE_ONCE(*y, 1);
	spin_unlock(s);
}

P1(int *x, int *y)
{
	int r0;
	int r1;

	r0 = READ_ONCE(*y);
	smp_rmb();
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)

According to LKMM, the store to x happens before the store to y, but
there is no guarantee that the former store propagates (to P1) before
the latter, which is what we would need in order to forbid that state.
As a result, the state in the "exists" clause is _allowed_ by LKMM.
The LKMM encodes happens-before (or execution) ordering with a relation
named "hb", while it encodes "propagation ordering" with "cumul-fence".

  Andrea

> I know rel-acq isn't smp_mb() and I don't think any of the above
> patches need it to be.  They just need it to be a local ordering, no?
>
> Even without smp_mb__after_spinlock() we get that:
>
>	spin_lock(&x)
>	x = 1
>	spin_unlock(&x)
>	spin_lock(&x)
>	y = 1
>	spin_unlock(&x)
>
> guarantees that x happens-before y, right?
>
> And that should be sufficient to then order something else against,
> like for example:
>
>	r2 = y
>	smp_rmb()
>	r1 = x
>
> no?
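
A variant of the test above may help illustrate the earlier remark
about smp_mb__after_spinlock(): adding that fence after the second
spin_lock() (the fence schedule() provides) should make the state in
the "exists" clause forbidden by LKMM.  This is only a sketch, with
P0 modified as follows and P1 unchanged:

P0(int *x, int *y, spinlock_t *s)
{
	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(s);
	smp_mb__after_spinlock();
	WRITE_ONCE(*y, 1);
	spin_unlock(s);
}

The full barrier guarantees that the store to x propagates to P1
before the store to y does (it contributes to cumul-fence), so the
pairing with P1's smp_rmb() then forbids r0=1 /\ r1=0.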