From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753936AbdHUTmx (ORCPT );
	Mon, 21 Aug 2017 15:42:53 -0400
Received: from merlin.infradead.org ([205.233.59.134]:42048 "EHLO
	merlin.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753555AbdHUTmv (ORCPT );
	Mon, 21 Aug 2017 15:42:51 -0400
Date: Mon, 21 Aug 2017 21:42:46 +0200
From: Peter Zijlstra 
To: Will Deacon 
Cc: Waiman Long , Ingo Molnar , linux-kernel@vger.kernel.org,
	Pan Xinhui , Boqun Feng , Andrea Parri , Paul McKenney 
Subject: Re: [RESEND PATCH v5] locking/pvqspinlock: Relax cmpxchg's to
 improve performance on some archs
Message-ID: <20170821194246.GA32112@worktop.programming.kicks-ass.net>
References: <20170810161524.2wzocpcxrliy7nt6@hirez.programming.kicks-ass.net>
 <7cb318a8-d5b9-0019-a537-1720fc5222cc@redhat.com>
 <73daa6e6-537e-b0ce-e1e0-7afa75334509@redhat.com>
 <20170811090601.2owslxi4lgv3kond@hirez.programming.kicks-ass.net>
 <20170814120121.GA24249@arm.com>
 <20170814184711.GL6524@worktop.programming.kicks-ass.net>
 <20170815184034.GD10801@arm.com>
 <20170821105508.j3p4zv7mdojpyb7e@hirez.programming.kicks-ass.net>
 <20170821180001.GA22335@arm.com>
 <20170821192550.3dbj3jbgl33v2eeg@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170821192550.3dbj3jbgl33v2eeg@hirez.programming.kicks-ass.net>
User-Agent: Mutt/1.5.22.1 (2013-10-16)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Aug 21, 2017 at 09:25:50PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 21, 2017 at 07:00:02PM +0100, Will Deacon wrote:
> > > No, I meant _from_ the LL load, not _to_ a later load.
> > 
> > Sorry, I'm still not following enough to give you a definitive answer on
> > that. Could you give an example, please?
> > These sequences usually run in
> > a loop, so the conditional branch back (based on the status flag) is where
> > the read-after-read comes in.
> > 
> > Any control dependencies from the loaded data exist regardless of the status
> > flag.
> 
> Basically what Waiman ended up doing, something like:
> 
>	if (cmpxchg_relaxed(&pn->state, vcpu_halted, vcpu_hashed) != vcpu_halted)
>		return;
> 
>	WRITE_ONCE(l->locked, _Q_SLOW_VAL);
> 
> Where the STORE depends on the LL value being 'complete'.
> 
> For any RmW we can only create a control dependency from the LOAD. The
> same could be done for something like:
> 
>	if (atomic_inc_not_zero(&obj->refs))
>		WRITE_ONCE(obj->foo, 1);

Obviously I meant the hypothetical atomic_inc_not_zero_relaxed() here,
otherwise all the implied smp_mb() spoil the game.

> Where we only do the STORE if we acquire the reference. While the
> WRITE_ONCE() will not be ordered against the increment, it is ordered
> against the LL and we know it must not be 0.
> 
> Per the LL/SC loop we'll have observed a !0 value and committed the SC
> (which need not be visible or ordered against any later store) but both
> STORES (SC and the WRITE_ONCE) must be after the ->refs LOAD.